
Interpreting Your Results

How to read and understand your analysis report.

Understanding Your AI Perception Report

Your analysis report covers your overall score, per-provider results, category scores, and recommendations. Here's how to interpret each section effectively.

The Overall Score

Your overall score (0-100) represents your aggregate AI perception across all factors. Use this as a benchmark, but dig into the details for actionable insights.

Score Context
- Compare to your industry average
- Track changes over time
- Don't obsess over small fluctuations
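If you want to reproduce a roll-up like this yourself, a weighted average of the category scores is one simple way to think about it. The sketch below is illustrative only: the category names match the sections later in this report, but the weights are assumptions, not the exact formula the platform uses.

```python
# Illustrative roll-up of category scores into an overall 0-100 score.
# The weights below are assumptions for demonstration, not the product's formula.
CATEGORY_WEIGHTS = {
    "visibility": 0.25,
    "accuracy": 0.25,
    "sentiment": 0.20,
    "authority": 0.15,
    "competitive": 0.15,
}

def overall_score(category_scores: dict[str, float]) -> float:
    """Weighted average of category scores (each 0-100)."""
    total = sum(
        category_scores[name] * weight
        for name, weight in CATEGORY_WEIGHTS.items()
    )
    return round(total, 1)

print(overall_score({
    "visibility": 72, "accuracy": 85, "sentiment": 60,
    "authority": 55, "competitive": 48,
}))  # -> 66.7
```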

Provider Breakdown

Each AI provider may perceive your brand differently:

OpenAI GPT
- Largest user base
- Prioritize if it's your lowest score
- Updates training data periodically

Anthropic Claude
- Growing enterprise adoption
- Often more cautious in recommendations
- Values safety and accuracy

Google Gemini
- Integrated with Google Search
- May reflect search rankings
- Important for discoverability

Perplexity
- Popular for research queries
- Cites sources explicitly
- Good for citation tracking

Category Deep Dives

Visibility Score
- Low: Focus on citation building
- Medium: Expand content presence
- High: Maintain and protect position

Accuracy Score
- Issues: Update official sources immediately
- Moderate: Create authoritative content
- High: Monitor for changes

Sentiment Score
- Negative: Address root causes, build positive proof
- Neutral: Create differentiation
- Positive: Leverage in marketing

Authority Score
- Low: Build E-E-A-T signals
- Medium: Expand expert content
- High: Maintain thought leadership

Competitive Score
- Low: Create comparison content
- Medium: Highlight differentiators
- High: Monitor competitor moves
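If you track these category scores in your own tooling, a small lookup can translate a raw 0-100 score into the guidance bands above. The thresholds (40 and 70) are assumptions chosen for illustration, not the cutoffs the report itself applies.

```python
# Map a 0-100 category score to a guidance band.
# The 40/70 thresholds are illustrative assumptions, not the report's cutoffs.
def score_band(score: float) -> str:
    if score < 40:
        return "low"
    if score < 70:
        return "medium"
    return "high"

# Example guidance lookup for one category, mirroring the list above.
VISIBILITY_GUIDANCE = {
    "low": "Focus on citation building",
    "medium": "Expand content presence",
    "high": "Maintain and protect position",
}

print(VISIBILITY_GUIDANCE[score_band(35)])  # -> Focus on citation building
```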

Recommendations Priority

Recommendations are ranked by:
1. Impact: Potential score improvement
2. Effort: Resources required
3. Urgency: Time sensitivity

Start with high-impact, low-effort items (quick wins).
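If you triage recommendations in your own spreadsheet or script, one simple heuristic is to sort by impact relative to effort, breaking ties on urgency, so quick wins surface first. The field names, 1-5 scales, and example items below are made up for illustration and are not the platform's ranking logic.

```python
# Simple triage heuristic: quick wins (high impact, low effort) float to the top.
# Field names, 1-5 scales, and example recommendations are assumptions.
recommendations = [
    {"title": "Update Wikipedia entry",      "impact": 4, "effort": 2, "urgency": 3},
    {"title": "Publish comparison page",     "impact": 5, "effort": 4, "urgency": 2},
    {"title": "Fix outdated pricing on FAQ", "impact": 3, "effort": 1, "urgency": 5},
]

# Sort by impact-to-effort ratio, breaking ties with urgency.
quick_wins_first = sorted(
    recommendations,
    key=lambda r: (r["impact"] / r["effort"], r["urgency"]),
    reverse=True,
)

for rec in quick_wins_first:
    print(f'{rec["title"]} (impact {rec["impact"]}, effort {rec["effort"]})')
```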

Key Metrics to Track

1. Score trend: Direction matters more than absolute number
2. Provider consistency: Large gaps indicate issues
3. Category balance: Avoid having any critically low areas
4. Recommendation completion: Track implementation progress
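The first two metrics are easy to check yourself if you export scores over time. The sketch below shows one way to do it; the sample scores and the 15-point gap threshold are assumptions for illustration, not values the product prescribes.

```python
# Illustrative checks for score trend and provider consistency.
# Sample data and the 15-point gap threshold are assumptions, not product values.
history = [61, 63, 62, 66, 68]  # overall score across recent analyses, oldest first
provider_scores = {
    "OpenAI GPT": 71,
    "Anthropic Claude": 64,
    "Google Gemini": 58,
    "Perplexity": 69,
}

# Trend: direction over the period matters more than any single reading.
trend = history[-1] - history[0]
print(f"Trend over period: {'+' if trend >= 0 else ''}{trend} points")

# Consistency: a large spread between providers is worth investigating.
gap = max(provider_scores.values()) - min(provider_scores.values())
if gap > 15:
    print(f"Provider gap of {gap} points - investigate the lowest scorer")
else:
    print(f"Provider gap of {gap} points - reasonably consistent")
```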
