Share of voice
Share of voice tells you whether AI mentions you and how often. It doesn't tell you why it does or doesn't, so it's a starting point for measurement, not the whole story.
Strong signal and real results. Worth committing a pilot to.
What It Is
Share of voice (SOV) is the percent of a defined set of brand-relevant queries where your business gets mentioned across major AI engines. The query set typically covers your category, your competitors, and the specific products or services you sell. Measurement runs the same prompts repeatedly across ChatGPT, Gemini, Claude, Perplexity, and Microsoft Copilot, then aggregates mentions into a per-engine SOV percentage and a cross-engine roll-up.
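The mechanics above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual pipeline: it assumes a hypothetical run log where each record is one prompt sent to one engine, flagged for whether the brand was mentioned in the answer.

```python
from collections import defaultdict

# Hypothetical run log: one record per prompt per engine, with a flag
# for whether our brand was mentioned in the generated answer.
runs = [
    {"engine": "ChatGPT",    "query": "best crm for smbs", "mentioned": True},
    {"engine": "ChatGPT",    "query": "top sales tools",   "mentioned": False},
    {"engine": "Perplexity", "query": "best crm for smbs", "mentioned": True},
    {"engine": "Perplexity", "query": "top sales tools",   "mentioned": True},
]

def share_of_voice(runs):
    """Per-engine SOV (% of runs with a mention) plus a cross-engine roll-up."""
    counts = defaultdict(lambda: [0, 0])  # engine -> [mentions, total runs]
    for r in runs:
        counts[r["engine"]][1] += 1
        counts[r["engine"]][0] += int(r["mentioned"])
    per_engine = {e: 100 * m / t for e, (m, t) in counts.items()}
    overall = 100 * sum(int(r["mentioned"]) for r in runs) / len(runs)
    return per_engine, overall

per_engine, overall = share_of_voice(runs)
print(per_engine)  # {'ChatGPT': 50.0, 'Perplexity': 100.0}
print(overall)     # 75.0
```

In practice the same prompts are re-run many times to smooth out response variance, and the roll-up may weight engines by usage rather than averaging raw runs.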
Why It Matters
SOV is the metric most AEO buyers look at first. It answers the headline question: how often does AI mention us, in our category, across the engines that matter? Movement in SOV over time is the cleanest top-line indicator of whether AEO work is producing results. Comparison against competitors gives strategic context. If a competitor's SOV is rising and yours is flat, you're losing AI visibility in your category, regardless of what your traditional analytics show.
The limitation: SOV is a single number that aggregates many different things. A 30% SOV on "best CRM for SMBs" queries means something very different depending on whether the mentions are accurate and positive or error-ridden and negative. SOV is the right metric for a strategy review. It is not the right metric for diagnosing why the number is what it is.
Key Developments
- 2026: SOV measurement standardised across AEO platforms; cross-engine aggregated SOV becomes the benchmark report.
- 2025: First wave of AEO platforms (Profound, Athena, Goodie, Peec, Otterly, Scrunch) shipped SOV tracking as their headline feature.
- 2024: Practitioner consensus formed around SOV as the AEO equivalent of Search rank tracking.
What to Watch
Watch how AEO platforms construct query sets and aggregate mentions across engines. Methodology differences make raw SOV numbers from different platforms hard to compare directly. Track competitor SOV alongside your own: competitor movement relative to you matters more than your absolute number. Watch for SOV decompositions that go beyond the aggregate to break results down by query type, mention quality, and engine. Single-number SOV is becoming insufficient as the field matures.
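The decomposition idea can be sketched as a simple group-by over a tagged run log. The fields (`query_type`, `quality`) and records here are hypothetical, standing in for whatever enrichment a tracking platform applies:

```python
from collections import defaultdict

# Hypothetical enriched run log: each run is tagged with query type and,
# when a mention occurred, a quality label.
runs = [
    {"engine": "Gemini", "query_type": "category",   "mentioned": True,  "quality": "accurate"},
    {"engine": "Gemini", "query_type": "competitor", "mentioned": False, "quality": None},
    {"engine": "Claude", "query_type": "category",   "mentioned": True,  "quality": "inaccurate"},
    {"engine": "Claude", "query_type": "product",    "mentioned": True,  "quality": "accurate"},
]

def decompose(runs, key):
    """SOV broken down along one dimension (e.g. 'engine' or 'query_type')."""
    buckets = defaultdict(lambda: [0, 0])  # value -> [mentions, total runs]
    for r in runs:
        buckets[r[key]][1] += 1
        buckets[r[key]][0] += int(r["mentioned"])
    return {k: 100 * m / t for k, (m, t) in buckets.items()}

print(decompose(runs, "query_type"))
# {'category': 100.0, 'competitor': 0.0, 'product': 100.0}
```

A breakdown like this shows where an aggregate number hides weakness: a healthy overall SOV can coexist with zero visibility on competitor-comparison queries.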
Strengths
- Headline AEO metric: The cleanest single number for tracking whether AEO work is producing results.
- Competitor benchmarking: SOV comparisons reveal whether you're winning or losing AI visibility in your category.
- Cross-engine roll-up: Most AEO platforms track SOV across all major engines, so you don't need separate per-engine measurement.
- Trends easily: SOV is well-suited to time-series tracking, which makes "did our work move the number" cleanly answerable.
Considerations
- Coarse on its own: SOV doesn't distinguish quality of mention, accuracy, or sentiment. A high number can mask real problems.
- Methodology variance: Different AEO platforms build SOV differently, so absolute numbers don't compare across vendors.
- Query set bias: SOV is only as good as the queries you're measuring against. Bad query sets produce misleading SOV.
- Doesn't explain causality: SOV moves but doesn't tell you why. Diagnosis requires citation tracking and content-level analysis.
Articles
Industry index showing aggregated SOV trends across the largest cited domains.
Methodology context for cross-engine citation and SOV measurement.
Comparison of 11 AEO platforms tested for SOV measurement.
Aggregated citation data underpinning category-level SOV benchmarking.