
E-E-A-T signals

E-E-A-T was always how Google decided who deserved to rank. In the AI era, it's also how Gemini decides who deserves to be cited. Same playbook, higher stakes.

Proven

Battle-tested in production. Build on it with confidence.

AEO Foundations
No change
AEO Edition — May 2026

Citation·Schema

What It Is

E-E-A-T stands for Experience, Expertise, Authoritativeness, and Trustworthiness. It's the framework Google uses, originally via its Search Quality Rater Guidelines, to evaluate content quality. Experience asks whether the content reflects first-hand experience with the topic. Expertise asks whether the author has credentials or demonstrable knowledge. Authoritativeness asks whether the source is recognised as an authority in its space. Trustworthiness asks whether the content and the site are trustworthy in straightforward ways: secure, accurate, and transparent about sourcing.

Why It Matters

Through 2024, E-E-A-T mattered primarily for Search rankings, especially in YMYL (Your Money or Your Life) categories such as finance, health, and legal. In 2026, the same framework shapes citation: Google's Gemini-powered AI Mode and AI Overviews use E-E-A-T signals to decide which sources to surface in answers. That's the most explicit guidance Google has given about what its AI surfaces value for citation, and the logic carries across other engines too: Claude and Perplexity also weight authoritative, well-attributed sources heavily.

For AEO buyers, this means investing in author bylines with credentials, citations to primary sources, transparent corrections and updates, and visible expertise signals (certifications, awards, organisation affiliations). These are largely the same signals that make a publication trustworthy to humans, which is convenient.
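The byline, credential, and primary-source signals above are typically expressed in structured data as well as on the page. A minimal sketch of Article JSON-LD carrying those signals, generated with Python's standard json module; every name, date, and URL below is an illustrative placeholder, not a real entity:

```python
import json

# Illustrative Article JSON-LD carrying E-E-A-T signals: a credentialed
# author byline, a citation to a primary source, and a dateModified
# field that makes updates and corrections visible.
# All names, dates, and URLs are placeholders.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example article headline",
    "datePublished": "2026-01-15",
    "dateModified": "2026-03-02",  # transparent corrections and updates
    "author": {
        "@type": "Person",
        "name": "Jane Example",
        "jobTitle": "Certified Financial Planner",  # visible credential
    },
    "citation": {
        "@type": "CreativeWork",  # citation to a primary source
        "name": "Example primary-source report",
        "url": "https://example.org/report",
    },
}

# Serialise to a JSON-LD string, ready to embed in a
# <script type="application/ld+json"> tag in the page head.
print(json.dumps(article, indent=2))
```

The point isn't the markup itself but that each field corresponds to a signal a human reader could also verify on the page.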

Key Developments

  • 2026: Google's structured data and Search Central documentation increasingly references E-E-A-T in AI search contexts, not just traditional Search.
  • 2022: Google added the second "E" (Experience) to the framework, formalising first-hand experience as a quality signal.
  • 2014: E-A-T (without the first "E") originally introduced via Google's Search Quality Rater Guidelines.

What to Watch

Watch for Google guidance that explicitly maps E-E-A-T signals to AI Mode citation behaviour; today that link is implicit. Track the evolution of author-level signals: Person schema, sameAs links to LinkedIn or institutional profiles, and visible bylines. As E-E-A-T migrates into AEO, author authority becomes more measurable. Watch other engines for explicit equivalents: Anthropic and OpenAI haven't published frameworks like Google's, but their citation behaviour suggests similar signals carry weight.
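The author-level signals above can be sketched the same way. A hedged example of Person JSON-LD, again built with the standard json module, showing how sameAs ties a byline to external profiles; all names and profile URLs are placeholders:

```python
import json

# Illustrative Person JSON-LD: sameAs links connect the on-site byline
# to external profiles (a LinkedIn page, an institutional page), which
# is one way author-level authority becomes machine-readable.
# Every name and URL below is a placeholder.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",
    "jobTitle": "Certified Financial Planner",
    "affiliation": {
        "@type": "Organization",
        "name": "Example Institute",  # organisation affiliation signal
    },
    "sameAs": [
        "https://www.linkedin.com/in/jane-example",
        "https://example.edu/people/jane-example",
    ],
}

print(json.dumps(person, indent=2))
```

Linking the same Person entity from every article byline is what lets authority compound across a body of work rather than resetting per page.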

Strengths

  • Cross-engine relevance: The signals E-E-A-T captures (author authority, source primacy, transparency) influence citation across all major answer engines, not just Google.
  • Explicit Google guidance: The most documented framework for what makes content cite-worthy in AI surfaces.
  • Aligns with human quality: Optimising for E-E-A-T also makes content better for human readers, so the work isn't wasted.
  • Compounding signal: Once author and entity authority is established, it benefits future content automatically.

Considerations

  • Slow to build: Author authority and site trust take years to compound. Not a short-term AEO play.
  • YMYL bias: E-E-A-T is most decisive in finance, health, and legal. Other categories see weaker effects.
  • Hard to measure directly: No public score for E-E-A-T strength. Practitioners infer it from rank and citation outcomes.
  • Transparency requirements: Some E-E-A-T signals (visible author bios, credentials) require organisational changes that not all teams are willing to make.