Share of Model Benchmarks

What good Share of Model looks like — median, top-quartile, and leader-tier SoM across categories.

Last updated: 2026-03-22

Quick answer
Share of Model is the defining metric of AI visibility, but without benchmarks it is just a number. AEO Platform analysis suggests that "good" SoM varies enormously depending on category competitiveness, engine, and query type — a 15% SoM that represents dominance in one category might signal underperformance in another.
Key findings

Data at a glance

~8%
Median SoM across categories
AEO Platform analysis suggests that the median Share of Model across all tracked categories and engines is approximately 8%, reflecting the long tail of brands competing for AI visibility.
~24%
Top-quartile SoM threshold
Based on AEO Platform monitoring data, brands in the top 25% of their category typically achieve at least 24% Share of Model on their strongest engine.
~40%+
Category leader SoM
AEO Platform analysis suggests that clear category leaders — brands that dominate their niche — often achieve 40% or higher SoM on favourable engines.
~3.2x
Cross-engine SoM variance
Based on AEO Platform monitoring data, the average brand sees roughly 3.2x variance between their highest and lowest SoM across engines.
~6 mo.
Time to meaningful SoM gains
AEO Platform analysis suggests that brands implementing a structured AEO programme typically see meaningful SoM improvement within approximately 6 months.
~2.1x
SoM advantage from reviews
Based on AEO Platform monitoring data, brands with strong third-party review presence achieve approximately 2.1x higher SoM than those without.
~15 pts
SoM volatility per quarter
AEO Platform analysis suggests that SoM can fluctuate by approximately 15 percentage points per quarter due to model updates and competitive content changes.
~5
Avg. brands mentioned per response
Based on AEO Platform monitoring data, AI engines typically mention approximately 5 brands in a competitive recommendation response, making the visibility window narrow.

Understanding SoM tiers

AEO Platform analysis suggests that Share of Model scores cluster into distinct tiers that reflect a brand's competitive position in AI search. The "invisible" tier (0-3% SoM) includes brands that rarely appear in AI responses — often because they lack the content signals AI engines need to identify and recommend them.

The "emerging" tier (3-10% SoM) represents brands that appear occasionally but inconsistently. Based on AEO Platform monitoring data, most brands fall into this tier when they first begin measuring. The "competitive" tier (10-25% SoM) indicates a brand that AI engines regularly recognise and recommend. The "leader" tier (25%+ SoM) represents clear category dominance.

These tiers are not fixed — they vary by category competitiveness. AEO Platform analysis suggests that in a category with 30+ active brands, achieving 15% SoM puts you in the leader tier. In a category with only 5-6 brands, 15% SoM might place you in the emerging tier.
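The baseline tier boundaries above can be sketched as a simple classifier. This is an illustrative sketch, not platform code: the function name is invented, and the thresholds (0-3%, 3-10%, 10-25%, 25%+) are the article's baseline cut-offs, which shift with category competitiveness.

```python
def som_tier(som_pct: float) -> str:
    """Map a Share of Model percentage to the baseline tiers described above.

    Illustrative only: real cut-offs should be adjusted for how many
    brands are active in the category.
    """
    if som_pct < 3:
        return "invisible"    # rarely appears in AI responses
    if som_pct < 10:
        return "emerging"     # appears occasionally but inconsistently
    if som_pct < 25:
        return "competitive"  # regularly recognised and recommended
    return "leader"           # clear category dominance

print(som_tier(15.0))  # prints "competitive" at the baseline thresholds
```

In a crowded category the same 15% input might deserve the "leader" label, which is why the thresholds should be treated as parameters rather than constants.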

SoM by engine: where to focus

Not all engines are created equal for SoM, and the right focus depends on your audience. AEO Platform analysis suggests that ChatGPT currently commands the largest user base for product research queries, making it the highest-priority engine for most brands. However, Perplexity shows the highest citation density — meaning it names more brands per response — which creates opportunities for challenger brands.

Based on AEO Platform monitoring data, Gemini tends to favour brands with strong Google ecosystem presence (Google Business profiles, YouTube content, and Google Reviews). Claude shows a preference for brands with detailed technical documentation and thought leadership content. Copilot draws heavily from Microsoft ecosystem signals.

The strategic implication is that brands should not optimise for SoM in aggregate but should set engine-specific targets based on where their audience searches. AEO Platform analysis suggests that focusing on the 2-3 engines most relevant to your audience yields better results than spreading effort across all engines equally.

Improving SoM: what works

AEO Platform analysis suggests that the most effective SoM improvement strategies share common characteristics: they focus on creating content that AI engines can easily extract and cite, building third-party authority signals, and maintaining consistent brand presence across the information sources AI engines use for training and retrieval.

Based on AEO Platform monitoring data, the three highest-impact actions for SoM improvement are: (1) publishing detailed comparison and alternative content that directly addresses the queries where you want to appear, (2) building and maintaining profiles on major review platforms and industry directories, and (3) ensuring your website is technically accessible to AI crawlers with proper structured data markup.

The timeline for SoM improvement is important to set expectations. AEO Platform analysis suggests that brands implementing a comprehensive AEO programme typically see initial SoM movement within 2-3 months, with meaningful improvement by 6 months. However, SoM gains are not linear — they often come in steps as AI models are retrained on new content.

SoM benchmarks and competitive strategy

Understanding your SoM relative to competitors is more valuable than tracking your absolute number. AEO Platform analysis suggests that the competitive SoM landscape is a zero-sum game within each response — when one brand gains visibility, others lose it. This means that monitoring competitor SoM is essential for understanding both threats and opportunities.

Based on AEO Platform monitoring data, the most successful AEO programmes track SoM for at least 3-5 named competitors alongside their own brand. This competitive tracking reveals which brands are gaining or losing ground and helps identify the content strategies driving those shifts.

AEO Platform analysis suggests that brands entering a new category or launching a competitive displacement campaign should target the "competitive" tier (10-25% SoM) as an initial milestone. Once in this tier, the brand is consistently appearing in AI responses and can begin optimising for position within responses rather than just presence.

Methodology

How this research was conducted

SoM benchmarks are calculated from AEO Platform monitoring data across all tracked brands and categories. Query banks are executed across major AI engines at regular intervals, and brand mentions in each response are recorded and attributed. SoM is calculated as (responses mentioning brand / total responses) per category per engine.
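The calculation above can be sketched in a few lines. The record shape `(category, engine, mentioned_brands)` and the sample data are assumptions for illustration, not the platform's actual schema; the formula itself — responses mentioning the brand divided by total responses, per category per engine — is the one stated in this methodology.

```python
from collections import defaultdict

def share_of_model(responses):
    """Compute SoM per (category, engine, brand) from response records.

    Each record is (category, engine, mentioned_brands) — an assumed,
    illustrative shape. SoM = responses mentioning brand / total
    responses for that category and engine.
    """
    totals = defaultdict(int)    # (category, engine) -> total responses
    mentions = defaultdict(int)  # (category, engine, brand) -> responses with brand
    for category, engine, brands in responses:
        totals[(category, engine)] += 1
        for brand in set(brands):  # count each brand once per response
            mentions[(category, engine, brand)] += 1
    return {key: hits / totals[key[:2]] for key, hits in mentions.items()}

# Hypothetical sample: three responses in one category/engine pair.
demo = [
    ("crm", "chatgpt", ["Acme", "Beta"]),
    ("crm", "chatgpt", ["Beta"]),
    ("crm", "chatgpt", ["Acme", "Beta", "Gamma"]),
]
som = share_of_model(demo)
print(som[("crm", "chatgpt", "Acme")])  # Acme in 2 of 3 responses -> ~0.667
```

Deduplicating brands within a response (`set(brands)`) matters: a brand named twice in one answer still counts as one appearance, which keeps SoM interpretable as "share of responses" rather than "share of mentions".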

Benchmark tiers, medians, and percentiles are derived from the distribution of SoM scores across all active tracking configurations. Figures are updated quarterly and represent platform estimates based on the brands and categories monitored through AEO Platform.

Get started

Start with the pages and proof that AI can actually use

Run the free audit to see what blocks AI from citing your site. Use the trial when you need ongoing monitoring, attribution, prompt discovery, and team workflows after the first fixes are live.