Share of Model Benchmarks
What good Share of Model looks like — median, top-quartile, and leader-tier SoM across categories.
Last updated: 2026-03-22
Understanding SoM tiers
AEO Platform analysis suggests that Share of Model scores cluster into distinct tiers that reflect a brand's competitive position in AI search. The "invisible" tier (0-3% SoM) includes brands that rarely appear in AI responses — often because they lack the content signals AI engines need to identify and recommend them.
The "emerging" tier (3-10% SoM) represents brands that appear occasionally but inconsistently. Based on AEO Platform monitoring data, most brands tracked on the platform fall into this tier when they first begin measuring. The "competitive" tier (10-25% SoM) indicates a brand that AI engines regularly recognise and recommend. The "leader" tier (25%+ SoM) represents clear category dominance.
These tiers are not fixed: they vary with category competitiveness. AEO Platform analysis suggests that in a category with 30+ active brands, achieving 15% SoM puts you in the leader tier. In a category with only 5-6 brands, the same 15% might place you only in the emerging tier, since an even split alone would already give each brand roughly 17-20% visibility.
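As an illustration, the four fixed tiers described above can be expressed as a simple classification function. This is a sketch: the tier names and percentage boundaries come from the text, while the function itself is hypothetical, and as noted, real-world tiers should be adjusted for category competitiveness.

```python
def som_tier(som_percent: float) -> str:
    """Map a Share of Model score (0-100%) to the tier names used above.

    Boundaries follow the fixed definitions in the text:
    invisible (0-3%), emerging (3-10%), competitive (10-25%), leader (25%+).
    """
    if som_percent < 3:
        return "invisible"
    if som_percent < 10:
        return "emerging"
    if som_percent < 25:
        return "competitive"
    return "leader"

print(som_tier(15))  # competitive
```

With the fixed boundaries, a 15% SoM lands in the competitive tier; the surrounding discussion explains why a category-aware version would shift these cut-offs up or down.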
SoM by engine: where to focus
Not all engines are created equal for SoM, and the right focus depends on your audience. AEO Platform analysis suggests that ChatGPT currently commands the largest user base for product research queries, making it the highest-priority engine for most brands. However, Perplexity shows the highest citation density — meaning it names more brands per response — which creates opportunities for challenger brands.
Based on AEO Platform monitoring data, Gemini tends to favour brands with strong Google ecosystem presence (Google Business profiles, YouTube content, and Google Reviews). Claude shows a preference for brands with detailed technical documentation and thought leadership content. Copilot draws heavily from Microsoft ecosystem signals.
The strategic implication is that brands should not optimise for SoM in aggregate but should set engine-specific targets based on where their audience searches. AEO Platform analysis suggests that focusing on the 2-3 engines most relevant to your audience yields better results than spreading effort across all engines equally.
Improving SoM: what works
AEO Platform analysis suggests that the most effective SoM improvement strategies share common characteristics: they focus on creating content that AI engines can easily extract and cite, building third-party authority signals, and maintaining consistent brand presence across the information sources AI engines use for training and retrieval.
Based on AEO Platform monitoring data, the three highest-impact actions for SoM improvement are: (1) publishing detailed comparison and alternative content that directly addresses the queries where you want to appear, (2) building and maintaining profiles on major review platforms and industry directories, and (3) ensuring your website is technically accessible to AI crawlers with proper structured data markup.
Setting realistic expectations about the timeline for SoM improvement is important. AEO Platform analysis suggests that brands implementing a comprehensive AEO programme typically see initial SoM movement within 2-3 months, with meaningful improvement by 6 months. However, SoM gains are not linear: they often come in steps as AI models are retrained on new content.
SoM benchmarks and competitive strategy
Understanding your SoM relative to competitors is more valuable than tracking your absolute number. AEO Platform analysis suggests that the competitive SoM landscape is effectively zero-sum within each response: AI engines name only a limited slate of brands, so when one brand gains a slot, another typically loses one. This means that monitoring competitor SoM is essential for understanding both threats and opportunities.
Based on AEO Platform monitoring data, the most successful AEO programmes track SoM for at least 3-5 named competitors alongside their own brand. This competitive tracking reveals which brands are gaining or losing ground and helps identify the content strategies driving those shifts.
AEO Platform analysis suggests that brands entering a new category or launching a competitive displacement campaign should target the "competitive" tier (10-25% SoM) as an initial milestone. Once in this tier, the brand is consistently appearing in AI responses and can begin optimising for position within responses rather than just presence.
How this research was conducted
SoM benchmarks are calculated from AEO Platform monitoring data across all tracked brands and categories. Query banks are executed across major AI engines at regular intervals, and brand mentions in each response are recorded and attributed. SoM is calculated as (responses mentioning brand / total responses) per category per engine.
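The calculation described above can be sketched in a few lines. This is a minimal illustration of the stated formula, SoM = responses mentioning the brand / total responses, grouped per category per engine; the record shape and field names are assumptions for the example, not the platform's actual schema.

```python
from collections import defaultdict

def share_of_model(responses, brand):
    """Compute SoM per (category, engine) from a list of response records.

    Each record is assumed to carry the category, the engine that produced
    the response, and the set of brands mentioned in it.
    """
    totals = defaultdict(int)  # responses per (category, engine)
    hits = defaultdict(int)    # responses mentioning the brand
    for r in responses:
        key = (r["category"], r["engine"])
        totals[key] += 1
        if brand in r["brands_mentioned"]:
            hits[key] += 1
    return {key: hits[key] / totals[key] for key in totals}

# Hypothetical monitoring records for illustration.
responses = [
    {"category": "crm", "engine": "chatgpt", "brands_mentioned": {"Acme", "Beta"}},
    {"category": "crm", "engine": "chatgpt", "brands_mentioned": {"Beta"}},
    {"category": "crm", "engine": "perplexity", "brands_mentioned": {"Acme"}},
]
print(share_of_model(responses, "Acme"))
# {('crm', 'chatgpt'): 0.5, ('crm', 'perplexity'): 1.0}
```

Because a single response can name several brands, per-response SoM scores across brands can sum to more than 100%; it is the limited number of mention slots per response that makes the competition between brands approximately zero-sum.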
Benchmark tiers, medians, and percentiles are derived from the distribution of SoM scores across all active tracking configurations. Figures are updated quarterly and represent platform estimates based on the brands and categories monitored through AEO Platform.
Explore more research
AI Visibility Benchmarks by Industry
Share of Model, citation rates, and visibility scores benchmarked across 9 industry verticals.
State of AEO Adoption 2026
How brands are approaching AI visibility — adoption rates, maturity levels, and investment trends.
AI Citation Trends
How AI engines cite sources, what gets cited, and what it means for brand visibility strategy.
Start with the pages and proof that AI can actually use
Run the free audit to see what blocks AI from citing your site. Use the trial when you need ongoing monitoring, attribution, prompt discovery, and team workflows after the first fixes are live.