Competitor share-of-model analysis
The most strategically important measurement exercise in GEO. Your real competitors aren't necessarily the ones you'd name in a strategy meeting — they're the brands consistently appearing in the same AI response with you.
Why share-of-model is the most strategic metric
Mention rate tells you whether AI engines know you exist. Share of model tells you where you stand against the specific competitors AI engines pit you against. The distinction matters because most strategic GEO decisions — which competitor to study, which content to produce, which positioning to defend — are downstream of one question: in the responses where my brand appears, who else appears alongside me, and in what order?
The 2025 AI Visibility Report observed that AI engines often produce stable category sets: the same 5-8 brands cited consistently across many variants of a category prompt. Those consistently co-cited brands, not the ones you'd list in a planning document, are your real competitors for AI citation, and knowing who they are reshapes how you spend GEO investment.
This lesson covers how to run a share-of-model analysis, how to interpret the results, and how to convert the analysis into specific competitive moves.
The analysis methodology
Five steps, executable over a single weekend for a single brand:
Step 1: lock the prompt set
Use the same 8-15 prompts you use for mention rate tracking. Consistency matters because share of model is a comparative metric — the same prompts must run for your brand and every competitor. Mid-analysis prompt changes invalidate the comparison.
Two prompt categories matter most for share-of-model work:
- Category leadership prompts — "Who are the top X companies in Y category?" These produce ranked lists where position matters.
- Alternatives prompts — "What are the best alternatives to [competitor]?" These surface adjacent competitive dynamics.
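One way to enforce the "locked prompt set" discipline is to keep the prompts in a small versioned config rather than ad-hoc spreadsheets. A minimal sketch, with placeholder category and competitor names (everything below is illustrative, not from the source):

```python
# Hypothetical locked prompt set. Category names and "ExampleCompetitor"
# are placeholders -- substitute your own tracked prompts.
PROMPT_SET = {
    "category_leadership": [
        "Who are the top project management tools for small teams?",
        "What are the best project management platforms right now?",
    ],
    "alternatives": [
        "What are the best alternatives to ExampleCompetitor?",
    ],
}

def all_prompts(prompt_set):
    """Flatten the prompt set into one ordered list for a measurement run."""
    return [p for prompts in prompt_set.values() for p in prompts]

print(len(all_prompts(PROMPT_SET)))  # 3 prompts in this toy set
```

Checking the file into version control gives you an audit trail proving the same prompts ran quarter over quarter.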
Step 2: run each prompt multiple times
Single runs miss the consistency pattern. Run each prompt 10-20 times. The brands that appear in 80%+ of runs are the AI's stable category set. The brands that appear sporadically are noise — they're being mentioned occasionally but aren't part of the AI's mental model of the category.
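Separating the stable set from the noise is a simple frequency filter. A sketch, assuming each run has been reduced to the set of brands that appeared in that response (brand names are placeholders):

```python
from collections import Counter

def stable_category_set(runs, threshold=0.8):
    """Given one prompt's repeated runs -- each a set of brands that appeared
    in that response -- return the brands present in >= `threshold` of runs."""
    counts = Counter(brand for run in runs for brand in set(run))
    return {b for b, c in counts.items() if c / len(runs) >= threshold}

# Example: 10 runs of one category prompt (placeholder brands).
runs = [{"Acme", "Beta", "Gamma"}] * 8 + [{"Acme", "Delta"}] * 2
print(stable_category_set(runs))  # Acme 10/10, Beta/Gamma 8/10; Delta is noise
```

Brands below the threshold are worth logging but shouldn't drive strategy until they recur across quarters.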
Step 3: position-weight the appearances
Not all mentions are equal. A brand mentioned first carries more weight than a brand mentioned fifth. A brand named with reasoning carries more weight than a brand named in a list.
A workable scoring system:
- First mention with substantive reasoning: 5 points
- First mention in a list: 4 points
- Second mention with reasoning: 3 points
- Mid-list mention: 2 points
- End-of-list or passing mention: 1 point
- Not mentioned: 0 points
Sum scores across all runs of all prompts. Each brand's total is their position-weighted share of model.
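The scoring system above reduces to a weight table and a sum. A minimal sketch, where each appearance has already been labeled with its mention type (labels and brands are placeholders):

```python
# Point values mirroring the scoring system above.
WEIGHTS = {
    "first_with_reasoning": 5,
    "first_in_list": 4,
    "second_with_reasoning": 3,
    "mid_list": 2,
    "passing": 1,   # end-of-list or passing mention
    "absent": 0,
}

def position_weighted_totals(observations):
    """observations: (brand, mention_type) pairs collected across every run
    of every prompt. Returns each brand's summed position-weighted score."""
    totals = {}
    for brand, mention_type in observations:
        totals[brand] = totals.get(brand, 0) + WEIGHTS[mention_type]
    return totals

obs = [("Acme", "first_with_reasoning"), ("Beta", "mid_list"),
       ("Acme", "first_in_list"), ("Beta", "passing")]
print(position_weighted_totals(obs))  # {'Acme': 9, 'Beta': 3}
```

Labeling mention types is the manual part of the exercise; the arithmetic is trivial once the labels exist.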
Step 4: calculate share percentage
Sum the total scores across all brands. Each brand's percentage is their score divided by the total. Your brand's percentage is your share of model. The top brand's percentage tells you the gap to close. The distribution across brands tells you whether you're in a concentrated (one dominant player) or fragmented (multiple comparable players) competitive landscape.
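The share calculation itself, continuing from the position-weighted totals (numbers below are illustrative):

```python
def share_of_model(totals):
    """Convert position-weighted totals into each brand's share percentage."""
    grand_total = sum(totals.values())
    return {brand: round(100 * score / grand_total, 1)
            for brand, score in totals.items()}

totals = {"Acme": 120, "Beta": 60, "Gamma": 20}
shares = share_of_model(totals)
print(shares)  # {'Acme': 60.0, 'Beta': 30.0, 'Gamma': 10.0}
```

In this toy distribution, Acme above 50% would read as a "dominant leader" landscape; three brands near parity would read as a fragmented one.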
Step 5: per-engine breakdown
Run the analysis separately for each AI engine. Share-of-model patterns differ sharply across ChatGPT, Claude, Perplexity, and Google AI Overviews. A competitor dominating on ChatGPT but absent from Perplexity has a specific weakness you can exploit; same competitor at parity across all engines is a tougher position.
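The per-engine view is the same aggregation keyed by engine, which makes cross-engine gaps visible at a glance. A sketch over (engine, brand, score) records (all values are placeholders):

```python
from collections import defaultdict

def per_engine_shares(records):
    """records: (engine, brand, score) triples from separate per-engine runs.
    Returns {engine: {brand: share %}} so gaps across engines stand out."""
    totals = defaultdict(lambda: defaultdict(float))
    for engine, brand, score in records:
        totals[engine][brand] += score
    return {engine: {b: round(100 * s / sum(brands.values()), 1)
                     for b, s in brands.items()}
            for engine, brands in totals.items()}

records = [("ChatGPT", "Acme", 40), ("ChatGPT", "Beta", 10),
           ("Perplexity", "Beta", 25)]  # Acme absent on Perplexity: a gap to probe
print(per_engine_shares(records))
```

A brand with 80% share on one engine and zero on another is exactly the exploitable asymmetry this step is designed to surface.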
Interpreting the patterns
Five competitive landscape types emerge from share-of-model analysis, each requiring different strategy:
Pattern 1: dominant leader (one brand >50%)
One competitor commands more than half of share of model. The category has settled in AI's mental model with a clear winner. Direct frontal challenge is extremely expensive — the leader's compounding entity signals make displacement difficult.
Better strategy: niche dominance. Find a specific context (industry, company size, geography, use case) where you can win 50%+ share of model and dominate that subcategory while the leader spreads across the whole.
Pattern 2: tight cluster (top 5 each at 15-25%)
No clear leader; 5 competitors each holding similar share. The category is fragmented in AI's mental model. This is the most common pattern for emerging categories.
Strategy: aggressive citation infrastructure. The category is up for grabs. Brands that compound 4-8 citation surfaces (Wikipedia, Reddit, aggregators, editorial, plus original research) over 12-18 months tend to pull away from the cluster.
Pattern 3: you-vs-one (you and one competitor each >30%)
Two-horse race. AI engines consistently pair you with one specific competitor. The category is consolidating into a duopoly.
Strategy: direct comparison content. Comparison tables, "vs" pages, and differentiation content become the highest-leverage investment. The buyer is increasingly making a binary choice and your content needs to win that specific choice repeatedly.
Pattern 4: you missing from a stable cluster
The category has a stable top 5-8 brands and you're not one of them. This is the most painful diagnosis, but also the most actionable.
Strategy: foundational off-page work. The brands appearing without you have built citation surfaces you haven't. Audit each brand's Wikipedia, Reddit, aggregator, and editorial presence. Replicate the lowest-hanging path each one took, in order of feasibility.
Pattern 5: per-engine fragmentation
You're a top-5 brand on ChatGPT but absent from Perplexity, or vice versa. The competitive set differs across engines.
Strategy: identify which signals power the engine where you're absent. ChatGPT favors brand authority — if you're absent there, off-page editorial work is the priority. Perplexity favors fact density and recency — if you're absent there, more frequently-updated content with embedded statistics is the priority.
The competitor deep-dive
Once you've identified your real competitors via share-of-model, run a structured deep-dive on each. For each competitor, document:
- Wikipedia/Wikidata presence. Are they there? How substantial is the entry? What sources does it cite?
- Reddit citation surface. How many threads mention them? Are the mentions positive, balanced, or negative?
- Aggregator standing. What do their G2/Capterra/Clutch profiles look like? Review count, rating, badges.
- Editorial coverage. Search "[Competitor name]" + "TechCrunch" / "Forbes" / category trade press. How many substantive articles?
- Content portfolio. Spot-check their top pages — are they running the question-H2 + answer-block pattern? Do they have schema? Original research?
- Founder presence. Is the founder publicly visible? Quoted in press? Speaking at conferences?
The deep-dive produces a feature-by-feature comparison of citation infrastructure. The gaps you find define your investment priorities for the next quarter.
Identifying defensible positioning
Share-of-model analysis also surfaces positioning you can defend. Look for prompts where:
- You appear consistently at high position
- The reasoning AI engines give for your inclusion is differentiated from competitors
- The reasoning aligns with your actual positioning
Those are your defensible bridges. The content and citation work that powers them becomes the work you must continue, not optimize away. A common mistake is reallocating investment from working positioning to new positioning before the existing position is solid.
Running the analysis on a quarterly cadence
Share-of-model analysis is expensive enough to run that quarterly is the right cadence for most practitioners. Monthly is too frequent (movement is incremental); annually is too infrequent (you miss competitor moves).
The quarterly schedule:
- Week 1 of quarter: Run analysis. 8-15 prompts × 4-6 engines × 10-20 runs = significant API/manual effort, but bounded.
- Week 2: Competitor deep-dive on top 3 competitors revealed by analysis.
- Week 3: Synthesis. What changed since last quarter? What positioning is defensible? What gaps need closing?
- Week 4: Translate findings into next quarter's investment priorities.
If you're running this for clients
The share-of-model report is one of the most valuable deliverables a GEO operator can produce. It directly answers the question every client asks: "what should we focus on next?"
Package the analysis as a quarterly strategic deliverable, separate from the routine weekly mention-rate reports. Charge separately if your contracts allow. A done-for-you share-of-model report with competitor deep-dives and next-quarter recommendations is comparable in value to a strategic consulting deliverable, not a routine analytics report.
Implementation: running your first analysis this month
- Week 1. Confirm your prompt set. Run baseline measurements for all 8-15 prompts × all engines × 10 runs each.
- Week 2. Score each appearance using the position-weighted system. Calculate share of model per engine for top 10 brands.
- Week 3. Identify your competitive landscape pattern. Run deep-dive on top 3 competitors revealed.
- Week 4. Document defensible positioning. Document gaps. Define next quarter's three investment priorities.
What comes next
Module 4 is complete. You can now measure GEO outcomes correctly, build the tooling stack to sustain measurement, and run strategic competitor analysis quarterly.
Module 5 shifts to the operator business model — pricing GEO services, scope of work, client onboarding. If you're applying GEO to your own brand, much of Module 5 is optional. If you're providing GEO as a service, it's the foundation for building a sustainable practice.