Engine-specific GEO tactics across the six major AI engines
Generic GEO advice plateaus around month four. Beyond that, every gain comes from per-engine tuning — what ChatGPT rewards is not what Claude rewards is not what Perplexity rewards.
Why every AI engine cites differently
If you treat GEO as one undifferentiated optimization problem, you'll plateau. Six major AI engines dominate citation flow as of mid-2026 — ChatGPT, Microsoft Copilot, Perplexity, Google AI Overviews and AI Mode, Gemini, and Claude — and each one weights signals differently. Content that gets cited 20 times per month in Perplexity might get cited zero times in Claude for the same query, not because the content is wrong but because Claude's selection model values different things.
This lesson walks through what each engine actually rewards, with concrete tactics per engine. Use it as a reference: when a client asks "why isn't my brand showing up in [engine X]?", come back here, find the engine's section, run the diagnostic.
ChatGPT (and Microsoft Copilot)
Treat ChatGPT and Copilot as one optimization problem. They both run on Bing's index. Approximately 87% of pages ChatGPT cites also rank in Bing's top results for the same query. If you're not in Bing's index, you're effectively invisible to ChatGPT, regardless of how well-optimized your content is. This is the single most under-appreciated fact in 2026 GEO.
What ChatGPT rewards
- Bing top-10 ranking — the foundation. Without it, the other signals don't matter.
- Recent content — ChatGPT search emphasizes freshness. Articles updated within 90 days are heavily favored for time-sensitive queries.
- Structured Q&A format — question-formatted H2s with direct answers in the first 40-60 words below.
- Author credentials and bylines — visible author bios with verifiable expertise increase citation probability ~40%.
- Domain authority — recognizable, well-established sites outrank sparse newcomers even when content quality is comparable.
What ChatGPT punishes
- Pages blocked to `OAI-SearchBot` or `ChatGPT-User` in robots.txt (verify these are allowed, even if you block `GPTBot` for training — see the robots.txt sketch after this list).
- JavaScript-only rendering. If the content isn't in the initial HTML, ChatGPT often can't extract it.
- Stale dateModified — pages last updated >18 months ago get filtered out of time-sensitive queries.
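A minimal robots.txt sketch for the split policy described above — keep OpenAI's search and browsing bots in while opting out of training crawls. The bot names are OpenAI's published user agents; whether to block `GPTBot` is your call, not a requirement:

```
# Allow ChatGPT search citations and live browsing
User-agent: OAI-SearchBot
Allow: /

User-agent: ChatGPT-User
Allow: /

# Optional: opt out of model-training crawls only
User-agent: GPTBot
Disallow: /
```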
The two-step ChatGPT play
- Verify your site in Bing Webmaster Tools (covered in Module 4 Lesson 4). Submit your sitemap and enable IndexNow — a minimal submission sketch follows this list. This is non-negotiable.
- For each target query, find where you rank in Bing. Anything in Bing positions 1-10 is a citation candidate. Anything in positions 11-30 needs traditional Bing SEO work before ChatGPT will surface it.
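If you automate refresh pings, an IndexNow submission is a single POST per the protocol documented at indexnow.org. The sketch below assumes you've already generated a key and hosted it at `https://www.example.com/<key>.txt`; domain, key, and URLs are placeholders:

```python
import requests

# Submit recently updated URLs to IndexNow so Bing (and therefore
# ChatGPT/Copilot) picks up changes quickly.
INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

payload = {
    "host": "www.example.com",
    "key": "your-indexnow-key",
    "keyLocation": "https://www.example.com/your-indexnow-key.txt",
    "urlList": [
        "https://www.example.com/updated-pillar-page",
        "https://www.example.com/refreshed-faq",
    ],
}

resp = requests.post(INDEXNOW_ENDPOINT, json=payload, timeout=10)
resp.raise_for_status()  # 200/202 means the submission was accepted
```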
Microsoft Copilot — the LinkedIn factor
Copilot uses Bing's index plus Microsoft's own data sources. For B2B queries — software comparisons, vendor recommendations, professional services — Copilot weights LinkedIn content heavily. A company with a thin LinkedIn company page and quiet employee profiles gets cited less than one with active company-page posts and employees publishing regularly in its category. If you're B2B and ignoring LinkedIn, you're handing Copilot citations to competitors who aren't.
Perplexity
Perplexity is the most citation-transparent engine. It shows users every source it used, numbered inline. That makes it the easiest engine to audit (you can see exactly which pages of yours get cited) and also the most freshness-obsessed. Perplexity's Sonar model heavily prioritizes recent content over comprehensive content.
What Perplexity rewards
- Recency — content updated within 30 days is 3-4x more likely to be cited than content older than 180 days, even when the older content ranks higher in traditional search.
- FAQ-formatted content — Perplexity actively favors question-answer pairs.
- Inline citations within your content — pages that themselves cite primary sources get cited more often. Perplexity treats well-sourced content as more trustworthy.
- Author E-E-A-T signals — visible author credentials, publication dates, and editorial transparency.
- Topic depth — Perplexity tends to cite 21+ sources per answer (vs ~8 for ChatGPT), so comprehensive pages have more chances to appear.
Tactics that move Perplexity specifically
- Allow `PerplexityBot` in robots.txt explicitly.
- Refresh your top 10 pages on a weekly cadence — even small substantive edits count, with `dateModified` updated in your Article schema (a schema sketch follows this list).
- Structure long-form content with explicit question-answer pairs. Use H3 questions with 80-120 word answer blocks below.
- Add inline source citations using your own primary research, government data, or peer-reviewed studies. These compound — Perplexity sees you as a "source-rich" domain and starts citing you for citation-quality alone.
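A minimal Article schema sketch showing the `dateModified` field to bump on each refresh. Headline, dates, and author are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Placeholder headline for the refreshed page",
  "datePublished": "2026-01-12",
  "dateModified": "2026-06-02",
  "author": {
    "@type": "Person",
    "name": "Jane Placeholder",
    "url": "https://www.example.com/authors/jane"
  }
}
```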
Google AI Overviews and AI Mode
Different rules entirely. Google AI Overviews and AI Mode are, at their core, a Google SEO problem with an AI-extraction layer on top. If you rank well organically on Google, you're eligible. If you don't, no amount of GEO-specific work will help.
What Google AI Overviews rewards
- Top-10 Google ranking — 75% of AI Overview citations come from the top 12 organic results.
- Featured snippet eligibility — content already structured for featured snippets is over-represented in AI Overviews.
- Schema markup, especially FAQPage and HowTo — Google's AI extractor leans on structured data heavily.
- Snippable structure — 40-60 word answer paragraphs that can be lifted into the Overview verbatim.
- E-E-A-T signals — Google's quality framework applies fully here.
The order of operations for AI Overviews
- Win the Google organic ranking first. AI Overview optimization without a top-10 organic position is wasted work.
- Add FAQPage schema to your top-ranking pages (a minimal JSON-LD sketch follows this list).
- Structure key answers as 40-60 word self-contained paragraphs after question-formatted H2s. These are the units AI Overviews lift.
- Monitor your AI Overview appearances via Google Search Console (it now reports AI Mode impressions, though not yet AI Overview citation share specifically).
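The FAQPage structure follows schema.org's Question/Answer types; question and answer text here are placeholders, and the answer should mirror the on-page 40-60 word block verbatim:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is generative engine optimization?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A 40-60 word self-contained answer goes here, matching the visible answer paragraph on the page."
      }
    }
  ]
}
```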
Gemini
Gemini overlaps significantly with AI Overviews — both use Google's index. Strong Google SEO performance translates to Gemini citations almost automatically. Gemini adds one twist: it's the most multimodal engine, actively pulling from YouTube videos, images, and structured data simultaneously. Multimodal optimization (covered in Lesson 3.5) moves Gemini more than any other engine.
Claude
Claude is the most selective. It cites less often than ChatGPT or Perplexity, but when it does, it's citing sources it considers authoritative. Claude's citation accuracy is the highest among major models at 91.2%, which means Claude is less likely to invent or misattribute. The flip side: Claude is harder to break into.
What Claude rewards
- Long-form comprehensive content — Claude prefers a 4,000-word thorough treatment over a 600-word quick answer.
- Logical structure and reasoning chains — content that explicitly walks through the "why" behind claims, not just the "what".
- Multi-source verification within your own content — Claude favors content that cross-references multiple authoritative sources.
- Balanced, non-promotional tone — sales-y content gets filtered out. Educational content gets through.
- Established publication credibility — Claude weights established editorial brands heavily.
The Claude tactical playbook
- For each pillar topic, build one 3,000-5,000 word definitive reference page. Claude prefers depth concentrated on one page over breadth spread across many.
- Cite primary sources inline using neutral attribution language ("according to [source]" rather than "[source] proves").
- Add a methodology section to data-driven pages. Claude rewards transparency about how you arrived at claims.
- Allow `ClaudeBot` and `Claude-User` in robots.txt (different bots, both matter) — see the sketch below.
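The corresponding robots.txt entries, covering both Anthropic bots named above:

```
User-agent: ClaudeBot
Allow: /

User-agent: Claude-User
Allow: /
```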
The BLUF format — what works across all six engines
BLUF stands for "Bottom Line Up Front." It's the military-doctrine writing format that the AI search industry standardized on in 2025-2026. The structure: lead every section with the answer in 40-60 words, then expand. The format works across every engine because every engine's extractor scans the first sentences below a heading for the highest-confidence answer to lift.
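As a sketch, the pattern looks like this — bracketed text is placeholder copy, filled in per topic:

```markdown
## [Question-formatted H2 matching the target query]

[40-60 word direct answer: the claim, one supporting statistic, and the
source — a self-contained paragraph an engine can lift verbatim.]

[Expansion: context, caveats, examples, supporting detail.]
```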
BLUF is what Lesson 2.1 ("the citation content formula: claim + statistic + source") teaches in detail. Treat BLUF as the universal layer. Then add the engine-specific tactics from this lesson on top of it.
The six-engine cheat sheet
Print this. Take it to client meetings. The single table that summarizes everything in this lesson:
| Engine | Primary signal | Top tactic |
|---|---|---|
| ChatGPT | Bing top-10 ranking + freshness | Verify in Bing Webmaster Tools, enable IndexNow |
| Microsoft Copilot | Bing index + LinkedIn (B2B) | Active company LinkedIn + employee posting cadence |
| Perplexity | Recency + inline citations | Weekly content refreshes + source-rich pages |
| Google AI Overviews | Google top-10 + FAQPage schema | Featured snippet optimization + 40-60 word answer blocks |
| Gemini | Google SEO + multimodal content | YouTube videos and structured image data alongside text |
| Claude | Long-form authority + neutral tone | 3,000-5,000 word pillar pages with reasoning chains |
Implementation: this week
- Day 1. Audit your top 5 target queries in each of the six engines manually. Note which engine cites you, which cites competitors, and which surfaces no one.
- Day 2. For each engine that's not citing you, identify the gap from this lesson's tactics. Build a one-page intervention plan per engine.
- Day 3-7. Ship the highest-leverage intervention. Most operators see the biggest gains from Bing Webmaster Tools verification (ChatGPT + Copilot) and weekly Perplexity-targeted content refreshes — both covered in detail in Module 4.
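If you want the Day 1 audit in a consistent format, a throwaway sketch like this writes the engine-by-query grid to a CSV you fill in by hand after each manual check. Query strings and the output filename are placeholders:

```python
import csv

ENGINES = ["ChatGPT", "Copilot", "Perplexity", "AI Overviews", "Gemini", "Claude"]
QUERIES = ["placeholder query 1", "placeholder query 2"]  # your top 5 targets

# One row per engine x query; record "us", "competitor", or "nobody"
# in the "cited" column as you run each check.
with open("geo_audit.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["engine", "query", "cited"])
    for engine in ENGINES:
        for query in QUERIES:
            writer.writerow([engine, query, ""])
```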
What comes next
Lesson 2.5 covers query fan-out — the hidden mechanism behind every AI engine's retrieval pipeline. Once you understand that AI doesn't search the exact phrase a user types, but instead decomposes the question into 3-7 sub-queries, your content strategy changes fundamentally.