A few years ago, SEO advice was simple. Pick a target keyword. Build a page optimized for that keyword. Win the ranking. Repeat. Most SEO content is still written that way. It mostly still works for Google's blue links. It mostly does not work for AI citation.
When a user asks ChatGPT "what's the best CRM for a small B2B SaaS team?", it does not search the web for that exact phrase. It silently breaks the question into multiple sub-queries and searches each one separately. The pages that get cited are the ones that appear most consistently across the highest-relevance sub-queries. This is called query fan-out, and it's the single biggest reason traditional SEO tactics underperform for AI citation.
What fan-out actually looks like
For that CRM example, ChatGPT might fan out the question into something like:
- "best CRM small business 2026"
- "CRM B2B SaaS comparison"
- "affordable CRM team under 50 employees"
- "HubSpot vs Pipedrive vs Salesforce small business"
- "CRM features B2B SaaS sales pipeline"
Then it retrieves the top results for each sub-query, synthesizes across them, and produces an answer. The pages it cites are the ones that showed up across the most of those sub-queries, weighted by how relevant each sub-query is to the original question.
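To make that selection logic concrete, here's a toy Python sketch of it: count how often each page appears across sub-query result lists, weighted by sub-query relevance. The sub-queries, URLs, and weights are invented for illustration; no engine publishes its actual retrieval scoring, so treat this as a mental model, not an implementation.

```python
from collections import defaultdict

# Hypothetical retrieval results: sub-query -> (relevance weight, ranked page URLs).
# All data here is illustrative, not real engine output.
fanout_results = {
    "best CRM small business 2026":           (1.0, ["hubspot.com/crm", "zapier.com/best-crm", "example.com/crm-guide"]),
    "CRM B2B SaaS comparison":                (0.9, ["example.com/crm-guide", "g2.com/crm", "zapier.com/best-crm"]),
    "affordable CRM team under 50 employees": (0.7, ["example.com/crm-guide", "pipedrive.com/pricing"]),
}

def citation_candidates(results):
    """Score each page by how consistently it appears across weighted sub-queries."""
    scores = defaultdict(float)
    for weight, pages in results.values():
        for page in pages:
            scores[page] += weight
    # Highest score = appeared across the most (and most relevant) sub-queries.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for page, score in citation_candidates(fanout_results):
    print(f"{score:.1f}  {page}")
```

The comprehensive page ("example.com/crm-guide" here) wins not by topping any single sub-query but by appearing in all three.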
Bing exposes these sub-queries explicitly in its AI Performance dashboard, where they're called grounding queries. ChatGPT, Perplexity, and Google AI Overviews all do the same fan-out internally but don't show their work. Bing is currently the only place you can see real fan-out behavior for your own domain.
Why this changes content strategy
Traditional SEO says: pick one target keyword, optimize one page for it, win that ranking. GEO turns this inside out.
Because the AI engine is searching for 3-7 sub-queries per user question, your page only needs to win one or two of them to get cited. The pages that get cited most are the ones that show up across multiple sub-queries simultaneously. A comprehensive page that covers the topic from five angles will be cited more than five separate thin pages each covering one angle.
The thin-page-per-keyword model that worked in 2015 SEO actively underperforms in 2026 GEO. Search Console might show that thin-page strategy ranking for ten different keywords. But those thin pages rarely get cited by AI, because each one matches only one sub-query in a typical fan-out.
The Princeton research that quantified this
Princeton's GEO research (KDD 2024) tested specific content interventions against AI citation rates across multiple engines. The headline findings every operator should know:
- Adding statistics: +22% citation lift
- Adding direct quotations: +37% citation lift
- Adding inline source citations: +30-40% citation lift
- Question-formatted H2 headings with 120-180 word answer blocks: ~40% citation lift
- Fluency optimization (clear prose, varied sentence length): +15-20%
These compound. A page that does all five typically sees 2-3x citation rate increases versus the unoptimized baseline. Notice the pattern: every intervention helps the page match more sub-queries during fan-out retrieval. Statistics surface for fact-checking sub-queries. Direct quotes surface for "what did X say" sub-queries. Inline citations make your page itself look like a trustworthy source, which moves it up in every sub-query's ranking. Question H2s match the conversational sub-query phrasings AI engines generate.
Writing for fan-out
Once you internalize fan-out, content writing changes in specific ways.
Cover the topic from multiple angles on one page. A "best CRM for small B2B SaaS" page should cover features, pricing, integration, team size considerations, comparison against alternatives, and migration. Each angle is a sub-query the AI might fan out to.
Use varied phrasings of the core concept. Sub-queries use different language than user queries. "CRM" appears in some, "customer relationship management software" in others, "sales pipeline tool" in yet others. Work them all into your page, not as keyword stuffing but as the natural variation a knowledgeable writer would use.
Add related-question sections explicitly. A "people also ask" style section toward the bottom of pillar pages catches sub-queries you'd otherwise miss. Use H3 questions with 80-120 word answer blocks below each.
Build topic clusters, not isolated pages. A 12-page cluster on CRM that links internally creates a topical authority signal that helps every page in the cluster get retrieved across more sub-queries. The internal link graph signals "this domain is the place to ask about CRMs" to the AI engine's retrieval layer.
The 90-minute fan-out content audit
A practical audit you can run for any pillar topic:
- Pick one pillar topic: the one that, if you owned the AI citations for it, would meaningfully change your business.
- List 15-25 likely sub-queries a user question would fan out to. Write them as you imagine they'd appear in Bing's grounding query report.
- For each sub-query, search it in ChatGPT, Perplexity, and Google AI Overviews. Note which pages get cited. Note whether your brand appears at all.
- Build a coverage matrix. Sub-queries where you appear and get cited: green. Sub-queries where you appear but don't get cited: yellow. Sub-queries where you don't appear at all: red.
- Address the red rows first. Either fold the missing angles into existing pages, or build new ones.
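The coverage matrix from the steps above is easy to keep in a spreadsheet, but here's a minimal Python sketch of the classification logic so the green/yellow/red definitions are unambiguous. The audit entries are hypothetical.

```python
# Hypothetical audit notes: sub-query -> (appeared_in_results, was_cited).
audit = {
    "best CRM small business 2026": (True, True),
    "CRM B2B SaaS comparison": (True, False),
    "HubSpot vs Pipedrive vs Salesforce small business": (False, False),
}

def coverage_matrix(audit):
    """Green = cited, yellow = retrieved but not cited, red = absent entirely."""
    matrix = {}
    for query, (appeared, cited) in audit.items():
        matrix[query] = "green" if cited else ("yellow" if appeared else "red")
    return matrix

for query, status in coverage_matrix(audit).items():
    print(f"{status:6}  {query}")
```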
The yellow rows are usually the highest-leverage. They mean the AI engine considers your page relevant enough to retrieve, just not authoritative enough to cite. The usual culprits: thin content (expand it), missing schema (add FAQPage and Article markup), a stale dateModified (refresh the page), or weak inline source citations (add primary research).
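For the missing-schema fix, FAQPage markup is a JSON-LD block in your page's head. Here's a sketch that builds valid schema.org FAQPage markup with Python's json module; the question and answer text are placeholders you'd replace with your page's actual H3 question blocks.

```python
import json

# Placeholder Q&A content; in practice this should mirror the visible
# question-and-answer blocks already on the page.
faqs = [
    ("What is query fan-out?",
     "Query fan-out is when an AI engine splits one user question into multiple "
     "sub-queries and retrieves results for each separately."),
]

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# Emit the body of a <script type="application/ld+json"> tag.
print(json.dumps(faq_schema, indent=2))
```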
Reading Bing's grounding queries strategically
Once you've set up Bing Webmaster Tools (see our setup guide), the grounding query report becomes your strategic compass. Look for three patterns:
Sub-queries you appear for but didn't write content targeting. This is the gold. You're getting cited incidentally for a topic you never wrote about. Build a dedicated page and you can often own that sub-query within 60 days.
Sub-queries where you appear but get cited rarely. Retrieval without citation. Refresh, add structure, add primary sources.
Sub-queries where you never appear. Pure content gaps. Either your topical coverage doesn't extend that direction, or the AI engine doesn't yet associate your domain with that sub-topic. Build new pages with internal linking back to your existing pillars.
The bottom line
If you've been running a one-page-per-keyword content strategy and wondering why AI citations aren't following from your Google rankings, fan-out is your answer. AI engines don't reward narrow keyword targeting. They reward topical authority concentrated on comprehensive pages that match across the sub-queries users actually mean when they ask their questions.
The fix is straightforward but takes work: identify your pillar topics, audit which sub-queries you cover, build pages that match many sub-queries simultaneously, monitor grounding queries weekly, and iterate. Reffed Academy Quickstart covers fan-out, grounding query analysis, engine-specific tactics, and 25 other lessons. $147 founding price, lifetime access.