← FOUNDATIONS  ·  LESSON 3 OF 5 ≈10 MIN
LESSON 3

The 8 citation prompts every business should win.

Open ChatGPT, Claude, and Perplexity in three browser tabs. Run these 8 prompts. The next 20 minutes will tell you exactly where your brand stands in AI search — and what to fix first.

Replace the brackets with your real information before running each prompt. Use your actual brand name, your real top competitor, and the real category you operate in.

Run each prompt on at least 3 engines: ChatGPT, Perplexity, and either Claude or Google AI Overviews. Different engines will give different answers. That is the point.
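If you prefer to fill in the bracketed templates programmatically before pasting them into each engine, the substitution is a one-liner. A minimal sketch — the category and region values are illustrative placeholders, not recommendations:

```python
# Fill one of the lesson's bracketed prompt templates with real details.
TEMPLATE = ("Who are the top 5 {category} companies in {region}? "
            "Rank them and explain why each one is on the list.")

# Replace these with your actual category and region/segment.
details = {"category": "GEO tools", "region": "North America"}

prompt = TEMPLATE.format(**details)
print(prompt)
```

Keeping the templates in one place also makes it easy to rerun the same wording next month, so score changes reflect the engines, not your phrasing.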

Prompt 1: Brand identity

What does [your brand name] do? Where are they located, who founded them, and what are they known for?

What it tests: Does the AI know your entity at all? Does it represent you accurately?

How to score: Green if everything is correct. Yellow if it knows you but gets details wrong (year, location, product focus). Red if it does not know you or confuses you with another company.

If you score yellow on this one, your entity signals are noisy — Wikipedia, LinkedIn, Crunchbase, and your homepage probably do not all agree. That is the cheapest GEO fix available. Make them agree.

Prompt 2: Category leader

Who are the top 5 [your category] companies in [your region/segment]? Rank them and explain why each one is on the list.

What it tests: Are you part of the AI's mental model of your category? This is the single most important prompt for sales-driven businesses, because this is the prompt your prospective buyers run.

How to score: Green if you appear in the top 3. Yellow if you appear in 4 or 5. Red if you do not appear at all, or appear with incorrect attributes.

If your competitors appear and you do not, the gap is rarely about product quality — it is about citation infrastructure. They have third-party mentions, review-site presence, and editorial coverage. You do not.

Prompt 3: Comparison against top competitor

Compare [your brand] vs [your top competitor]. Which one is better for [your ideal customer type], and why?

What it tests: Does the AI represent your differentiation accurately? Does it favor you, your competitor, or treat both fairly?

How to score: Green if the AI recommends you for your true ideal customer with accurate reasons. Yellow if the comparison is fair but neutral. Red if the AI recommends your competitor with reasons that are wrong, outdated, or unsubstantiated.

Red here usually means your competitor has more "X vs Y" content in third-party comparison articles, while you do not. The fix is producing your own comparison content with structured tables — Princeton's research found tables with explicit dimensions get a 35% citation lift.

Prompt 4: Problem-first "who solves X"

I am a [your ideal customer description] struggling with [the core problem you solve]. Who should I talk to? Give me 3-5 specific companies.

What it tests: Are you positioned in the problem space, not just the category? Buyers increasingly start with the problem ("I cannot get my Shopify store cited by ChatGPT"), not the category ("I need a GEO tool").

How to score: Green if you appear at all; red if you do not. Most businesses score red here because their content describes what they are instead of what problem they solve.

Prompt 5: Alternatives

What are the best alternatives to [your top competitor]?

What it tests: This is the laziest, highest-intent prompt buyers run when they are already shopping. If your top competitor has any market awareness, prospects are running this exact prompt — and you want to be in the answer.

How to score: Green if you appear in the top 3 alternatives. Yellow if you appear lower on the list. Red if you are absent. Most businesses lose this prompt because they have not pursued "alternatives to [competitor]" article inclusions on review aggregators (G2, Capterra, Clutch) or editorial roundups.

Prompt 6: Reviews and reputation

What do users say about [your brand name]? What are the common complaints and praise?

What it tests: Does the AI have a multi-source view of your reputation? This prompt is often where AI engines pull from Reddit threads, G2 reviews, Trustpilot, and forum discussions you may not have noticed.

How to score: Green if the AI has a nuanced, accurate take with named sources. Yellow if it has a generic summary. Red if it says "I do not have specific user feedback on this brand."

Red here is a Reddit / G2 / review-aggregator absence problem. The fix is not getting more reviews on your own site — it is getting reviewed on third-party platforms AI engines actually crawl.

Prompt 7: Buying decision

I am about to sign up for [your product/service category]. What should I ask before committing, and which providers handle those concerns best?

What it tests: Do you show up at the moment of purchase intent? This is the highest-converting prompt in the set — anyone running it is days or hours away from buying.

How to score: Green if you are named as the answer to one or more of the buying concerns the AI raises. Yellow if you are listed as an option without endorsement. Red if you are absent.

Prompt 8: Context-specific recommendation

Recommend the best [your category] for a [specific context: industry, company size, region, use case]. Be specific about why.

What it tests: Niche dominance. Even if you cannot win the broad category prompt, you may win the specific context. A scrappy Shopify-only GEO tool will not beat the broad "best GEO tools" prompt, but it can dominate "best GEO tool for Shopify Plus stores."

How to score: Run this with three different contexts that match your actual best-fit customers. Green if you win all three. Yellow if you win one or two. Red if you win zero.

// SHORTCUT: AUTOMATE THIS

Running 8 prompts across 3 engines, every month, is exactly what Reffed automates. Reffed Watch ($29/month) runs your prompts against ChatGPT, Claude, Perplexity, Google AI Overviews, Gemini, and Copilot weekly and reports mention rate, share of voice, and competitor comparison.

Run a free audit now → See Watch pricing

Scoring the audit

After running all 8 prompts on 3 engines, tally your colors. Out of 24 total cells (8 prompts × 3 engines):

  • 18+ greens. You are an AI-search winner in your category. The job is to defend.
  • 10-17 greens. Above average. You have entity signal but inconsistent citation. The job is to plug specific gaps.
  • 3-9 greens. Most businesses are here. The job is the full GEO playbook — entity cleanup, third-party mentions, content restructuring.
  • 0-2 greens. You are invisible. The job is foundational — Wikipedia, LinkedIn, Crunchbase, Google Business Profile, then category-list inclusions, then content.
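If you record each of the 24 cells as a color string, the tier lookup above reduces to a few lines. A minimal sketch — the tier labels paraphrase the list above, and the example scores are made up for illustration:

```python
# Tally an AI-search audit: 8 prompts x 3 engines = 24 cells,
# each scored "green", "yellow", or "red".

def audit_tier(cells):
    """Map the 24 cell colors to the lesson's four tiers by green count."""
    greens = sum(1 for c in cells if c == "green")
    if greens >= 18:
        return "winner - defend"
    if greens >= 10:
        return "above average - plug specific gaps"
    if greens >= 3:
        return "full GEO playbook"
    return "invisible - start with foundations"

# Example: a typical mid-pack result (illustrative values only).
cells = ["green"] * 7 + ["yellow"] * 9 + ["red"] * 8
print(audit_tier(cells))  # full GEO playbook
```

Counting only greens keeps the tiers simple; yellows still matter, but as a to-fix list rather than as part of the score.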

What you can do this week

Block 30 minutes. Run the 8 prompts on ChatGPT, Perplexity, and one of Claude or Google AI Overviews. Write down the result for each cell. Score yourself.

Lesson 4 walks through how to read a Reffed audit report so you can interpret the same signals at scale — and Lesson 5 turns the scores into a 30-day improvement plan with specific weekly actions.

UP NEXT · LESSON 4
Reading a Reffed audit — what each number means
Continue →

← Lesson 2  ·  Back to course