SaaS companies are uniquely exposed to AI search disruption. When someone asks ChatGPT "what's the best tool for X," the AI assistant returns one or two product recommendations — not ten blue links. If your product isn't one of those recommendations, you've lost the customer before they ever visit your site. For a SaaS business, that's the entire funnel.

To see what good (and bad) AI search visibility actually looks like, we ran our audit on five well-known SaaS companies. None of them scored 100. The highest score was 82; the lowest was 60. The patterns we found explain why even great SaaS marketing teams are still under-optimized for AI citation.

The scoreboard

All five scores reflect real audits run in May 2026:

  • github.com — 60/100. Massive content depth, weak structure.
  • stripe.com — 67/100. World-class docs, missing FAQ schema on marketing pages.
  • anthropic.com — 67/100. Same problem — the AI company itself.
  • linear.app — 68/100. Clean design, minimal structured data.
  • tally.so — 82/100. Single-page app done right.

Each score reflects what AI engines actually see when they crawl these sites — not the polish a human visitor sees. Let's break down what's specifically wrong with each one.

GitHub (60/100): the world's most underused content depth

GitHub's homepage and product pages contain a remarkable amount of content. Their problem isn't volume — it's that the content is written in scroll-past human-skim style, not in extractable answer capsules. AI engines look for clear, self-contained passages they can cite. Long marketing prose with mid-sentence pivots doesn't extract cleanly.

What GitHub is missing: structured Q&A blocks (FAQPage schema). Their content answers the questions buyers ask, but it does so in long paragraphs rather than in question-answer pairs that AI engines can pull as discrete citations.

What SaaS founders can steal: if your product page reads like marketing copy, add a "Frequently asked questions" section at the bottom with 6–10 real questions and answers. Format the answers as 250–400 character paragraphs. Wrap the whole section in FAQPage JSON-LD schema. This single change moves citation share more than anything else for content-rich sites.
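As a minimal sketch, the FAQ pattern described above looks like this in JSON-LD. The product name, questions, and answers are placeholders, not GitHub's actual markup:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is Acme used for?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Acme is a project tracking tool for small engineering teams. It replaces spreadsheet-based planning with issue boards, sprint views, and automatic status reports. A self-contained answer like this, in the 250-400 character range, extracts cleanly as a citation."
      }
    },
    {
      "@type": "Question",
      "name": "Who is Acme for?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Acme is built for engineering teams of 2 to 50 people who have outgrown spreadsheets but find enterprise project suites too heavy."
      }
    }
  ]
}
</script>
```

The visible FAQ section on the page should carry the same question-answer pairs in plain HTML; the JSON-LD mirrors the on-page content rather than replacing it.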

Stripe (67/100): great docs, weak marketing schema

Stripe's documentation is the gold standard in developer tools — comprehensive, current, structured. But their marketing pages (the ones that drive trial signups) lack the same structural rigor. Specifically, the homepage and product overview pages have minimal schema markup. AI engines see beautiful prose but no structured signals about what Stripe sells, who it's for, or how it compares to alternatives.

What Stripe is missing: FAQPage schema, Product schema with specific feature lists, and Organization schema with verified entity links to Crunchbase and LinkedIn.

What SaaS founders can steal: your marketing pages need just as much schema discipline as your documentation. Add SoftwareApplication schema to your product page. Add Service schema to each feature page. Connect your Organization schema to your LinkedIn company page and Crunchbase profile via the sameAs property. These linkages help AI engines verify you're a real company, not a placeholder site.
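A minimal sketch of the Organization-with-sameAs pattern, assuming placeholder names and profile URLs rather than Stripe's real ones:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Acme, Inc.",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/acme-example",
    "https://www.crunchbase.com/organization/acme-example"
  ]
}
</script>
```

Your SoftwareApplication schema gets the same treatment on the product page, with properties like `applicationCategory` and `offers` describing what you sell and at what price.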

Anthropic (67/100): the AI company that isn't optimized for AI

This is the one that surprised us most. Anthropic — the company building Claude — has the same gaps as a typical mid-stage SaaS. Their homepage prose is thoughtful but light on extractable factual claims. Their schema coverage is basic. The site reads as if it's optimized for human visitors who already know what Anthropic is, not for AI assistants that need to learn.

What Anthropic is missing: the same FAQ schema pattern. Their site assumes prior context. AI assistants rarely have prior context — they need pages that answer "what is X, who is it for, how does it work" in clean, citable form.

What SaaS founders can steal: never assume your AI audience knows you. Even if your real visitors are deeply familiar with your product, AI engines indexing your site are starting from zero. Write at least one page on your site that explains your product from absolute zero, in clear question-answer format. That page becomes the AI's preferred citation source for "what is X" queries.

Linear (68/100): brand authority is doing the heavy lifting

Linear has weak technical signals but strong brand recognition. Their site relies almost entirely on authority — backlinks, brand mentions, community discussion — to get cited by AI assistants. It works for Linear because they have the authority. It doesn't work for smaller SaaS startups, who need technical readiness to compensate for limited brand awareness.

What Linear is missing: almost everything structural. Minimal schema, JS-rendered marketing site, no obvious FAQ blocks. They get cited anyway because ChatGPT, Claude, and Perplexity all "know" Linear from training data and brand signals across the web.

What SaaS founders can steal: don't try to be Linear. If you're a sub-$10M ARR SaaS, you don't have Linear's brand authority. Compensate by being more technically rigorous than they are. Strong schema, comprehensive content, FAQ structures — all of these matter much more for you than for them. Linear can afford to score 68. You probably can't.

Tally (82/100): single-page apps done right

Tally is the outlier — and the most informative case for SaaS founders. They're built as a single-page application, which usually destroys AI search visibility because JS-rendered content isn't readable by most AI crawlers. Tally avoided this by aggressively pre-rendering their marketing pages and embedding rich schema directly in the HTML.

What Tally got right: server-side rendering for marketing pages, clear Q&A content, FAQPage schema, Organization schema with verified entity links, and clean robots.txt that explicitly allows all AI crawlers.
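For reference, a robots.txt that explicitly allows the major AI crawlers could look like the sketch below. The user-agent tokens shown are the published crawler names from OpenAI, Anthropic, and Perplexity at the time of writing; verify current names against each vendor's documentation before shipping:

```text
# Explicitly allow major AI crawlers (verify token names against vendor docs)
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Everyone else
User-agent: *
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```

An explicit Allow for each named crawler removes ambiguity: some AI crawlers treat a missing entry conservatively, and a blanket `User-agent: *` rule alone doesn't signal that you've considered them.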

What SaaS founders can steal: if you have a JS-heavy product, you don't need to migrate the whole thing. You just need to make your marketing pages server-rendered. Most modern frameworks (Next.js, Nuxt, SvelteKit) support hybrid rendering — your app pages can stay client-rendered while your marketing pages render server-side. This is the single highest-leverage refactor for SaaS sites stuck in the 50s and 60s.

The pattern across all five

Three issues showed up on four of the five sites:

  1. Missing FAQPage schema on marketing pages. All five had FAQ-like sections on their sites, but only Tally wrapped them in proper JSON-LD schema. This is the single most reproducible win.
  2. No Organization schema with verified sameAs links. Adding links to your LinkedIn, Crunchbase, and Wikipedia entries (when they exist) verifies your identity for AI engines.
  3. Light content depth on commercial pages. Homepage prose alone rarely gives engines enough to cite from. A 1500–2500 word product page with multiple Q&A blocks routinely outperforms a beautifully designed 400-word page.

The 90-day playbook for SaaS founders

If you run a SaaS and want to improve your AI search citation share, here's the prioritized order:

  1. Week 1: audit your site. Run the free Reffed audit on your homepage and your top 3 product pages. Note the priority actions.
  2. Week 2: fix robots.txt and add baseline Organization + WebSite schema. These take 1–2 hours total.
  3. Week 3–4: add FAQPage schema to your homepage and product pages. Write 6–10 real Q&A pairs for each. This is the biggest single lever.
  4. Week 5–8: expand your product pages to 1500+ words with substantive content, not marketing fluff. Add a "How it works" section, a comparison table, and a "Who is this for" section.
  5. Week 9–12: verify your Bing indexation. Register with Bing Webmaster Tools. Submit your sitemap. Request indexing on your top pages.
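The week-2 baseline from the list above, Organization plus WebSite schema, can be sketched in a single JSON-LD block. All names and URLs here are placeholders for your own:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://www.example.com/#org",
      "name": "Acme, Inc.",
      "url": "https://www.example.com",
      "sameAs": ["https://www.linkedin.com/company/acme-example"]
    },
    {
      "@type": "WebSite",
      "@id": "https://www.example.com/#website",
      "name": "Acme",
      "url": "https://www.example.com",
      "publisher": { "@id": "https://www.example.com/#org" }
    }
  ]
}
</script>
```

Using `@id` references inside an `@graph` ties the WebSite to its publishing Organization, so AI engines see one connected entity instead of two unrelated nodes.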

Most SaaS sites that work through this list see their Reffed score move from the 50s into the 80s within 90 days. Citation share improvements typically show up 30–60 days after technical changes, because AI engines re-index on their own schedule.

The honest reality about authority

Even with perfect technical readiness, smaller SaaS startups will lose to Linear and Stripe on brand-led queries. ChatGPT "knows" the big names from training data. That gap is not closable in 90 days.

What you can close: the long-tail queries. "Best lightweight project management tool for solo founders" or "what's the cheapest customer support platform under $50/month" — these are queries where authority matters less and technical readiness matters more. Aim for those first. Win the long-tail. Brand authority follows.

Run our free audit on your own SaaS site to see exactly where you sit relative to these five. Most SaaS founders are surprised to find they score lower than they expected — and lower than competitors who look less polished but have optimized harder.

Once you have your score, our diagnostic checklist for why ChatGPT isn't surfacing your site walks through the most common technical fixes in order of impact. For the deeper architectural questions — schema markup, FAQPage structures, and which JSON-LD types to add first — see our schema markup guide. And for the case study that originally prompted this post, the full SaaS audit breakdown has the raw findings for each company.