Visibility #8: Only 15% of Pages ChatGPT Retrieves Actually Get Cited


The Visibility Report #8

Week of March 17-21, 2026

ChatGPT retrieves a lot of pages. It cites very few of them. A new study put a number on it: only 15% of pages retrieved make the final cut. That changes the optimization question entirely. Also this week: SparkToro can now show you what your audience is actually asking AI tools, WebMCP wants browsers to become callable tools for AI agents, and Semrush tracked 89,000 LinkedIn URLs to figure out what drives AI visibility there.

🔥 Only 15% of Pages ChatGPT Retrieves Get Cited

Getting into ChatGPT's retrieval pool isn't enough. New research shows ChatGPT retrieves far more pages than it actually cites -- only 15% of retrieved pages make it into the final answer. There's a two-stage process happening: first retrieval, then selection. Most SEO advice focuses on the first stage. The second stage is what actually drives visibility.

Why it matters: If you're optimizing purely for "appearing in AI results," you're solving the wrong problem. You need to be the page ChatGPT picks after it's already considered a dozen options. That means content structure, answer completeness, and source authority -- not just topical relevance.
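The two-stage funnel is straightforward to measure if you log which of your URLs a model retrieves versus which it actually cites. A minimal sketch in Python (the URL sets are invented for illustration; in practice you'd collect them from your own prompt monitoring):

```python
# Sketch: measuring the retrieval-to-citation gap for your own pages.
# The URL sets below are made up; real data would come from logged AI
# responses (sources the model retrieved vs. sources it cited).

retrieved = {
    "example.com/pricing-guide",
    "example.com/faq",
    "example.com/mortgage-calculator",
    "example.com/blog/rates-2026",
}
cited = {"example.com/mortgage-calculator"}

# Stage-two survival rate: of the pages considered, how many made the answer?
citation_rate = len(cited & retrieved) / len(retrieved)
print(f"Citation rate: {citation_rate:.0%}")

# Pages stuck at stage one: retrieved but never selected. These are the
# candidates for structure, completeness, and authority work.
never_cited = sorted(retrieved - cited)
print("Retrieved but not cited:", never_cited)
```

Tracked over time, the second list tells you where selection-stage optimization effort should go.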

🔗 Full study details at Search Engine Land

📊 AI Visibility Research

SparkToro Now Shows What Your Audience Asks AI Tools

SparkToro added AI prompt topic tracking -- so now you can see not just which AI tools your audience uses, but what they're actually asking. Previously you got the "where." Now you get the "what." Research any audience and you get their most common prompt patterns alongside search behavior. This is a different kind of keyword research: you're seeing intent at the query level, not just the keyword fragment.

🔗 SparkToro announcement

Semrush Analyzed 89K LinkedIn URLs Cited in AI Search

Semrush looked at which LinkedIn content types show up in AI search citations. The finding: professional and educational content dramatically outperforms promotional content. Company updates and sales pitches get ignored. Thought leadership, case studies, and data-backed posts earn citations. This mirrors what we see everywhere in AI search -- authoritative framing over promotional framing.

🔗 Semrush's full analysis

ChatGPT's Default & Premium Models Cite Almost Entirely Different Sources

An analysis of ChatGPT conversations found that the default and premium model versions cite almost entirely different sources for the same queries. If you're testing your AI visibility on one model version, you may see completely different results on another. That makes monitoring across model versions necessary, not optional.

🔗 Search Engine Journal breakdown

Google AI Mode Keeps More Links Inside Google

New data from the SEO Pulse report shows AI Mode is increasingly keeping users inside Google's ecosystem rather than sending them to third-party sites. Ask Maps, branded queries in Search Console, and AI Mode link behavior all point in the same direction: Google is absorbing more of the research journey. Watch the "external click rate" metric -- it's becoming more important than rankings.

🔗 SEJ full report

Branded Queries Filter Now Live in Search Console

Google rolled out branded query filtering in Search Console for all eligible sites. You can now separate branded from non-branded performance directly in GSC rather than building workarounds. This matters for AI search visibility because branded search is increasingly how you measure whether AI-generated awareness is driving intent. If people see you in AI Mode but then search your name, that'll show up here now.

🔗 Search Engine Land

🚀 Platform Updates

WebMCP: Webpages as Callable Tools for AI Agents

WebMCP is a proposed browser standard that lets webpages declare themselves as callable tools for AI agents. A page could announce "I can check mortgage rates" or "I can calculate shipping costs" -- and AI agents could invoke those capabilities directly. Think of it as structured data for AI interactions rather than for search engines. If it gets adopted, optimization shifts from earning citations to enabling direct AI tool invocation. Proposed web standards have a long history of fizzling -- but this one has unusually clear business-model implications if it lands.
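WebMCP's final syntax is still undecided, so treat the following as illustrative only. Existing MCP tool definitions declare a name, a description, and a JSON-Schema input; a page-declared "check mortgage rates" capability might look roughly like this shape (everything here, including the tool name and parameters, is hypothetical):

```python
# Sketch: the rough shape of a page-declared tool. WebMCP's actual
# syntax is not finalized; this borrows the structure of existing MCP
# tool definitions (name, description, inputSchema). All names and
# fields below are invented for illustration.

mortgage_rate_tool = {
    "name": "check_mortgage_rates",
    "description": "Return current mortgage rates for a given loan term.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "term_years": {"type": "integer", "enum": [15, 20, 30]},
        },
        "required": ["term_years"],
    },
}

# An agent that discovers this declaration knows exactly what the page
# can do and what arguments the capability takes -- structured data for
# actions rather than for search snippets.
print(mortgage_rate_tool["name"])
```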

🔗 Semrush's WebMCP breakdown

Google Leaves Door Open to Ads in Gemini

A senior Google executive said the company is "not ruling out" ads in its Gemini AI app -- a notable reversal from previous denials. The business model question for AI search has always been how it gets monetized. If Gemini gets ads, that reshapes the organic visibility question entirely: you're competing for placement in a system that now has paid priority. Watch the Q2 earnings call for more signals.

🔗 Search Engine Land

Google Maps Launches Conversational AI Search

Ask Maps is live in the U.S. and India -- a Gemini-powered conversational feature for local discovery. You can now ask Maps questions like "good coffee shops for working that have meeting rooms" and get AI-curated responses. For local businesses, this adds another AI surface where citations matter. Local structured data and entity completeness are going to matter more.

🔗 SEJ coverage

💡 Strategy & Frameworks

Google's Liz Reid: Search Is Becoming a Conversation

Marie Haynes pulled the key insights from an interview with Google Search head Liz Reid. The most telling line: Google is building toward "search as a conversation" rather than discrete queries. AI agents will handle more web activity over time, and traditional ranking factors are evolving as intent understanding gets more sophisticated. The framing is less "we're replacing search" and more "we're making search continuous." The implications for structuring content around multi-turn queries are real.

🔗 Marie Haynes' full breakdown

iPullRank: We Need New AI Search Metrics

iPullRank argues that traditional SEO metrics don't work in the AI search era -- rankings don't equal revenue when AI overviews answer queries without clicks. They're calling for measurement that tracks citation rates, answer coverage, and brand mention context within AI responses. They propose specific metrics including "citation velocity" and "answer completeness scores." The critique is correct. The measurement gap is the thing holding back serious investment in GEO.
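The article excerpted here doesn't give formulas for these metrics, so here is one plausible reading of "citation velocity": the average week-over-week change in how often a brand is cited across a fixed prompt set. A sketch (the counts are invented; iPullRank's actual definition may differ):

```python
# Sketch: one possible interpretation of "citation velocity" --
# average week-over-week change in citations earned across repeated
# runs of the same prompt set. Numbers are made up for illustration.

weekly_citations = [4, 6, 9, 14]  # citations earned in weeks 1..4

def citation_velocity(counts):
    """Average week-over-week change in citation count."""
    deltas = [later - earlier for earlier, later in zip(counts, counts[1:])]
    return sum(deltas) / len(deltas)

# An average gain of ~3.3 citations per week: visibility is accelerating.
print(citation_velocity(weekly_citations))
```

Whatever the exact formula ends up being, the point stands: a trend metric over repeated prompt runs says more than any single snapshot.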

🔗 iPullRank's full framework

Surface-Level SEO Won't Build Lasting AI Visibility

Search Engine Land's piece makes the case that the tactics dominating GEO advice right now -- FAQ pages, schema markup, keyword density tricks -- are the surface layer. Lasting AI search visibility comes from knowledge graphs, expert entity signals, and influencing how LLMs have been trained to see your brand. This is a longer game, and the people winning it are building entity authority, not hacking format requirements.

🔗 Search Engine Land

🔮 What to Watch

  1. The retrieval-vs-citation gap. Only 15% of retrieved pages get cited -- and nobody has cracked what the selection algorithm is optimizing for. Expect a wave of research here. Whoever figures out the citation selection signals first has a significant competitive edge.
  2. WebMCP adoption signals. This is early-stage but the implications are significant. Watch for browser vendors (Safari, Chrome, Firefox) responding. If one major browser endorses it, the optimization category shifts fundamentally.
  3. Gemini monetization. Google opening the door to Gemini ads is the first real signal that the AI search business model is moving toward paid placement. If that materializes, "organic AI visibility" has a clock on it -- same as organic search did in 2012.

The Visibility Report #8 | Will Scott
This newsletter is produced collaboratively by Will Scott and Bob, an AI agent. Human oversight, AI efficiency.
