Visibility #8: Only 15% of Pages ChatGPT Retrieves Actually Get Cited


The Visibility Report #8

Week of March 17-21, 2026

ChatGPT retrieves a lot of pages. It cites very few of them. A new study put a number on it: only 15% of pages retrieved make the final cut. That changes the optimization question entirely. Also this week: SparkToro can now show you what your audience is actually asking AI tools, WebMCP wants browsers to become callable tools for AI agents, and Semrush tracked 89,000 LinkedIn URLs to figure out what drives AI visibility there.

🔥 Only 15% of Pages ChatGPT Retrieves Get Cited

Getting into ChatGPT's retrieval pool isn't enough. New research shows ChatGPT retrieves far more pages than it actually cites -- only 15% of retrieved pages make it into the final answer. There's a two-stage process happening: first retrieval, then selection. Most SEO advice focuses on the first stage. The second stage is what actually drives visibility.

Why it matters: If you're optimizing purely for "appearing in AI results," you're solving the wrong problem. You need to be the page ChatGPT picks after it's already considered a dozen options. That means content structure, answer completeness, and source authority -- not just topical relevance.

🔗 Full study details at Search Engine Land
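The two-stage dynamic described above can be pictured as a retrieve-then-select pipeline. This is a minimal conceptual sketch, not ChatGPT's actual logic -- the scoring fields and weights are invented for illustration:

```python
# Hypothetical sketch of a two-stage retrieve-then-cite pipeline.
# Field names and weights are illustrative, not ChatGPT's real signals.
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    relevance: float      # stage 1: topical match to the query
    completeness: float   # stage 2: how fully it answers the question
    authority: float      # stage 2: source trust signals

def retrieve(pages, k=20):
    """Stage 1: pull the top-k topically relevant pages."""
    return sorted(pages, key=lambda p: p.relevance, reverse=True)[:k]

def select_citations(candidates, n=3):
    """Stage 2: cite only the few pages that best answer the query."""
    score = lambda p: 0.5 * p.completeness + 0.5 * p.authority
    return sorted(candidates, key=score, reverse=True)[:n]

pool = [
    Page(f"site{i}.example", relevance=i / 20,
         completeness=(i % 5) / 4, authority=(i % 3) / 2)
    for i in range(20)
]
cited = select_citations(retrieve(pool))
print(f"{len(cited)} of {len(pool)} retrieved pages cited")
```

The point of the sketch: a page can win stage 1 on relevance alone and still lose stage 2, because selection weighs completeness and authority that relevance-focused SEO never touches.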

📊 AI Visibility Research

SparkToro Now Shows What Your Audience Asks AI Tools

SparkToro added AI prompt topic tracking -- so now you can see not just which AI tools your audience uses, but what they're actually asking. Previously you got the "where." Now you get the "what." Research any audience and you get their most common prompt patterns alongside search behavior. This is a different kind of keyword research: you're seeing intent at the query level, not just the keyword fragment.

🔗 SparkToro announcement

Semrush Analyzed 89K LinkedIn URLs Cited in AI Search

Semrush looked at which LinkedIn content types show up in AI search citations. The finding: professional and educational content dramatically outperforms promotional content. Company updates and sales pitches get ignored. Thought leadership, case studies, and data-backed posts earn citations. This mirrors what we see everywhere in AI search -- authoritative framing over promotional framing.

🔗 Semrush's full analysis

ChatGPT's Default & Premium Models Cite Almost Entirely Different Sources

An analysis of ChatGPT conversations found that the default and premium model versions cite almost entirely different sources for the same queries. If you're testing your AI visibility on one model version, you may be getting completely different results on another. That makes monitoring across model versions necessary, not optional.

🔗 Search Engine Journal breakdown

Google AI Mode Keeps More Links Inside Google

New data from the SEO Pulse report shows AI Mode is increasingly keeping users inside Google's ecosystem rather than sending them to third-party sites. Ask Maps, branded queries in Search Console, and AI Mode link behaviors are all pointing in the same direction: Google is absorbing more of the research journey. Watch the "external click rate" metric -- it's becoming more important than rankings.

🔗 SEJ full report

Branded Queries Filter Now Live in Search Console

Google rolled out branded query filtering in Search Console for all eligible sites. You can now separate branded from non-branded performance directly in GSC rather than building workarounds. This matters for AI search visibility because branded search is increasingly how you measure whether AI-generated awareness is driving intent. If people see you in AI Mode but then search your name, that'll show up here now.

🔗 Search Engine Land

🚀 Platform Updates

WebMCP: Webpages as Callable Tools for AI Agents

WebMCP is a proposed browser standard that lets webpages declare themselves as callable tools for AI agents. A page could announce "I can check mortgage rates" or "I can calculate shipping costs" -- and AI agents could invoke those capabilities directly. Think of it as structured data for AI interactions rather than search engines. If it gets adopted, we shift from optimizing for citations to optimizing for direct AI tool invocation. We've been here before with every new web standard -- but this one has unusually clear business model implications if it lands.

🔗 Semrush's WebMCP breakdown

Google Leaves Door Open to Ads in Gemini

A senior Google executive said the company is "not ruling out" ads in its Gemini AI app -- a notable reversal from previous denials. The business model question for AI search has always been how it gets monetized. If Gemini gets ads, that reshapes the organic visibility question entirely: you're competing for placement in a system that now has paid priority. Watch the Q2 earnings call for more signals.

🔗 Search Engine Land

Google Maps Launches Conversational AI Search

Ask Maps is live in the U.S. and India -- a Gemini-powered conversational feature for local discovery. You can now ask Maps questions like "good coffee shops for working, with meeting rooms" and get AI-curated responses. For local businesses, this adds another AI surface where citations matter. Local structured data and entity completeness are going to matter more.

🔗 SEJ coverage

📝 From the Tool Blogs

💡 Strategy & Frameworks

Google's Liz Reid: Search Is Becoming a Conversation

Marie Haynes pulled the key insights from an interview with Google Search head Liz Reid. The most telling line: Google is building toward "search as a conversation" rather than discrete queries. AI agents will handle more web activity over time. Traditional ranking factors are evolving as intent understanding gets more sophisticated. The framing is less "we're replacing search" and more "we're making search continuous." The implications for how you structure content for multi-turn queries are real.

🔗 Marie Haynes' full breakdown

iPullRank: We Need New AI Search Metrics

iPullRank argues that traditional SEO metrics don't work in the AI search era -- rankings don't equal revenue when AI overviews answer queries without clicks. They're calling for measurement that tracks citation rates, answer coverage, and brand mention context within AI responses. They propose specific metrics including "citation velocity" and "answer completeness scores." The critique is correct. The measurement gap is the thing holding back serious investment in GEO.

🔗 iPullRank's full framework
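iPullRank names "citation velocity" and "answer completeness scores" without definitions reproduced here, so this sketch shows one plausible reading of two such metrics -- the formulas are assumptions, not iPullRank's actual framework:

```python
# Hypothetical AI-search metrics, sketched under assumed definitions.
# "Citation velocity" read as week-over-week change in citation count;
# "citation rate" as cited appearances over retrieved appearances.

def citation_velocity(weekly_citations):
    """Per-week change in citations across consecutive weeks."""
    return [b - a for a, b in zip(weekly_citations, weekly_citations[1:])]

def citation_rate(cited, retrieved):
    """Share of retrieved appearances that became actual citations."""
    return cited / retrieved if retrieved else 0.0

print(citation_velocity([4, 6, 9, 13]))  # accelerating citations
print(citation_rate(3, 20))              # e.g. 3 citations from 20 retrievals
```

Note how a citation rate of 0.15 mirrors the 15% retrieval-to-citation figure from this week's lead story -- a per-site version of exactly the gap that study measured.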

Surface-Level SEO Won't Build Lasting AI Visibility

Search Engine Land's piece makes the case that the tactics dominating GEO advice right now -- FAQ pages, schema markup, keyword density tricks -- are the surface layer. Lasting AI search visibility comes from knowledge graphs, expert entity signals, and influencing how LLMs have been trained to see your brand. This is a longer game, and the people winning it are building entity authority, not hacking format requirements.

🔗 Search Engine Land

🔮 What to Watch

  1. The retrieval-vs-citation gap. Only 15% of retrieved pages get cited -- and nobody has cracked what the selection algorithm is optimizing for. Expect a wave of research here. Whoever figures out the citation selection signals first has a significant competitive edge.
  2. WebMCP adoption signals. This is early-stage but the implications are significant. Watch for how the major browsers (Safari, Chrome, Firefox) respond. If one major browser endorses it, the optimization category shifts fundamentally.
  3. Gemini monetization. Google opening the door to Gemini ads is the first real signal that the AI search business model is moving toward paid placement. If that materializes, "organic AI visibility" has a clock on it -- same as organic search did in 2012.

The Visibility Report #8 | Will Scott
This newsletter is produced collaboratively by Will Scott and Bob, an AI agent. Human oversight, AI efficiency.
