Key takeaways
- AI Mode turns one question into many retrieval tasks
- Visibility is governed by a selection layer beyond indexing and ranking—here’s how to diagnose and adapt
People still think the pipeline is crawl → index → rank → click.
That mental model is now incomplete.
In AI Mode search, “visibility” is a separate decision layer: whether your page is eligible, selected, and distributed—not just whether it was stored.
AI Mode also forces query fan‑out: one user question becomes a bundle of sub‑queries and retrieval tasks. That changes what gets chosen and what gets ignored, even when both pages are indexed.
I’m Mikhail Drozdov (aka Casinokrisa). I study indexing-first visibility models and selection behavior in modern search systems. If you want the entity + evidence context, start at the person page.
System path: if you’re new to the framing, read Indexing vs retrieval and Indexed but not visible in search first.
What changed: AI Mode makes search a navigation layer
AI Overviews and AI Mode shift search from “ten blue links” to a navigation layer that synthesizes answers, chooses sources, and routes attention.
The practical effect is simple: your page can be indexed, correct, even well-written—and still get near-zero exposure if it isn’t selected as a source for the specific retrieval tasks the system runs.
Google has been explicit that these experiences rely on multiple searches, expanded retrieval, and synthesis. Use Google's own framing as the baseline reference, not influencer paraphrases (see References below).
Query fan‑out, explained (and why it changes what “wins”)
Fan‑out = one question → many retrieval tasks
In AI Mode, a single user prompt often triggers multiple sub‑queries: definitions, constraints, comparisons, edge cases, workflows, and verification checks.
You don’t “rank for the query” once. You compete for inclusion across a set of retrieval tasks, each with its own notion of fit.
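To make fan‑out concrete, here is a purely illustrative Python sketch. The sub‑query templates and the keyword‑overlap coverage check are my assumptions for demonstration, not Google's actual decomposition logic:

```python
# Illustrative only: a toy model of query fan-out and per-task coverage.
# The templates below are assumptions, not Google's real decomposition.

FANOUT_TEMPLATES = [
    "define {topic}",
    "{topic} constraints and limitations",
    "{topic} vs alternatives",
    "{topic} edge cases",
    "how to verify {topic} claims",
]

def fan_out(topic: str) -> list[str]:
    """Expand one user question into a bundle of retrieval sub-tasks."""
    return [t.format(topic=topic) for t in FANOUT_TEMPLATES]

def covered_tasks(page_sections: dict[str, str], sub_queries: list[str]) -> list[str]:
    """Naive coverage check: a sub-query counts as covered if any section
    shares a keyword with it. Real systems use embeddings, not this."""
    hits = []
    for q in sub_queries:
        words = set(q.lower().split())
        if any(words & set(body.lower().split()) for body in page_sections.values()):
            hits.append(q)
    return hits

if __name__ == "__main__":
    page = {
        "definition": "Query fan-out means one question becomes many retrieval tasks.",
        "tradeoffs": "Fan-out constraints include latency and source reliability.",
    }
    tasks = fan_out("query fan-out")
    print(f"{len(covered_tasks(page, tasks))}/{len(tasks)} sub-tasks covered")
```

The point of the toy: a page competes task by task, so coverage of sub‑questions, not one headline match, is what accumulates selections.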
Winners cover sub‑questions cleanly
Pages that get repeatedly selected usually do a few boring things well:
- define the entity / concept unambiguously
- answer the obvious follow‑ups (constraints, tradeoffs, failure modes)
- keep claims citable (short paragraphs with verifiable wording)
- separate mechanism from advice
This is where entity clarity matters in practice. If you’re building a site that should be recognized as a coherent knowledge source (not a pile of posts), the knowledge graph hub is the right anchor.
The model: Storage vs Eligibility vs Selection vs Distribution
Most publishers optimize storage and assume everything else is downstream.
AI Mode makes the middle layers visible:
Storage (indexed) → Eligibility (allowed / fit) → Selection (chosen / cited) → Distribution (shown)
Storage (indexed)
The page exists in the system’s stored representation. This is necessary, but rarely sufficient.
Eligibility (fit for a query class)
The page is a candidate for a given sub‑query. Eligibility can fail even when the page is “about the topic”:
- wrong query class (instructional vs definitional vs comparative)
- wrong granularity (too broad, too thin)
- ambiguous entity boundaries (who/what/where)
Selection (chosen/cited for the answer)
Selection is the bottleneck. It’s where the system decides:
- which sources are reliable enough to cite
- which sources are the best match for the specific sub‑task
- which sources can be summarized without introducing contradictions
Distribution (shown to the user; clicks optional)
Even when selected, your exposure can be compressed: the system may show a citation without sending clicks, or route clicks to a different surface (more navigational, more transactional).
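If it helps to internalize the funnel, here is a toy sketch that treats each layer as a gate a page must clear. Every predicate is a placeholder, not a real search signal:

```python
# A toy funnel: each layer is a gate a page must pass before the next.
# Every predicate here is a placeholder, not a real search signal.

from dataclasses import dataclass

@dataclass
class Page:
    indexed: bool              # Storage: exists in the stored representation
    matches_query_class: bool  # Eligibility: right intent and granularity
    citable: bool              # Selection: reliable, summarizable without conflict
    surfaced: bool             # Distribution: actually shown (clicks optional)

def visibility_stage(p: Page) -> str:
    """Return the last layer the page clears."""
    if not p.indexed:
        return "not stored"
    if not p.matches_query_class:
        return "stored, not eligible"
    if not p.citable:
        return "eligible, not selected"
    if not p.surfaced:
        return "selected, not distributed"
    return "distributed"

print(visibility_stage(Page(True, True, False, False)))
# -> "eligible, not selected": indexed and on-topic, still invisible
```

Note where the example lands: indexed and on‑topic, yet invisible. That is the state most "indexed but no traffic" complaints actually describe.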
Why “indexed” pages get zero visibility (common failure modes)
If you’re indexed but not visible, you’re usually failing in eligibility or selection—not “ranking”.
Common failure modes:
- Intent mismatch at the query‑class level (the page answers a different question than the fan‑out tasks)
- Weak source reliability signals (author/entity ambiguity, inconsistent attribution)
- Thin coverage (no sub‑answers; everything is a teaser)
- Unclear entities (who/what/where is not stable across the site)
- No stable evidence trails (no references hub; no durable citations)
If you want the “selection vs storage” mindset as a single sentence: indexing is memory; selection is distribution.
What the Ahrefs CTR numbers imply (without hype)
Ahrefs has published observational analysis suggesting AI answers can compress clicks for a class of informational queries. Treat it as one dataset—not universal truth.
The useful takeaway is not “panic about CTR.” It’s a KPI shift:
- measure qualified visits, not raw clicks
- measure assisted conversions (newsletter signups, tool usage, downstream navigation)
- measure brand capture (branded queries, direct returns, mentions)
Practical playbook (do this next week)
Here’s a simple checklist that maps directly to fan‑out selection behavior.
1) Build pages that answer sub‑questions (fan‑out readiness)
- add a short definition block
- add constraints / failure modes
- add comparisons (when X vs Y is the real sub‑query)
- add edge cases and “when not to do this”
2) Add citable blocks
Aim for short paragraphs that can be quoted without rewriting (a rough lint sketch follows this list):
- one claim per paragraph
- minimal hedging ("might", "could", "maybe" only where the uncertainty is real)
- avoid vague nouns ("this", "it", "things")
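A minimal sketch of that lint. The word lists are my own heuristics, not an established standard:

```python
import re

# Heuristic word lists: my assumptions, not an established standard.
HEDGES = {"might", "could", "maybe", "perhaps", "possibly"}
VAGUE_OPENERS = {"this", "it", "things", "that"}

def lint_paragraph(text: str) -> list[str]:
    """Flag paragraphs that are hard to quote verbatim."""
    issues = []
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(sentences) > 2:
        issues.append(f"{len(sentences)} sentences: likely more than one claim")
    hedge_hits = [w for w in words if w in HEDGES]
    if len(hedge_hits) > 1:
        issues.append(f"stacked hedges: {hedge_hits}")
    first = words[0] if words else ""
    if first in VAGUE_OPENERS:
        issues.append(f"vague opener: '{first}'")
    return issues

print(lint_paragraph("This might possibly help. It could work. Maybe try it."))
```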
3) Strengthen entity consistency
- same author line across posts
- same one-liner bio
- consistent naming (Mikhail Drozdov / Casinokrisa)
- stable hubs that consolidate identity and evidence (knowledge graph hub); a minimal markup sketch follows this list
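One way to keep the author line and bio byte‑identical everywhere is to generate them from a single machine‑readable source. A minimal sketch using schema.org Person JSON‑LD; the URLs are placeholders, not the real hub addresses:

```python
import json

# Single source of truth for the author entity.
# URLs below are placeholders; point them at your real hubs.
AUTHOR = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Mikhail Drozdov",
    "alternateName": "Casinokrisa",
    "description": "AI Search & Indexing Systems Researcher focused on "
                   "indexing-first visibility models.",
    "url": "https://example.com/person",                    # placeholder
    "sameAs": ["https://example.com/knowledge-graph-hub"],  # placeholder
}

def author_jsonld() -> str:
    """Emit the same JSON-LD block for every post, byte-for-byte."""
    return ('<script type="application/ld+json">'
            + json.dumps(AUTHOR, ensure_ascii=False)
            + "</script>")

print(author_jsonld())
```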
4) Add capture paths
AI Mode reduces predictable click flows, so you need deliberate “next steps”.
If you want an owned capture channel, the cleanest one is a newsletter.
5) Measure a “selection proxy”
You can’t see the selection model directly, but you can track proxies (a rough Search Console sketch follows this list):
- mentions and citations across the web
- growth in branded queries
- repeat visits to hubs (start, topics, research)
- Search Console patterns where impressions exist without stable clicks
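A minimal sketch of that last proxy over a Search Console performance export. The column names ("Top queries", "Impressions", "Clicks") and the threshold are assumptions to verify against your own file:

```python
import csv

# Sketch: find queries with impressions but zero clicks in a Search
# Console performance export. Column names and the threshold are
# assumptions; check them against your actual CSV.

def selection_proxy(path: str, min_impressions: int = 100) -> list[dict]:
    flagged = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            impressions = int(row["Impressions"].replace(",", ""))
            clicks = int(row["Clicks"].replace(",", ""))
            if impressions >= min_impressions and clicks == 0:
                flagged.append({"query": row["Top queries"],
                                "impressions": impressions})
    return sorted(flagged, key=lambda r: -r["impressions"])

# Usage (hypothetical file name):
# for row in selection_proxy("Queries.csv"):
#     print(row["query"], row["impressions"])
```

Queries that pile up impressions without clicks are exactly where the system is answering around you: eligible, perhaps even selected, but not sending traffic.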
If AI answers are the new homepage, become a reliable cited source
Indexing gets you stored. Visibility gets you distributed.
In AI Mode, distribution is mediated by selection across multiple retrieval tasks—so build for eligibility + selection, not just storage + ranking.
About the author
Mikhail Drozdov (aka Casinokrisa) — AI Search & Indexing Systems Researcher focused on indexing-first visibility models.
See: Person page · Knowledge graph hub · Press & evidence
References
- AI Mode and AI Overviews updates (Google)
- Google Search AI Mode update (Google)
- Ahrefs: AI Overviews reduce clicks (update)
Next
- Evidence hub: Press & evidence
- Entity context: Mikhail Drozdov
Next in SEO & Search
Up next: Indexed but not ranking (2026): why being stored is not being shown
“Indexed but not ranking” is usually not a technical SEO bug. It’s a selection problem: the system can store your page, but it isn’t confident that showing it is a low-regret outcome. That essay explains the mechanism and the signals that create visibility.