AI referral traffic grew 527% between January and May 2025. ChatGPT referrals convert at 15.9%, nine times Google organic's 1.76%. And 76.1% of AI Overview citations come from pages already ranking in Google's top 10.
Those three numbers tell you everything you need to know about Generative Engine Optimisation. GEO is real. It converts better. And it starts with SEO you've already done.
The question isn't whether to do GEO or SEO. It's how to do both without wasting time on the wrong one. We covered the broader shift in AI Overviews and their impact on SEO. This post gets specific about what to prioritise.
GEO is real, but deeply misunderstood
Generative Engine Optimisation means getting your content cited and recommended in AI-generated answers from ChatGPT, Perplexity, Google AI Overviews, Gemini, and Claude.
That framing launched an industry of GEO consultants, GEO tools, and GEO courses. Most of them are selling something that doesn't match how these systems actually work. The underlying content requirements barely changed. What changed is the delivery format. Google gives you a ranked list. AI gives you an answer with citations. Your job shifted from "be on the list" to "be in the answer." But the way you get there is the same: rank in search.
The growth numbers demand attention
AI search currently drives a small share of overall web traffic. But three things make it impossible to ignore.
The trajectory is vertical. AI-referred sessions jumped 527% between January and May 2025 according to Previsible's AI Traffic Report. Small base, enormous acceleration.
SEO isn't dead. It's not even close to dead. Here's what actually happened: Google added an AI layer on top of its existing search results, and ChatGPT, Perplexity, and Gemini all run queries against Google (a process called query fan-out), pull the top-ranking pages, and synthesise answers from them.
More than half of all Google searches in the US now end without anyone clicking a single result. According to SparkToro and Datos, 58.5% of US Google searches in 2024 were zero-click. For every 1,000 searches, only 360 clicks went to the open web.
That number is only going to grow as AI answers absorb more queries, which makes the clicks that do arrive worth more.
The traffic that arrives is better. ChatGPT referral traffic converts at 15.9%, Perplexity at 10.5%, compared to Google organic's 1.76%. Someone who asked an AI assistant and then clicked through has already been pre-qualified by the conversation itself.
[Chart: AI referral traffic converts at dramatically higher rates than Google organic. 527% AI referral growth (Jan-May 2025); 9x ChatGPT vs Google organic conversion. Source: SE Ranking, 2025 | ooty.io]
How AI search actually works (and why most GEO advice is wrong)
This is the part the GEO industry doesn't want to explain, because it makes their services harder to sell.
When you ask ChatGPT or Perplexity a question, the model doesn't search its own index. It doesn't have one. Instead, it does something called Query Fan-Out (QFO): it generates search queries, sends them to a traditional search engine (usually Google), gets back results, then synthesizes an answer from those results.
This isn't speculation. Analysis of Claude's search architecture confirmed the mechanism: the model makes a nested API call to a server-side search provider, receives a maximum of 8 results, and works only with those. The search provider is Google in most cases. ChatGPT uses Google via SerpAPI. Perplexity, Gemini, and Grok also primarily use Google results.
The implication is enormous: if you don't rank in the search results the LLM retrieves, you don't get cited. Period. URLs are never fabricated. The model can only reference pages that appeared in its search results. GEO without SEO is building on sand.
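The fan-out loop can be sketched in a few lines. This is a simplification under stated assumptions: `generate_subqueries`, `search`, and `synthesize` are hypothetical stand-ins for the model's internal steps, and the 8-result cap mirrors the limit described above.

```python
# Sketch of Query Fan-Out (QFO). All three callables are hypothetical
# stand-ins for what the model and its server-side search provider do.

MAX_RESULTS = 8  # the model only ever sees this many results per query

def query_fan_out(user_question, generate_subqueries, search, synthesize):
    # 1. The model rewrites the question into several search queries.
    subqueries = generate_subqueries(user_question)
    # 2. Each subquery goes to a traditional search engine (usually Google),
    #    and only the top results come back.
    retrieved = []
    for q in subqueries:
        retrieved.extend(search(q)[:MAX_RESULTS])
    # 3. The answer is synthesized ONLY from retrieved pages. A URL that
    #    never appeared in these results cannot be cited.
    return synthesize(user_question, retrieved)
```

The whole GEO argument lives in step 3: the synthesis step can only cite what the search step returned.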
Here's what that architecture reveals about what actually matters:
The first 900 characters of your page get full attention. After that, content is progressively compressed. Your opening paragraph is doing the heavy lifting for both humans and AI.
Direct quotes are capped at 125 characters. If you want to be quoted, your key claims need to be self-contained sentences under 125 characters.
HTML is stripped to plain text. No schema, no headings, no emphasis survives the extraction pipeline. What matters is the raw text content in your first few paragraphs.
Only 8 results per search. Top positions dominate. Position 9 might as well not exist.
This is why the 76.1% overlap stat makes perfect sense. AI cites what Google ranks because AI literally searches Google.
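Those extraction constraints are easy to audit for. Here is a minimal sketch, using only the standard library, that strips a page to plain text the way an extraction pipeline would, then checks the 900-character window and the 125-character quote cap. The thresholds are taken from the figures above; everything else is illustrative.

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Strips markup the way an extraction pipeline would: only raw text survives."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

    def text(self):
        return re.sub(r"\s+", " ", " ".join(self.parts)).strip()

def audit_page(html, attention_window=900, quote_cap=125):
    parser = TextExtractor()
    parser.feed(html)
    text = parser.text()
    sentences = re.split(r"(?<=[.!?])\s+", text)
    quotable = [s for s in sentences if len(s) <= quote_cap]
    return {
        "opening": text[:attention_window],   # the part that gets full attention
        "quotable_ratio": len(quotable) / max(len(sentences), 1),
        "too_long": [s for s in sentences if len(s) > quote_cap],
    }
```

If your core claim isn't inside `opening`, or most sentences land in `too_long`, the page is hard to quote no matter how well it ranks.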
Where SEO and GEO actually differ
What SEO optimises for
Relevance. Does the page match the query's keywords and intent?
Authority. How many credible sites link to this page?
Technical quality. Fast loading, mobile-friendly, crawlable.
User signals. Do visitors stay and engage, or bounce?
The output is a ranked list of URLs. Users choose which to click.
What GEO optimises for
Factual density. Specific, verifiable claims with cited sources.
Structural clarity. Organised so specific answers are easy to extract.
The output isn't a position. It's inclusion or exclusion from the AI's answer.
The citation crunch is brutal
LLMs cite 2 to 7 domains on average per response. A Google results page shows 10. If you're not in the 2 to 7, you get zero mentions, regardless of your Google ranking.
The original GEO research found that 32.5% of AI citations come from comparison articles. Original research, data-driven content, and clear expert positions get cited more than generic informational pages. Fence-sitting doesn't get quoted.
[Comparison graphic: What Each Discipline Optimises For. SEO ranks pages in a list; GEO gets content cited inside AI answers. SEO ranking factors: keyword relevance, backlink authority, technical quality, user engagement, content depth. Output: a ranked list of URLs; users choose which to click. GEO citation factors: factual density and citations, structural clarity, E-E-A-T signals, content freshness, citation-worthiness. Output: inclusion (or exclusion) from the AI's answer, with no position to track. Source: Aggarwal et al., Princeton/Georgia Tech, 2023 | ooty.io]
The research is clear on what gets cited
The Princeton/Georgia Tech GEO paper tested specific optimisation strategies and measured which ones increased the probability of citation. No guesswork here.
Statistics and data win. Content with specific numbers and research findings gets cited significantly more often than content making the same points without data. This is the single highest-impact change you can make.
Cite your sources and AI will cite you. Content that attributes its own sources is more likely to be cited by AI systems. AI prefers content that demonstrates it draws on established knowledge, not content that just asserts things.
Good writing matters more than you'd think. Well-structured, grammatically correct prose performs better than dense or awkward text. AI extracts content more cleanly from clear sentences.
Named experts carry weight. Quoting named experts with relevant credentials increases citation rates. Anonymous claims get ignored.
The combination of good writing and statistics outperformed any single strategy. Numbers without clarity, or clarity without numbers, both underperform the pair.
What the Research Says Works
Strategies tested against AI systems, ranked by measured visibility improvement:
Statistics and data addition (+67%): content with specific numbers and research findings.
Fluency and statistics combined (+5.5% further): outperforms any single strategy alone.
Citation and source attribution (high impact): content that cites sources gets cited by AI.
Expert quotes (medium impact): named experts with credentials increase citation rates.
Schema markup (+30-40%): structured data improves AI-generated answer visibility.
If AI systems primarily cite content that already ranks well, then the best GEO strategy is great SEO with a format upgrade.
The Critical Overlap
AI systems primarily cite content that already ranks well on Google: 76.1% of AI Overview citations also rank in Google's top 10.
The takeaway: the best GEO strategy starts with excellent SEO. Ranking well on Google is still the strongest predictor of being cited by AI systems.
The nuance: AI cites only 2-7 domains per response (vs 10 blue links on Google). Getting ranked isn't enough; your content must be structured for AI extraction too.
Source: Marketing LTB, 2025 | ooty.io
Once you understand the QFO mechanism, this overlap is obvious. AI cites what Google ranks because AI literally searches Google. The shared foundations:
E-E-A-T signals matter for both
Technical quality (fast, crawlable, well-structured) helps in both contexts
Thin content gets neither ranked nor cited
Backlinks help SEO rankings, and high-ranking pages get cited more
Schema markup helps Google rankings, and Google rankings drive AI citations
Where they diverge:
SEO rewards comprehensive content. GEO rewards direct, extractable answers within that content.
SEO cares about keyword placement. GEO cares about whether your first 900 characters answer the query in quotable sentences.
GEO rewards frequent updates more aggressively. LLMs prefer recent content in their search results.
What GEO demands that SEO doesn't
Lead with the answer, then explain
AI systems extract information more easily when content leads with a direct statement. Start sections with the conclusion. Then provide context. This is the opposite of how most blog posts are written, and it's the single most important structural change for GEO.
Pack every claim with evidence
Every significant claim should have a number from a named source. This takes more research time per post, but it's the single most direct lever for GEO performance. Vague authority claims ("experts agree") get passed over. Specific citations ("SE Ranking found 15.9% conversion") get quoted.
Make your brand unmistakable to machines
AI systems understand entities: organisations, products, people, places. Content that clearly answers "what is [your brand]?", "what does it do?", "who is it for?" in structured language is easier for AI to include in responses. This is entity SEO, and it matters more in AI search than it ever did in traditional search.
Update relentlessly
Mark content with clear publication and update dates. Refresh statistics. Add new examples. A two-year-old article with current stats is more citation-worthy than a two-month-old article with outdated ones.
Schema markup helps, but not the way you think
Content with proper schema markup shows 30 to 40% higher visibility in AI-generated answers. But here's the nuance: AI models strip HTML to plain text during extraction. Schema doesn't survive the pipeline directly. The reason it helps is indirect. Schema improves your Google rankings (rich results, better click-through rates, clearer entity signals), and better Google rankings mean you appear in the search results that LLMs consume. Key types worth implementing: Article with dates, FAQPage for Q&A content, HowTo for instructions, Product and Review for commercial pages.
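For reference, an Article schema block is small. The sketch below emits minimal JSON-LD; the property names (`headline`, `datePublished`, `dateModified`, `author`) come from schema.org's Article type, while all the values are placeholders you'd replace with your own.

```python
import json

# Minimal Article schema (schema.org JSON-LD). All values are placeholders.
# The printed <script> block belongs in the page's <head>.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "GEO vs SEO: what to prioritise",
    "datePublished": "2025-06-01",
    "dateModified": "2026-01-15",  # keep this current: freshness matters
    "author": {"@type": "Person", "name": "Jane Author"},  # named expert
}

print('<script type="application/ld+json">%s</script>' % json.dumps(article_schema))
```

The `dateModified` field is the one to keep honest: it is the machine-readable version of "update relentlessly".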
How to measure GEO (honestly, the tooling isn't there yet)
Unlike SEO, where you check your position, GEO performance is genuinely hard to track. AI responses vary by user, conversation context, model version, and even time of day. There's no Search Console equivalent for AI search, and there may never be one that works the same way.
The paid tools are immature. Several platforms (Semrush, Ahrefs, and a wave of startups) now offer "AI visibility tracking." The professional SEO community's verdict is harsh: inconsistent data, high false attribution, and pricing that doesn't match the value delivered. These tools are improving, but as of early 2026, most experienced practitioners don't trust them for decision-making.
Manual prompt testing is still the most reliable method. Run your target queries in ChatGPT, Perplexity, and Claude regularly. Note whether you're cited and which pages get referenced. Free, time-consuming, but it gives you ground truth that no automated tool currently matches. Track which of your pages appear, and cross-reference with their Google rankings for those same queries. You'll see the QFO pattern in action.
Referral traffic in GA4 is your best quantitative signal. Track sessions from AI platforms. These often show as direct traffic, so configure UTM parameters where possible. Ooty Analytics makes querying this data across platforms easier. The conversion rate difference between AI referrals and organic search is significant enough that even small volumes matter.
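Classifying AI referrals is mostly a hostname lookup. A minimal sketch, assuming the hostname list below (check it against the referrers you actually see in your reports, since these change):

```python
from urllib.parse import urlparse

# Hostname-to-platform map. These are assumptions based on where the
# products live today; verify against your own referral reports.
AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
    "gemini.google.com": "Gemini",
    "claude.ai": "Claude",
}

def classify_referrer(referrer_url):
    """Map a referrer URL to an AI platform name, or 'other'."""
    host = urlparse(referrer_url).hostname or ""
    return AI_REFERRERS.get(host, "other")
```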
Track the QFO, not the citation. The most actionable approach: identify the search queries that LLMs generate when users ask about your topic, then track your Google rankings for those queries. If you rank in the top 8 results for the QFO queries, you'll appear in AI answers. This reframes GEO measurement as an extension of rank tracking, which is something we already know how to do well.
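That reframing fits in a few lines of code. In this sketch, `get_rankings` is a stand-in for whatever rank-tracking data source you already use; the top-8 window mirrors the result cap discussed earlier.

```python
# Sketch: GEO measurement as rank tracking over the queries an LLM would
# fan out to. get_rankings is a stand-in for your rank tracker's API.

TOP_N = 8  # roughly how many results an LLM sees per query

def geo_visibility(domain, qfo_queries, get_rankings):
    """For each fan-out query, is `domain` inside the top-N window?"""
    report = {}
    for query in qfo_queries:
        ranked_domains = get_rankings(query)  # ordered list of domains
        position = next(
            (i + 1 for i, d in enumerate(ranked_domains) if d == domain),
            None,
        )
        report[query] = position is not None and position <= TOP_N
    return report
```

Queries that come back `False` are the ones where no amount of extraction-friendly formatting will help; you have to rank first.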
There's a growing industry selling GEO as a separate discipline with its own tools, consultants, and certification courses. Some practitioners in the SEO community have been vocal about this being disinformation, pointing out that LLMs are synthesis engines that outsource their search to Google, not independent search engines with their own indexes.
They're mostly right. But "mostly" matters here.
The core mechanism is search. Rank in Google, get cited by AI. That part is settled. But the format requirements are genuinely different. A page that ranks well in Google but buries its key insight in paragraph six, behind a 200-word introduction, will get outperformed in AI citations by a page that leads with the answer in a quotable sentence. The first 900 characters matter more than they ever did. Self-contained facts under 125 characters get quoted. Front-loaded answers get extracted. That's not traditional SEO. That's a format layer on top of it.
Don't rebuild your strategy around GEO. Don't hire a GEO consultant. Don't buy a GEO tool. Layer these format changes into the content process you already have.
For new content: Lead each section with a direct answer in a single quotable sentence. Research and include specific statistics. Add schema markup. Build links through your normal process. This is just better content that also happens to be AI-extractable.
For existing content: Start with your highest-traffic pages (they already have the authority that gets cited). Add a statistics pass. Restructure introductions to lead with answers. Update dates and stale information. Make sure the first 900 characters answer the core question.
GEO visibility can appear within 2 to 4 weeks, compared to SEO's typical 3 to 6 months. The tradeoff: GEO results are harder to measure and less stable. Model updates can change citation patterns overnight.
The brands that will win in AI search are the ones that already built deep, authoritative, well-cited content for traditional search, and then made it easy for machines to extract. That's not two strategies. It's one strategy with a format pass on top. Anyone selling it as more than that is selling you something.