{"id":1927,"date":"2026-01-22T06:17:59","date_gmt":"2026-01-21T22:17:59","guid":{"rendered":"http:\/\/longzhuplatform.com\/?p=1927"},"modified":"2026-01-22T06:17:59","modified_gmt":"2026-01-21T22:17:59","slug":"why-llm-only-pages-arent-the-answer-to-ai-search","status":"publish","type":"post","link":"http:\/\/longzhuplatform.com\/?p=1927","title":{"rendered":"Why LLM-only pages aren\u2019t the answer to AI search"},"content":{"rendered":"<p><\/p> <div> <p>With new updates in the search world stacking up in 2026, content teams are trying a new strategy to rank: LLM pages.<\/p> <p>They\u2019re building pages that no human will ever see: markdown files, stripped-down JSON feeds, and entire \/ai\/ versions of their articles.<\/p> <p>The logic seems sound: if you make content easier for AI to parse, you\u2019ll get more citations in ChatGPT, Perplexity, and Google\u2019s AI Overviews.<\/p> <p>Strip out the ads. Remove the navigation. Serve bots pure, clean text.<\/p> <p>Industry experts such as Malte Landwehr have documented sites creating .md copies of every article or adding llms.txt files to guide AI crawlers.<\/p> <p>Teams are even building entire shadow versions of their content libraries.<\/p> <p>Google\u2019s John Mueller isn\u2019t buying it.<\/p> <ul class=\"wp-block-list\"> <li>\u201cLLMs have trained on \u2013 read and parsed \u2013 normal web pages since the beginning,\u201d he said in a recent discussion on Bluesky. 
\u201cWhy would they want to see a page that no user sees?\u201d<\/li> <\/ul> <div class=\"wp-block-image\"> <figure class=\"aligncenter size-full\"><img fetchpriority=\"high\" decoding=\"async\" width=\"634\" height=\"511\" src=\"https:\/\/searchengineland.com\/wp-content\/seloads\/2026\/01\/JohnMu-Lily-Ray-on-BlueSky.png\" alt=\"JohnMu, Lily Ray on BlueSky\" class=\"wp-image-467700\" \/><\/figure> <\/div> <p>His comparison was blunt: LLM-only pages are like the old keywords meta tag. Available for anyone to use, but ignored by the systems they\u2019re meant to influence.<\/p> <p>So is this trend actually working, or is it just the latest SEO myth?<\/p> <h2 id=\"the-rise-of-llmonly-web-pages\" class=\"wp-block-heading\">The rise of \u2018LLM-only\u2019 web pages<\/h2> <p>The trend is real. 
Sites across tech, SaaS, and documentation are implementing LLM-specific content formats.<\/p> <p>The question isn\u2019t whether adoption is happening; it\u2019s whether these implementations are driving the AI citations teams hoped for.<\/p> <p>Here\u2019s what content and SEO teams are actually building.<\/p> <h3 class=\"wp-block-heading\" id=\"h-llms-txt-files\">llms.txt files<\/h3> <p>A markdown file at your domain root listing key pages for AI systems.<\/p> <p>The format was proposed in September 2024 by Jeremy Howard, co-founder of Answer.AI, to help AI systems discover and prioritize important content.\u00a0<\/p> <p>Plain text lives at yourdomain.com\/llms.txt with an H1 project name, brief description, and organized sections linking to important pages.<\/p> <p>Stripe\u2019s implementation at docs.stripe.com\/llms.txt shows the approach in action:<\/p> <pre class=\"wp-block-code\"><code># Stripe Documentation\n&gt; Build payment integrations with Stripe APIs\n\n## Testing\n- [Test mode]( Simulate payments\n\n## API Reference\n- [API docs]( Complete API reference<\/code><\/pre> <p>The payment processor\u2019s bet is simple: if ChatGPT can parse their documentation cleanly, developers will get better answers when they ask, \u201cHow do I implement Stripe?\u201d<\/p> <p>They\u2019re not alone. Current adopters include Cloudflare, Anthropic, Zapier, Perplexity, Coinbase, Supabase, and Vercel.<\/p> <h3 class=\"wp-block-heading\" id=\"h-markdown-md-page-copies\">Markdown (.md) page copies<\/h3> <p>Sites are creating stripped-down markdown versions of their regular pages.<\/p> <p>The implementation is straightforward: just add .md to any URL. Stripe\u2019s <code>docs.stripe.com\/testing<\/code> becomes <code>docs.stripe.com\/testing.md<\/code>.<\/p> <p>Everything gets stripped out except the actual content. No styling. No menus. No footers. No interactive elements. 
Just pure text and basic formatting.<\/p> <p>The thinking: if AI systems don\u2019t have to wade through CSS and JavaScript to find the information they need, they\u2019re more likely to cite your page accurately.<\/p> <h3 class=\"wp-block-heading\" id=\"h-ai-and-similar-paths\">\/ai and similar paths<\/h3> <p>Some sites are building entirely separate versions of their content under <code>\/ai\/<\/code>, <code>\/llm\/<\/code>, or similar directories.<\/p> <p>You might find <code>\/ai\/about<\/code> living alongside the regular <code>\/about<\/code> page, or <code>\/llm\/products<\/code> as a bot-friendly alternative to the main product catalog.\u00a0<\/p> <p>Sometimes these pages have more detail than the originals. Sometimes they\u2019re just reformatted.<\/p> <p>The idea: give AI systems their own dedicated content that\u2019s built for machine consumption, not human eyes.\u00a0<\/p> <p>If a person accidentally lands on one of these pages, they\u2019ll find something that looks like a website from 2005.<\/p> <h3 class=\"wp-block-heading\" id=\"h-json-metadata-files\">JSON metadata files<\/h3> <p>Dell took this approach with their product specs.<\/p> <p>Instead of creating separate pages, they built structured data feeds that live alongside their regular ecommerce site.<\/p> <p>The files contain clean JSON \u2013 specs, pricing, and availability.<\/p> <p>Everything an AI needs to answer \u201cWhat\u2019s the best Dell laptop under $1,000?\u201d without having to parse through product descriptions written for humans.<\/p> <p>You\u2019ll typically find these files as <code>\/llm-metadata.json<\/code> or <code>\/ai-feed.json<\/code> in the site\u2019s directory. An llms.txt-style index like Dell\u2019s points AI systems to them:<\/p> <pre class=\"wp-block-code\"><code># Dell Technologies\n&gt; Dell Technologies is a leading technology provider, specializing in PCs, servers, and IT solutions for businesses and consumers.\n\n## Product and Catalog Data\n- [Product Feed - US Store]( Key product attributes and availability.\n- [Dell Return Policy]( Standard return and warranty information.\n\n## Support and Documentation\n- [Knowledge Base]( Troubleshooting guides and FAQs.<\/code><\/pre> <p>This approach makes the most sense for ecommerce and SaaS companies that already keep their product data in databases.\u00a0<\/p> <p>They\u2019re just exposing what they already have in a format AI systems can easily digest.<\/p> <p><strong><em>Dig deeper: LLM optimization in 2026: Tracking, visibility, and what\u2019s next for AI discovery<\/em><\/strong><\/p> <h2 id=\"realworld-citation-data-what-actually-gets-referenced\" class=\"wp-block-heading\">Real-world citation data: What actually gets referenced<\/h2> <p>The theory sounds good. The adoption numbers look impressive.\u00a0<\/p> <p>But do these LLM-optimized pages actually get cited?<\/p> <h3 class=\"wp-block-heading\" id=\"h-the-individual-analysis\">The individual analysis<\/h3> <p>Landwehr, CPO and CMO at Peec AI, ran targeted tests on five websites using these tactics. 
He crafted prompts specifically designed to surface their LLM-friendly content.<\/p> <p>Some queries even contained explicit 20+ word quotes designed to trigger specific sources.<\/p> <div class=\"wp-block-image\"> <figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"384\" src=\"https:\/\/searchengineland.com\/wp-content\/seloads\/2026\/01\/Landwehr-LLM-experiment-1.png\" alt=\"Landwehr - LLM experiment 1\" class=\"wp-image-467701\" srcset=\"https:\/\/searchengineland.com\/wp-content\/seloads\/2026\/01\/Landwehr-LLM-experiment-1.png 800w, https:\/\/searchengineland.com\/wp-content\/seloads\/2026\/01\/Landwehr-LLM-experiment-1-768x369.png 768w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><\/figure> <\/div> <p>Across nearly 18,000 citations, here\u2019s what he found.<\/p> <p><strong>llms.txt: 0.03% of citations<\/strong><\/p> <p>Out of 18,000 citations, only six pointed to llms.txt files.\u00a0<\/p> <p>The six that did work had something in common: they contained genuinely useful information about how to use an API and where to find additional documentation.\u00a0<\/p> <p>The kind of content that actually helps AI systems answer technical questions. 
The \u201csearch-optimized\u201d llms.txt files, the ones stuffed with content and keywords, received zero citations.<\/p> <p><strong>Markdown (.md) pages: 0% of citations<\/strong><\/p> <p>Sites using .md copies of their content got cited 3,500+ times. None of those citations pointed to the markdown versions.\u00a0<\/p> <p>The one exception: GitHub, where .md files are the standard URLs.\u00a0<\/p> <p>They\u2019re linked internally, and there\u2019s no HTML alternative. But these are just regular pages that happen to be in markdown format.<\/p> <p><strong>\/ai pages: 0.5% to 16% of citations<\/strong><\/p> <p>Results varied wildly depending on implementation.\u00a0<\/p> <p>One site saw 0.5% of its citations point to its \/ai pages. Another hit 16%.\u00a0<\/p> <p>The difference?\u00a0<\/p> <p>The higher-performing site put significantly more information in its \/ai pages than existed anywhere else on its site.\u00a0<\/p> <p>Keep in mind, these prompts were specifically asking for information contained in these files.\u00a0<\/p> <p>Even with prompts designed to surface this content, most queries ignored the \/ai versions.<\/p> <p><strong>JSON metadata: 5% of citations<\/strong><\/p> <p>One brand saw 85 out of 1,800 citations (5%) come from their metadata JSON file.\u00a0<\/p> <p>The critical detail here is that the file contained information that didn\u2019t exist anywhere else on the website.\u00a0<\/p> <p>Once again, the query specifically asked for those pieces of information.<\/p> <div class=\"wp-block-image\"> <figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"410\" src=\"https:\/\/searchengineland.com\/wp-content\/seloads\/2026\/01\/Landwehr-LLM-experiment-2.png\" alt=\"Landwehr - LLM experiment 2\" class=\"wp-image-467703\" srcset=\"https:\/\/searchengineland.com\/wp-content\/seloads\/2026\/01\/Landwehr-LLM-experiment-2.png 800w, https:\/\/searchengineland.com\/wp-content\/seloads\/2026\/01\/Landwehr-LLM-experiment-2-768x394.png 768w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><\/figure> <\/div> <h3 class=\"wp-block-heading\" id=\"h-the-large-scale-analysis\">The large-scale analysis<\/h3> <p>SE Ranking took a different approach.\u00a0<\/p> <p>Instead of testing individual sites, they analyzed 300,000 domains to see if llms.txt adoption correlated with citation frequency at scale.<\/p> <p>Only 10.13% of domains, or about 1 in 10, had implemented llms.txt.\u00a0<\/p> <p>For context, that\u2019s nowhere near the universal adoption of standards like robots.txt or XML sitemaps.<\/p> <p>During the study, an interesting relationship between adoption rates and traffic levels emerged.<\/p> <p>Sites with 0-100 monthly visits adopted llms.txt at 9.88%.\u00a0<\/p> <p>Sites with 100,001+ visits? Just 8.27%.\u00a0<\/p> <p>The biggest, most established sites were actually slightly less likely to use the file than the smallest ones.<\/p> <p>But the real test was whether llms.txt impacted citations.\u00a0<\/p> <p>SE Ranking built a machine learning model using XGBoost to predict citation frequency based on various factors, including the presence of llms.txt.<\/p> <p>The result: removing llms.txt from the model actually improved its accuracy.\u00a0<\/p> <p>The file wasn\u2019t helping predict citation behavior; it was adding noise.<\/p> <h2 id=\"the-pattern\" class=\"wp-block-heading\">The pattern<\/h2> <p>Both analyses point to the same conclusion: LLM-optimized pages get cited when they contain unique, useful information that doesn\u2019t exist elsewhere on your site.<\/p> <p>The format doesn\u2019t matter.\u00a0<\/p> <p>Landwehr\u2019s conclusion was blunt: \u201cYou could create a 12345.txt file and it would be cited if it contains useful and unique information.\u201d<\/p> <p>A well-structured about page achieves the same result as an \/ai\/about page. 
API documentation gets cited whether it\u2019s in llms.txt or buried in your regular docs.<\/p> <p>The files themselves get no special treatment from AI systems.\u00a0<\/p> <p>The content inside them might, but only if it\u2019s actually better than what already exists on your regular pages.<\/p> <p>SE Ranking\u2019s data backs this up at scale. There\u2019s no correlation between having llms.txt and getting more citations.\u00a0<\/p> <p>The presence of the file made no measurable difference in how AI systems referenced domains.<\/p> <p><strong><em>Dig deeper: 7 hard truths about measuring AI visibility and GEO performance<\/em><\/strong><\/p> <h2 id=\"what-google-and-ai-platforms-actually-say\" class=\"wp-block-heading\">What Google and AI platforms actually say<\/h2> <p>No major AI company has confirmed using llms.txt files in their crawling or citation processes.<\/p> <p>Google\u2019s Mueller made the sharpest critique in April 2025, comparing llms.txt to the obsolete keywords meta tag:\u00a0<\/p> <ul class=\"wp-block-list\"> <li>\u201c[As far as I know], none of the AI services have said they\u2019re using LLMs.TXT (and you can tell when you look at your server logs that they don\u2019t even check for it).\u201d<\/li> <\/ul> <p>Google\u2019s Gary Illyes reinforced this at the July 2025 Search Central Deep Dive in Bangkok, explicitly stating Google \u201cdoesn\u2019t support LLMs.txt and isn\u2019t planning to.\u201d<\/p> <p>Google Search Central\u2019s documentation is equally clear:\u00a0<\/p> <ul class=\"wp-block-list\"> <li>\u201cThe best practices for SEO remain relevant for AI features in Google Search. 
There are no additional requirements to appear in AI Overviews or AI Mode, nor other special optimizations necessary.\u201d<\/li> <\/ul> <p>OpenAI, Anthropic, and Perplexity all maintain their own llms.txt files for their API documentation to make it easy for developers to load into AI assistants.\u00a0<\/p> <p>But none have announced their crawlers actually read these files from other websites.<\/p> <p>The consistent message from every major platform: standard web publishing practices drive visibility in AI search.\u00a0<\/p> <p>No special files, no new markup, and no separate versions needed.<\/p> <h2 id=\"what-this-means-for-seo-teams\" class=\"wp-block-heading\">What this means for SEO teams<\/h2> <p>The evidence points to a single conclusion: stop building content that only machines will see.<\/p> <p>Mueller\u2019s question cuts to the core issue:\u00a0<\/p> <ul class=\"wp-block-list\"> <li>\u201cWhy would they want to see a page that no user sees?\u201d\u00a0<\/li> <\/ul> <p>If AI companies needed special formats to generate better responses, they would tell you.\u00a0As he noted:<\/p> <ul class=\"wp-block-list\"> <li>\u201cAI companies aren\u2019t really known for being shy.\u201d\u00a0<\/li> <\/ul> <p>The data proves him right.\u00a0<\/p> <p>Across Landwehr\u2019s nearly 18,000 citations, LLM-optimized formats showed no advantage unless they contained unique information that didn\u2019t exist anywhere else on the site.\u00a0<\/p> <p>SE Ranking\u2019s analysis of 300,000 domains found that llms.txt actually added confusion to their citation prediction model rather than improving it.<\/p> <p>Instead of creating shadow versions of your content, focus on what actually works.<\/p> <p>Build clean HTML that both humans and AI can parse easily.\u00a0<\/p> <p>Reduce JavaScript dependencies for critical content, which Mueller identified as the real technical barrier:\u00a0<\/p> <ul class=\"wp-block-list\"> <li>\u201cExcluding JS, which still seems hard for many of 
these systems.\u201d\u00a0<\/li> <\/ul> <p>Heavy client-side rendering creates actual problems for AI parsing.<\/p> <p>Use structured data when platforms have published official specifications, such as OpenAI\u2019s ecommerce product feeds.\u00a0<\/p> <p>Improve your information architecture so key content is discoverable and well-organized.<\/p> <p>The best page for AI citation is the same page that works for users: well-structured, clearly written, and technically sound.\u00a0<\/p> <p>Until AI companies publish formal requirements stating otherwise, that\u2019s where your optimization energy belongs.<\/p> <p><strong><em>Dig deeper: GEO myths: This article may contain lies<\/em><\/strong><\/p> <\/div> <p> <em>Contributing authors are invited to create content for Search Engine Land and are chosen for their expertise and contribution to the search community. Our contributors work under the oversight of the editorial staff and contributions are checked for quality and relevance to our readers. Search Engine Land is owned by Semrush. Contributor was not asked to make any direct or indirect mentions of Semrush. The opinions they express are their own.<\/em> <\/p> ","protected":false},"excerpt":{"rendered":"<p>With new updates in the search world stacking up in 2026, content teams are trying a new strategy to rank: LLM pages. They\u2019re building pages that no human will ever see: markdown files, stripped-down JSON feeds, and entire \/ai\/ versions of their articles. 
The logic seems sound: if you make content easier for AI to [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1928,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[18],"tags":[4571,4570,4568,155,4569,95],"class_list":["post-1927","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-careers","tag-answer","tag-arent","tag-llmonly","tag-opinion","tag-pages","tag-search"],"acf":[],"_links":{"self":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/1927","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1927"}],"version-history":[{"count":0,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/1927\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/media\/1928"}],"wp:attachment":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1927"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1927"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1927"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}