{"id":3706,"date":"2026-02-18T04:20:09","date_gmt":"2026-02-17T20:20:09","guid":{"rendered":"http:\/\/longzhuplatform.com\/?p=3706"},"modified":"2026-02-18T04:20:09","modified_gmt":"2026-02-17T20:20:09","slug":"rand-fishkin-proved-ai-recommendations-are-inconsistent-heres-why-and-how-to-fix-it","status":"publish","type":"post","link":"http:\/\/longzhuplatform.com\/?p=3706","title":{"rendered":"Rand Fishkin proved AI recommendations are inconsistent \u2013 here\u2019s why and how to fix it"},"content":{"rendered":"<p><\/p> <div> <p>Rand Fishkin just published the most important piece of primary research the AI visibility industry has seen so far. <\/p> <p>His conclusion \u2013 that AI tools produce wildly inconsistent brand recommendation lists, making \u201cranking position\u201d a meaningless metric \u2013 is correct, well-evidenced, and long overdue. <\/p> <p>But Fishkin stopped one step short of the answer that matters.<\/p> <p>He didn\u2019t explore why some brands appear consistently while others don\u2019t, or what would move a brand from inconsistent to consistent visibility. That solution is already formalized, patent pending, and proven in production across 73 million brand profiles.<\/p> <p>When I shared this with Fishkin directly, he agreed. The AI models are pulling from a semi-fixed set of options, and the consistency comes from the data. He just didn\u2019t have the bandwidth to dig deeper, which is fair enough, but the digging has been done \u2013 I\u2019ve been doing it for a decade. <\/p> <p>Here\u2019s what Fishkin found, what it actually means, and what the data proves about what to do about it.<\/p> <h2 id=\"fishkins-data-killed-the-myth-of-ai-ranking-position\" class=\"wp-block-heading\">Fishkin\u2019s data killed the myth of AI ranking position<\/h2> <p>Fishkin and Patrick O\u2019Donnell ran 2,961 prompts across ChatGPT, Claude, and Google AI, asking for brand recommendations across 12 categories. The findings were surprising for most. 
<\/p> <p>Fewer than 1 in 100 runs produced the same list of brands, and fewer than 1 in 1,000 produced the same list in the same order. These are probability engines that generate unique answers every time. Treating them as deterministic ranking systems is \u2013 as Fishkin puts it \u2013 \u201cprovably nonsensical,\u201d and I\u2019ve been saying this since 2022. I\u2019m grateful Fishkin finally proved it with data.<\/p> <p>But Fishkin also found something he didn\u2019t fully unpack. Visibility percentage \u2013 how often a brand appears across many runs of the same prompt \u2013 is statistically meaningful. Some brands showed up almost every time, while others barely appeared at all. <\/p> <p>That variance is where the real story lies.<\/p> <p>Fishkin acknowledged this but framed it as a better metric to track. The real question isn\u2019t how to measure AI visibility, it\u2019s why some brands achieve consistent visibility and others don\u2019t, and what moves your brand from the inconsistent pile to the consistent pile. <\/p> <p>That\u2019s not a tracking problem. It\u2019s a confidence problem.<\/p> <div style=\"background: radial-gradient(circle at 30% 40%, rgba(184, 111, 255, 0.15), rgba(0, 169, 255, 0.15) 40%, #CDE8FD 70%); padding: 30px; width: 100%; max-width: 802px; color: #000000 !important; font-family: Arial, sans-serif; margin: 25px 0 30px 0; border-radius: 8px; box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1); position: relative; box-sizing: border-box;\"> <div style=\"width: 100%; max-width: 100%; margin-bottom: 20px; text-align: left; padding-right: 20px; box-sizing: border-box;\"> <p> Your customers search everywhere. Make sure your brand <span style=\"background: linear-gradient(90deg, #D56EFE 0%, #068EF8 51%); -webkit-background-clip: text; -webkit-text-fill-color: transparent; background-clip: text;\">shows up<\/span>. 
<\/p> <p id=\"semrush-one-subhead\" style=\"font-family: Roboto, sans-serif; font-size: 18px; font-weight: 300; line-height: 25px; margin: 12px 0 0 0; color: #000000 !important;\"> The SEO toolkit you know, plus the AI visibility data you need. <\/p> <\/p><\/div> <p> <span id=\"semrush-one-cta\" style=\"display: inline-block; background-color: #FF642D; color: white; height: 44px; border: none; border-radius: 5px; cursor: pointer; font-size: 16px; padding: 0 24px; font-weight: bold; white-space: nowrap; box-sizing: border-box; text-decoration: none; line-height: 44px;\">Start Free Trial<\/span> <\/p> <div style=\"font-size: 12px;\"> <p>Get started with<\/p> <p> <img loading=\"lazy\" width=\"400\" height=\"52\" decoding=\"async\" alt=\"Semrush One Logo\" style=\"height: 16px; width: auto; display: block;\" src=\"https:\/\/searchengineland.com\/wp-content\/seloads\/2025\/11\/semrush-one.webp\" title=\"Rand Fishkin proved AI recommendations are inconsistent \u2013 here\u2019s why and how to fix it\u63d2\u56fe\" \/><img loading=\"lazy\" width=\"400\" height=\"52\" decoding=\"async\" src=\"https:\/\/searchengineland.com\/wp-content\/seloads\/2025\/11\/semrush-one.webp\" alt=\"Semrush One Logo\" style=\"height: 16px; width: auto; display: block;\" title=\"Rand Fishkin proved AI recommendations are inconsistent \u2013 here\u2019s why and how to fix it\u63d2\u56fe1\" \/> <\/div> <\/p><\/div> <\/p> <h2 id=\"ai-systems-are-confidence-engines-not-recommendation-engines\" class=\"wp-block-heading\">AI systems are confidence engines, not recommendation engines<\/h2> <p>AI platforms \u2013 ChatGPT, Claude, Google AI, Perplexity, Gemini, all of them \u2013 generate every response by sampling from a probability distribution shaped by:<\/p> <ul class=\"wp-block-list\"> <li>What the model knows.<\/li> <li>How confidently it knows it.<\/li> <li>What it retrieved at the moment of the query. 
<\/li> <\/ul> <p>When the model is highly confident about an entity\u2019s relevance, that entity appears consistently. When the model is uncertain, the entity sits at a low probability weight in the distribution \u2013 included in some samples, excluded in others \u2013 not because the selection is random but because the AI doesn\u2019t have enough confidence to commit.<\/p> <p>That\u2019s the inconsistency Fishkin documented, and I recognized it immediately because I\u2019ve been tracking exactly this pattern since 2015.\u00a0<\/p> <ul class=\"wp-block-list\"> <li>City of Hope appearing in 97% of cancer care responses isn\u2019t luck. It\u2019s the result of deep, corroborated, multi-source presence in exactly the data these systems consume.\u00a0<\/li> <li>The headphone brands at 55%-77% are in a middle zone \u2013 known, but not unambiguously dominant.\u00a0<\/li> <li>The brands at 5%-10% have low confidence weight, and the AI includes them in some outputs and not others because it lacks the confidence to commit consistently.\u00a0<\/li> <\/ul> <p>Confidence isn\u2019t just about what a brand publishes or how it structures its content. It\u2019s about where that brand stands relative to every other entity competing for the same query \u2013 a dimension I\u2019ve recently formalized as Topical Position.<\/p> <p>I\u2019ve formalized this phenomenon as \u201ccascading confidence\u201d \u2013 the cumulative entity trust that builds or decays through every stage of the algorithmic pipeline, from the moment a bot discovers content to the moment an AI generates a recommendation. 
It\u2019s the throughline concept in a framework I published this week.<\/p> <p><strong><em>Dig deeper: Search, answer, and assistive engine optimization: A 3-part approach<\/em><\/strong><\/p> <h2 id=\"every-piece-of-content-passes-through-10-gates-before-influencing-an-ai-recommendation\" class=\"wp-block-heading\">Every piece of content passes through 10 gates before influencing an AI recommendation<\/h2> <p>The pipeline is called DSCRI-ARGDW \u2013 discovered, selected, crawled, rendered, indexed, annotated, recruited, grounded, displayed, and won. That sounds complicated, but I can summarize it in a single question that repeats at every stage: How confident is the system in this content?<\/p> <ul class=\"wp-block-list\"> <li>Is this URL worth crawling?\u00a0<\/li> <li>Can it be rendered correctly?\u00a0<\/li> <li>What entities and relationships does it contain?\u00a0<\/li> <li>How sure is the system about those annotations?\u00a0<\/li> <li>When the AI needs to answer a question, which annotated content gets pulled from the index?\u00a0<\/li> <\/ul> <p>Confidence at each stage feeds the next. A URL from a well-structured, fast-rendering, semantically clean site arrives at the annotation stage with high accumulated confidence before a single word of content is analyzed. A URL from a slow, JavaScript-heavy site with inconsistent information arrives with low confidence, even if the actual content is excellent.<\/p> <p>This is pipeline attenuation, and here\u2019s where the math gets unforgiving. The relationship is multiplicative, not additive:<\/p> <ul class=\"wp-block-list\"> <li>C_final = C_initial \u00d7 \u220f\u03c4\u1d62<\/li> <\/ul> <p>In plain English, the final confidence an AI system has in your brand equals the initial confidence from your entity home multiplied by the transfer coefficient at every stage of the pipeline. 
The entity home \u2013 the canonical web property that anchors your entity in every knowledge graph and every AI model \u2013 sets the starting confidence, and then each stage either preserves or erodes it.\u00a0<\/p> <p>Maintain 90% confidence at each of 10 stages, and end-to-end confidence is 0.9\u00b9\u2070 = 35%. At 80% per stage, it\u2019s 0.8\u00b9\u2070 = 11%. One weak stage \u2013 say 50% at rendering because of heavy JavaScript \u2013 drops the total from 35% to 19% even if every other stage is at 90%. One broken stage can undo the work of nine good ones.<\/p> <p>This multiplicative principle isn\u2019t new, and it doesn\u2019t belong to anyone. In 2019, I published an article, How Google Universal Search Ranking Works: Darwinism in Search, based on a direct explanation from Google\u2019s Gary Illyes. He described how Google calculates ranking \u201cbids\u201d by multiplying individual factor scores rather than adding them. A zero on any factor kills the entire bid, no matter how strong the other factors are.<\/p> <p>Google applies this multiplicative model to ranking factors within a single system, and nobody owns multiplication. But what the cascading confidence framework does is apply this principle across the full 10-stage pipeline, across all three knowledge graphs. <\/p> <p>The system provides measurable transfer coefficients at every transition and bottleneck detection that identifies exactly where confidence is leaking. 
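The attenuation arithmetic above can be sketched in a few lines. This is a minimal illustration of C_final = C_initial \u00d7 \u220f\u03c4\u1d62, using the article's 10 gate names; the coefficient values are the hypothetical ones from the worked examples, not production data.

```python
# Sketch of multiplicative pipeline attenuation: C_final = C_initial * prod(tau_i).
# Gate names follow the DSCRI-ARGDW pipeline; coefficients are illustrative only.
from math import prod

def final_confidence(c_initial, coefficients):
    """End-to-end confidence after every stage's transfer coefficient is applied."""
    return c_initial * prod(coefficients)

def weakest_stage(stages):
    """Bottleneck detection: the stage whose coefficient erodes the most confidence."""
    return min(stages, key=stages.get)

stages = {name: 0.9 for name in (
    "discovered", "selected", "crawled", "rendered", "indexed",
    "annotated", "recruited", "grounded", "displayed", "won")}

print(round(final_confidence(1.0, stages.values()), 2))  # 0.9**10 -> 0.35

stages["rendered"] = 0.5  # one weak stage, e.g. heavy JavaScript
print(round(final_confidence(1.0, stages.values()), 2))  # -> 0.19
print(weakest_stage(stages))  # -> rendered
```

The multiplication is why one broken gate dominates: setting any coefficient to zero zeroes the whole product, no matter how strong the other nine stages are.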
The math is universal, but the application to a multi-stage, multi-graph algorithmic pipeline is the invention.<\/p> <p>This complete system is the subject of a patent application I filed with the INPI titled \u201cSyst\u00e8me et proc\u00e9d\u00e9 d\u2019optimisation de la confiance en cascade \u00e0 travers un pipeline de traitement algorithmique multi-\u00e9tapes et multi-graphes.\u201d It\u2019s not a metaphor, it\u2019s an engineered system with an intellectual lineage going back seven years to a principle a Google engineer confirmed to me in person.<\/p> <p>Fishkin measured the output \u2013 the inconsistency of recommendation lists. But the output is a symptom, and the cause is confidence loss at specific stages of this pipeline, compounded across multiple knowledge representations. <\/p> <p>You can\u2019t fix inconsistency by measuring it more precisely. You can only fix it by building confidence at every stage.<\/p> <h2 id=\"the-corroboration-threshold-is-where-ai-shifts-from-hesitant-to-assertive\" class=\"wp-block-heading\">The corroboration threshold is where AI shifts from hesitant to assertive<\/h2> <p>There\u2019s a specific transition point where AI behavior changes. I call it the \u201ccorroboration threshold\u201d \u2013 the minimum number of independent, high-confidence sources corroborating the same conclusion about your brand before the AI commits to including it consistently.<\/p> <p>Below the threshold, the AI hedges. It says \u201cclaims to be\u201d instead of \u201cis,\u201d it includes a brand in some outputs but not others, and the reason isn\u2019t randomness but insufficient confidence. <\/p> <p>The brand sits in the low-confidence zone, where inconsistency is the predictable outcome. 
Above the threshold, the AI asserts \u2013 stating relevance as fact, including the brand consistently, operating with the kind of certainty that produces City of Hope\u2019s 97%.<\/p> <p>My data across 73 million brand profiles places this threshold at approximately 2-3 independent, high-confidence sources corroborating the same claim as the entity home. That number is deceptively small because \u201chigh-confidence\u201d is doing the heavy lifting \u2013 these are sources the algorithm already trusts deeply, including Wikipedia, industry databases, and authoritative media.\u00a0<\/p> <p>Without those high-authority anchors, the threshold rises considerably because more sources are needed and each carries less individual weight. The threshold isn\u2019t a one-time gate. Once crossed, the confidence compounds with every subsequent corroboration, which is why brands that cross it early pull further ahead over time, while brands that haven\u2019t crossed it yet face an ever-widening gap.<\/p> <p>Corroboration doesn\u2019t require identical wording, but it does require equivalent conviction. The entity home states, \u201cX is the leading authority on Y,\u201d two or three independent, authoritative third-party sources confirm it with their own framing, and the AI encodes it as fact.<\/p> <p>This pattern is visible in my data, and it explains exactly why Fishkin\u2019s experiment produced the results it did. In narrow categories like LA Volvo dealerships or SaaS cloud computing providers \u2013 where few brands exist and corroboration is dense \u2013 AI responses showed higher pairwise correlation.\u00a0<\/p> <p>In broad categories like science fiction novels \u2013 where thousands of options exist and corroboration is thin \u2013 responses were wildly diverse. 
The corroboration threshold aligns with Fishkin\u2019s findings.<\/p> <p><strong><em>Dig deeper: The three AI research modes redefining search \u2013 and why brand wins<\/em><\/strong><\/p> <h2 id=\"authoritas-proved-that-fabricated-entities-cant-fool-ai-confidence-systems\" class=\"wp-block-heading\">Authoritas proved that fabricated entities can\u2019t fool AI confidence systems<\/h2> <p>Authoritas published a study in December 2025 \u2013 \u201cCan you fake it till you make it in the age of AI?\u201d \u2013 that tested this directly, and the results confirm that Cascading Confidence isn\u2019t just theory. Where Fishkin\u2019s research shows the output problem \u2013 inconsistent lists \u2013 Authoritas shows the input side.<\/p> <p>Authoritas investigated a real-world case where a UK company created 11 entirely fictional \u201cexperts\u201d \u2013 made-up names, AI-generated headshots, faked credentials. They seeded these personas into more than 600 press articles across UK media, and the question was straightforward: Would AI models treat these fake entities as real experts?<\/p> <p>The answer was absolute: Across nine AI models and 55 topic-based questions \u2013 \u201cWho are the UK\u2019s leading experts in X?\u201d \u2013 zero fake experts appeared in any recommendation. Six hundred press articles, and not a single AI recommendation. That might seem to contradict a threshold of 2-3 sources, but it confirms it.\u00a0<\/p> <p>The threshold requires independent, high-confidence sources, and 600 press articles from a single seeding campaign are neither independent \u2013 they trace to the same origin \u2013 nor high-confidence \u2013 press mentions sit in the document graph only. 
<\/p> <p>The AI models looked past the surface-level coverage and found no deep entity signals \u2013 no entity home, no knowledge graph presence, no conference history, no professional registration, no corroboration from the kind of authoritative sources that actually move the needle.<\/p> <p>The fake personas had volume, they had mentions, but what they lacked was cascading confidence \u2013 the accumulated trust that builds through every stage of the pipeline. Volume without confidence means inconsistent appearance at best, while confidence without volume still produces recommendations. <\/p> <p>AI evaluates confidence \u2014 it doesn\u2019t count mentions. Confidence requires multi-source, multi-graph corroboration that fabricated entities fundamentally can\u2019t build.<\/p> <p><!-- START INLINE FORM --><\/p> <div class=\"nl-inline-form border py-2 px-1 my-2\"> <div class=\"row align-items-center nl-inline-container\"> <div class=\"col-12 col-lg-3 col-xl-4 pe-md-0 pb-2 pb-lg-0\"> <p class=\"inline-form-text text-center mb-0\">Get the newsletter search marketers rely on.<\/p> <\/p><\/div> <\/p><\/div> <\/div> <p><!-- END INLINE FORM --><\/p> <hr class=\"wp-block-separator has-text-color has-cyan-bluish-gray-color has-css-opacity has-cyan-bluish-gray-background-color has-background\"\/> <h2 id=\"ai-citability-concentration-increased-293-in-under-two-months\" class=\"wp-block-heading\">AI citability concentration increased 293% in under two months<\/h2> <p>Authoritas used the weighted citability score, or WCS, a metric that measures how much AI engines trust and cite entities, calculated across ChatGPT, Gemini, and Perplexity using cross-context questions. <\/p> <p>I have no influence over their data collection or their results. Fishkin\u2019s methodology and Authoritas\u2019 aren\u2019t identical. Fishkin pinged the same query repeatedly to measure variance, while Authoritas tracks varied queries on the same topic. 
That said, the directional finding is consistent.<\/p> <p>Their dataset includes 143 recognized digital marketing experts, with full snapshots from the original study by Laurence O\u2019Toole and Authoritas in December 2025 and their latest measurement on Feb. 2. The pattern across the entire dataset tells a story that goes far beyond individual scores.<\/p> <ul class=\"wp-block-list\"> <li>The top 10 experts captured 30.9% of all citability in December. By February, they captured 59.5% \u2013 a 92% increase in concentration in under two months. <\/li> <li>The HHI, or Herfindahl-Hirschman Index, the standard measure of market concentration, rose from 0.026 to 0.104 \u2013 a 293% increase in concentration. This happened while the total expert pool widened from 123 to 143 tracked entities. <\/li> <\/ul> <p>More experts are being cited, the field is getting bigger, and the top is pulling away faster. Dominance is compounding while the long tail grows.<\/p> <p>This is cascading confidence at population scale. The experts who actively manage their digital footprint \u2013 clean entity home, corroborated claims, consistent narrative across the algorithmic trinity \u2013 aren\u2019t just maintaining their position, they\u2019re accelerating away from everyone else. <\/p> <p>Each cycle of AI training and retrieval reinforces their advantage \u2013 confident entities generate confident AI outputs, which build user trust, which generate positive engagement signals, which further reinforce the AI\u2019s confidence. It\u2019s a flywheel, and once it\u2019s spinning, it becomes very, very hard for competitors to catch up.<\/p> <p>At the individual level, the data confirms the mechanism. I lead the dataset at a WCS of 23.50, up from 21.48 in December, a gain of +2.02. That\u2019s not because I\u2019m more famous than everyone else on the list. 
<\/p> <p>It\u2019s because we\u2019ve been systematically building my cascading confidence for years \u2013 clean entity home, corroborated claims across the algorithmic trinity, consistent narrative, structured data, deep knowledge graph presence. <\/p> <p>I\u2019m the primary test case because I\u2019m in control of all my variables \u2013 I have a huge head start. In a future article, I\u2019ll dig into the details of the scores and why the experts have the scores they do.<\/p> <p>The pattern across my client base mirrors the population data. Brands that systematically clean their digital footprint, anchor entity confidence through the entity home, and build corroboration across the algorithmic trinity don\u2019t just appear in AI recommendations. <\/p> <p>They appear consistently, their advantage compounds over time, and they exit the low-confidence zone to enter the self-reinforcing recommendation set.<\/p> <p><strong><em>Dig deeper: From SEO to algorithmic education: The roadmap for long-term brand authority<\/em><\/strong><\/p> <h2 id=\"ai-retrieves-from-three-knowledge-representations-simultaneously-not-one\" class=\"wp-block-heading\">AI retrieves from three knowledge representations simultaneously, not one<\/h2> <p>AI systems pull from what I call the Three Graphs model \u2013 the algorithmic trinity \u2013 and understanding this explains why some brands achieve near-universal visibility while others appear sporadically.<\/p> <ul class=\"wp-block-list\"> <li>The entity graph, or knowledge graph, contains explicit entities with binary verified edges and low fuzziness \u2013 either a brand is in, or it\u2019s not. <\/li> <li>The document graph, or search engine index, contains annotated URLs with scored and ranked edges and medium fuzziness. 
<\/li> <li>The concept graph, or LLM parametric knowledge, contains learned associations with high fuzziness, and this is where the inconsistency Fishkin documented comes from.<\/li> <\/ul> <p>When retrieval systems combine results from multiple sources \u2013 and they do, using mechanisms analogous to reciprocal rank fusion \u2013 entities present across all three graphs receive a disproportionate boost. <\/p> <p>The effect is multiplicative, not additive. A brand that has a strong presence in the knowledge graph and the document index and the concept space gets chosen far more reliably than a brand present in only one.<\/p> <p>This explains a pattern Fishkin noticed but didn\u2019t have the framework to interpret \u2013 why visibility percentages clustered differently across categories. The brands with near-universal visibility aren\u2019t just \u201cmore famous,\u201d they have dense, corroborated presence across all three knowledge representations. The brands in the inconsistent pool are typically present in only one or two.\u00a0<\/p> <p>The Authoritas fake expert study confirms this from the negative side. The fake personas existed only in the document graph, press articles, with zero entity graph presence and negligible concept graph encoding. One graph out of three, and the AI treated them accordingly.<\/p> <h2 id=\"what-i-tell-every-brand-after-reading-fishkins-data\" class=\"wp-block-heading\">What I tell every brand after reading Fishkin\u2019s data<\/h2> <p>Fishkin\u2019s recommendations were cautious \u2013 visibility percentage is a reasonable metric, ranking position isn\u2019t, and brands should demand transparent methodology from tracking vendors. All fair, but that\u2019s analyst advice. 
What follows is practitioner advice, based on doing this work in production.<\/p> <h3 class=\"wp-block-heading\" id=\"h-stop-optimizing-outputs-and-start-optimizing-inputs\">Stop optimizing outputs and start optimizing inputs<\/h3> <p>The entire AI tracking industry is fixated on measuring what AI says about you, which is like checking your blood pressure without treating the underlying condition. Measure if it helps, but the work is in building confidence at every stage of the pipeline, and that\u2019s where I focus my clients\u2019 attention from day one.<\/p> <h3 class=\"wp-block-heading\" id=\"h-start-at-the-entity-home\">Start at the entity home<\/h3> <p>My experience clearly demonstrates that this single intervention produces the fastest measurable results. Your entity home is the canonical web property that should anchor your entity in every knowledge graph and every AI model. If it\u2019s ambiguous, hedging, or contradictory with what third-party sources say about you, it is actively training AI to be uncertain.\u00a0<\/p> <p>I\u2019ve seen aligning the entity home with third-party corroboration produce measurable changes in bottom-of-funnel AI citation behavior within weeks, and it remains the highest ROI intervention I know.<\/p> <h3 class=\"wp-block-heading\" id=\"h-cross-the-corroboration-threshold-for-the-critical-claims\">Cross the corroboration threshold for the critical claims<\/h3> <p>I ask every client to identify the claims that matter most:<\/p> <ul class=\"wp-block-list\"> <li>Who you are.<\/li> <li>What you do.<\/li> <li>Why you\u2019re credible.\u00a0<\/li> <\/ul> <p>Then, I work with them to ensure each claim is corroborated by at least 2-3 independent, high-authority sources. 
Not just mentioned, but confirmed with conviction.\u00a0<\/p> <p>This is what flips AI from \u201csometimes includes\u201d to \u201creliably includes,\u201d and I\u2019ve seen it happen often enough to know the threshold is real.<\/p> <p><strong><em>Dig deeper: SEO in the age of AI: Becoming the trusted answer<\/em><\/strong><\/p> <h3 id=\"build-across-all-three-graphs-simultaneously\" class=\"wp-block-heading\">Build across all three graphs simultaneously<\/h3> <p>Knowledge graph presence (structured data, entity recognition), document graph presence (indexed, well-annotated content on authoritative sites), and concept graph presence (consistent narrative across the corpus AI trains on) all need attention.\u00a0<\/p> <p>The Authoritas study showed exactly what happens when a brand exists in only one \u2013 the AI treats it accordingly.<\/p> <h3 class=\"wp-block-heading\" id=\"h-work-the-pipeline-from-gate-1-not-gate-9\">Work the pipeline from Gate 1, not Gate 9<\/h3> <p>Most SEO and GEO advice operates at the display stage, optimizing what AI shows. But if your content is losing confidence at discovery, selection, rendering, or annotation, it will never reach display consistently enough to matter.\u00a0<\/p> <p>I\u2019ve watched brands spend months on display-stage optimization that produced nothing because the real bottleneck was three stages earlier, and I always start my diagnostic at the beginning of the pipeline, not the end.<\/p> <h3 class=\"wp-block-heading\" id=\"h-maintain-it-because-the-gap-is-widening\">Maintain it because the gap is widening<\/h3> <p>The WCS data across 143 tracked experts shows that AI citability concentration increased 293% in under two months. The experts who maintain their digital footprint are pulling away from everyone else at an accelerating rate.\u00a0<\/p> <p>Starting now still means starting early, but waiting means competing against entities whose advantage compounds every cycle. This isn\u2019t a one-time project. 
It\u2019s an ongoing discipline, and the returns compound with every iteration.<\/p> <h2 id=\"fishkin-proved-the-problem-exists-the-solution-has-been-in-production-for-a-decade\" class=\"wp-block-heading\">Fishkin proved the problem exists. The solution has been in production for a decade.<\/h2> <p>Fishkin\u2019s research is a gift to the industry. He killed the myth of AI ranking position with data, he validated that visibility percentage, while imperfect, correlates with something real, and he raised the right questions about methodology that the AI tracking vendors should have been answering all along.<\/p> <p>But tracking AI visibility without understanding why visibility varies is like tracking a stock price without understanding the business. The price is a signal, and the business is the thing.<\/p> <p>AI recommendations are inconsistent when AI systems lack confidence in a brand. 
They become consistent when that confidence is built deliberately, through:<\/p> <ul class=\"wp-block-list\"> <li>The entity home.<\/li> <li>Corroborated claims that cross the corroboration threshold.<\/li> <li>Multi-graph presence.<\/li> <li>Every stage of the pipeline that processes your content before AI ever generates a response. <\/li> <\/ul> <p>This isn\u2019t speculation, and the evidence comes from every direction.<\/p> <p>The process behind this approach has been under development since 2015 and is formalized in a peer-review-track academic paper. Several related patent applications have been filed in France, covering entity data structuring, prompt assembly, multi-platform coherence measurement, algorithmic barrier construction, and cascading confidence optimization.<\/p> <p>The dataset supporting the work spans 25 billion data points across 73 million brand profiles. In tracked populations, shifts in AI citability have been observed \u2014 including cases where the top 10 experts increased their share from 31% to 60% in under two months while the overall field expanded. Independent research from Authoritas reports findings that align with this mechanism.<\/p> <p>Fishkin proved the problem exists. My focus over the past decade has been on implementing and refining practical responses to it.<\/p> <p><em>This is the first article in a series. The second piece, \u201cWhat the AI expert rankings actually tell us: 8 archetypes of AI visibility,\u201d examines how the pipeline\u2019s effects manifest across 57 tracked experts. The third, \u201cThe ten gates between your content and an AI recommendation,\u201d opens the DSCRI-ARGDW pipeline itself.<\/em><\/p> <\/div> <p> <em>Contributing authors are invited to create content for Search Engine Land and are chosen for their expertise and contribution to the search community. Our contributors work under the oversight of the editorial staff and contributions are checked for quality and relevance to our readers. 
Search Engine Land is owned by Semrush. Contributor was not asked to make any direct or indirect mentions of Semrush. The opinions they express are their own.<\/em> <\/p> ","protected":false},"excerpt":{"rendered":"<p>Rand Fishkin just published the most important piece of primary research the AI visibility industry has seen so far. His conclusion \u2013 that AI tools produce wildly inconsistent brand recommendation lists, making \u201cranking position\u201d a meaningless metric \u2013 is correct, well-evidenced, and long overdue. But Fishkin stopped one step short of the answer that matters. [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3707,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[18],"tags":[12593,210,2280,11130,155,12594,12592,7323],"class_list":["post-3706","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-careers","tag-fishkin","tag-fix","tag-heres","tag-inconsistent","tag-opinion","tag-proved","tag-rand","tag-recommendations"],"acf":[],"_links":{"self":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/3706","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3706"}],"version-history":[{"count":0,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/3706\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/media\/3
707"}],"wp:attachment":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3706"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3706"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3706"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}