{"id":6397,"date":"2026-04-13T23:34:33","date_gmt":"2026-04-13T15:34:33","guid":{"rendered":"http:\/\/longzhuplatform.com\/?p=6397"},"modified":"2026-04-13T23:34:33","modified_gmt":"2026-04-13T15:34:33","slug":"how-ai-chooses-which-brands-to-recommend-from-relational-knowledge-to-topical-presence-via-sejournal-dixon_jones","status":"publish","type":"post","link":"http:\/\/longzhuplatform.com\/?p=6397","title":{"rendered":"How AI Chooses Which Brands To Recommend: From Relational Knowledge To Topical Presence via @sejournal, @Dixon_Jones"},"content":{"rendered":"<p><\/p> <div id=\"narrow-cont\"> <p>Ask ChatGPT or Claude to recommend a product in your market. If your brand does not appear, you have a problem that no amount of keyword optimization will fix.<\/p> <p>Most SEO professionals, when faced with this, immediately think about content. More pages, more keywords, better on-page signals. But the reason your brand is absent from an AI recommendation may have nothing to do with pages or keywords. It has to do with something called relational knowledge, and a 2019 research paper that most marketers have never heard of.<\/p> <h2>The Paper Most Marketers Missed<\/h2> <p>In September 2019, Fabio Petroni and colleagues at Facebook AI Research and University College London published \u201cLanguage Models as Knowledge Bases?\u201d at EMNLP, one of the top conferences in natural language processing.<\/p> <p>Their question was straightforward: Does a pretrained language model like BERT actually store factual knowledge in its weights? Not linguistic patterns or grammar rules, but facts about the world. Things like \u201cDante was born in Florence\u201d or \u201ciPod Touch is produced by Apple.\u201d<\/p> <p>To test this, they built a probe called LAMA (LAnguage Model Analysis). They took known facts, thousands of them drawn from Wikidata, ConceptNet, and SQuAD, and converted each one into a fill-in-the-blank statement. 
\u201cDante was born in ___.\u201d Then they asked BERT to predict the missing word.<\/p> <p>BERT, without any fine-tuning, recalled factual knowledge at a level competitive with a purpose-built knowledge base. That knowledge base had been constructed using a supervised relation extraction system with an oracle-based entity linker, meaning it had direct access to the sentences containing the answers. A language model that had simply read a lot of text performed nearly as well.<\/p> <p>The model was not searching for answers. It had absorbed associations between entities and concepts during training, and those associations were retrievable. BERT had built an internal map of how things in the world relate to each other.<\/p> <p>After this, the research community started taking seriously the idea that language models work as knowledge stores, not merely as pattern-matching engines.<\/p> <h2>What \u201cRelational Knowledge\u201d Means<\/h2> <p>Petroni tested what he and others called relational knowledge: facts expressed as a triple of subject, relation, and object. For example: (Dante, [born-in], Florence). (Kenya, [diplomatic-relations-with], Uganda). (iPod Touch, [produced-by], Apple).<\/p> <p>What makes this interesting for brand visibility (and AIO) is that Petroni\u2019s team discovered that the model\u2019s ability to recall a fact depends heavily on the structural type of the relationship. They identified three types, and the accuracy differences between them were large.<\/p> <h3>1-To-1 Relations: One Subject, One Object<\/h3> <p>These are unambiguous facts. \u201cThe capital of Japan is ___.\u201d There is one answer: Tokyo. Every time the model encountered Japan and capital in the training data, the same object appeared. 
The association built up cleanly over repeated exposure.<\/p> <p>BERT got these right 74.5% of the time, which is high for a model that was never explicitly trained to answer factual questions.<\/p> <h3>N-To-1 Relations: Many Subjects, One Object<\/h3> <p>Here, many different subjects share the same object. \u201cThe official language of Mauritius is ___.\u201d The answer is English, but English is also the answer for dozens of other countries. The model has seen the pattern (country \u2192 official language \u2192 English) many times, so it knows the shape of the answer well. But it sometimes defaults to the most statistically common object rather than the correct one for that specific subject.<\/p> <p>Accuracy dropped to around 34%. The model knows the category but gets confused within it.<\/p> <h3>N-To-M Relations: Many Subjects, Many Objects<\/h3> <p>This is where things get messy. \u201cPatrick Oboya plays in position ___.\u201d A single footballer might play midfielder, forward, or winger depending on context. And many different footballers share each of those positions. The mapping is loose in both directions.<\/p> <p>BERT\u2019s accuracy here was only about 24%. The model typically predicts something of the correct type (it will say a position, not a city), but it cannot commit to a specific answer because the training data contains too many competing signals.<\/p> <blockquote> <p><em>I find this super useful because it maps directly onto what happens when an AI tries to recommend a brand. Brands (without monopolies) operate in a \u201cmany-to-many\u201d relationship. So \u201cRecommend a [Brand] with a [feature]\u201d is one of the hardest things for AI to \u201cpredict\u201d with consistency. I will come back to that\u2026<\/em><\/p> <\/blockquote> <h2>What Has Happened Since 2019<\/h2> <p>Petroni\u2019s paper established that language models store relational knowledge. 
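The three relation types in the Petroni taxonomy can be made concrete with a small sketch: given a set of (subject, relation, object) triples, a relation falls into one of the classes simply by counting distinct subjects and objects per pairing. The triples and helper below are my own toy illustration, not code from the paper:

```python
from collections import defaultdict

# Toy (subject, relation, object) triples in the style of the article's examples.
# "Owen" is a made-up second footballer, added to show the N-to-M case.
triples = [
    ("Japan", "capital", "Tokyo"),                  # 1-to-1: one subject, one object
    ("France", "capital", "Paris"),
    ("Mauritius", "official-language", "English"),  # N-to-1: many subjects, one object
    ("Kenya", "official-language", "English"),
    ("Oboya", "plays-position", "midfielder"),      # N-to-M: loose in both directions
    ("Oboya", "plays-position", "winger"),
    ("Owen", "plays-position", "midfielder"),
]

def relation_type(relation, facts):
    """Classify a relation as 1-to-1, N-to-1, or N-to-M by its fan-out.

    (1-to-N is not distinguished here, since the paper's taxonomy
    and the article discuss only these three classes.)
    """
    subj_to_obj = defaultdict(set)
    obj_to_subj = defaultdict(set)
    for s, r, o in facts:
        if r == relation:
            subj_to_obj[s].add(o)
            obj_to_subj[o].add(s)
    many_objects = any(len(objs) > 1 for objs in subj_to_obj.values())
    many_subjects = any(len(subjs) > 1 for subjs in obj_to_subj.values())
    if many_objects and many_subjects:
        return "N-to-M"
    if many_subjects:
        return "N-to-1"
    return "1-to-1"

# capital -> 1-to-1, official-language -> N-to-1, plays-position -> N-to-M
for rel in ("capital", "official-language", "plays-position"):
    print(rel, relation_type(rel, triples))
```

The point of the exercise: the class is a property of the data, not the model, which is why a model trained on that data inherits the ambiguity.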
The obvious next question was: where, exactly?<\/p> <p>In 2022, Damai Dai and colleagues at Peking University and Microsoft Research published \u201cKnowledge Neurons in Pretrained Transformers\u201d at ACL. They introduced a method to locate specific neurons in BERT\u2019s feed-forward layers that are responsible for expressing specific facts. When they activated these \u201cknowledge neurons,\u201d the model\u2019s probability of producing the correct fact increased by an average of 31%. When they suppressed them, it dropped by 29%.<\/p> <blockquote> <p><em>OMG! This is not a metaphor. Factual associations are encoded in identifiable neurons within the model. You can find them, and you can change them.<\/em><\/p> <\/blockquote> <p>Later that year, Kevin Meng and colleagues at MIT published \u201cLocating and Editing Factual Associations in GPT\u201d at NeurIPS. This took the same ideas and applied them to GPT-style models, the architecture behind ChatGPT, Claude, and the AI assistants that buyers actually use when they ask for recommendations. Meng\u2019s team found they could pinpoint the specific components inside GPT that activate when the model recalls a fact about a subject.<\/p> <p>More importantly, they could change those facts. They could edit what the model \u201cbelieves\u201d about an entity without retraining the whole system.<\/p> <p>That finding matters for SEOs. If the associations inside these models were fixed and permanent, there would be nothing to optimize for. But they are not fixed. They are shaped by what the model absorbed during training, and they shift when the model is retrained on new data. The web content, the technical documentation, the community discussions, the analyst reports that exist when the next training run happens will determine which brands the model associates with which topics.<\/p> <p>So, the progress from 2019 to 2022 looks like this. Petroni showed that models store relational knowledge. Dai showed where it is stored.
Meng showed it can be changed. That last point is the one that should matter most to anyone trying to influence how AI recommends brands.<\/p> <h2>What This Means For Brands In AI Search<\/h2> <p>Let me translate Petroni\u2019s three relation types into brand positioning scenarios.<\/p> <h3>The 1-To-1 Brand: Tight Association<\/h3> <p>Think of Stripe and online payments. The association is specific and consistently reinforced across the web. Developer documentation, fintech discussions, startup advice columns, integration guides: They all connect Stripe to the same concept. When someone asks an AI, \u201cWhat is the best payment processing platform for developers?\u201d the model retrieves Stripe with high confidence, because the relational link is unambiguous.<\/p> <p>This is Petroni\u2019s 1-to-1 dynamic. Strong signal, no competing noise.<\/p> <h3>The N-To-1 Brand: Lost In The Category<\/h3> <p>Now consider being one of 15 cybersecurity vendors associated with \u201cendpoint protection.\u201d The model knows the category well. It has seen thousands of discussions about endpoint protection. But when asked to recommend a specific vendor, it defaults to whichever brand has the strongest association signal. Usually, that is the one most discussed in authoritative contexts: analyst reports, technical forums, standards documentation.<\/p> <p>If your brand is present in the conversation but not differentiated, you are in an N-to-1 situation. The model might mention you occasionally, but it will tend to retrieve the brand with the strongest association instead.<\/p> <h3>The N-To-M Brand: Everywhere And Nowhere<\/h3> <p>This is the hardest position. A large enterprise software company operating across cloud infrastructure, consulting, databases, and hardware has associations with many topics, but each of those topics is also associated with many competitors. 
The associations are loose in both directions.<\/p> <p>The result is what Petroni observed with N-to-M relations: The model produces something of the correct type but cannot commit to a specific answer. The brand appears occasionally in AI recommendations but never reliably for any specific query.<\/p> <p>I see this pattern frequently when working with enterprise brands. They have invested heavily in content across many topics, but have not built the kind of concentrated, reinforced associations that the model needs to retrieve them with confidence for any single one.<\/p> <h2>Measuring The Gap<\/h2> <p>If you accept the premise, and the research supports it, that AI recommendations are driven by relational associations stored in the model\u2019s weights, then the practical question is: Can you measure where your brand sits in that landscape?<\/p> <p>AI Share of Voice is the metric most teams start with. It tells you how often your brand appears in AI-generated responses. That is useful, but it is a score without a diagnosis. Knowing your Share of Voice is 8% does not tell you why it is 8%, or which specific topics are keeping you out of the recommendations where you should appear.<\/p> <p>Two brands can have identical Share of Voice scores for completely different structural reasons. One might be broadly associated with many topics but weakly on each. Another might be deeply associated with two topics but invisible everywhere else. These are different problems requiring different strategies.<\/p> <p>This is the gap that a metric called AI Topical Presence, developed by Waikay, is designed to address. Rather than measuring whether you appear, it measures what the AI associates you with, and what it does not. 
[Disclosure: I am the CEO of Waikay]<\/p> <figure id=\"attachment_570483\" class=\"wp-caption aligncenter\" style=\"width: 3112px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/03\/ai-competitive-quadrant-12-528.png\" alt=\"Topical Presence is a way to measure Relational Knowledge\" width=\"3112\" height=\"1556\" class=\"wp-image-570483 size-full\" srcset=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/03\/ai-competitive-quadrant-12-528-384x192.png 384w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/03\/ai-competitive-quadrant-12-528-425x213.png 425w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/03\/ai-competitive-quadrant-12-528-480x240.png 480w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/03\/ai-competitive-quadrant-12-528-680x340.png 680w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/03\/ai-competitive-quadrant-12-528-768x384.png 768w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/03\/ai-competitive-quadrant-12-528-850x425.png 850w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/03\/ai-competitive-quadrant-12-528-1024x512.png 1024w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/03\/ai-competitive-quadrant-12-528-1280x720.png 1280w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/03\/ai-competitive-quadrant-12-528-1300x680.png 1300w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/03\/ai-competitive-quadrant-12-528-1536x768.png 1536w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/03\/ai-competitive-quadrant-12-528-1600x800.png 1600w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/03\/ai-competitive-quadrant-12-528-1920x960.png 1920w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/03\/ai-competitive-quadrant-12-528-2048x1024.png 2048w, 
https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/03\/ai-competitive-quadrant-12-528.png 3112w\" sizes=\"auto, (max-width: 3112px) 100vw, 3112px\" loading=\"lazy\" title=\"How AI Chooses Which Brands To Recommend: From Relational Knowledge To Topical Presence via @sejournal, @Dixon_Jones\" \/><figcaption class=\"wp-caption-text\">Topical Presence is as important as Share of Voice (Image from author, March 2026)<\/figcaption><\/figure> <p>The metric captures three dimensions. Depth measures how strongly the AI connects your brand to relevant topics, weighted by importance. Breadth measures how many of the core commercial topics in your market the AI associates with your brand. Concentration measures how evenly those associations are distributed, using a Herfindahl-Hirschman Index borrowed from competition economics.<\/p> <p>A brand with high depth but low breadth is known well for a few things but invisible for many others. A brand with wide coverage but high concentration is fragile: One model update could change its visibility significantly. The component breakdown tells you which problem you have and which lever to pull.<\/p> <p>In the chart above, we start to see how different brands are really competing with each other in a way we have not been able to see before. For example, Inlinks is competing much more closely with a product called Neuronwriter than previously understood. Neuronwriter has less share of voice (I probably helped them by writing this article\u2026 oops!), but they have a better topical presence around the prompt, \u201cWhat are the best semantic SEO tools?\u201d So, all things being equal, a bit of marketing is all they need to overtake Inlinks. This, of course, assumes that Inlinks stands still. It won\u2019t. By contrast, the threat of Ahrefs is ever-present, but by being a full-service offering, they have to spread their \u201cshare of voice\u201d across all of their product offerings.
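Of the three dimensions described above, concentration is the easiest to sketch: a Herfindahl-Hirschman Index is simply the sum of squared shares, so applied to a brand\u2019s topic-association weights it rises as visibility piles into fewer topics. This is a generic HHI illustration of my own, with made-up weights, not the actual Waikay implementation (whose details are not public):

```python
def hhi(weights):
    """Herfindahl-Hirschman Index over topic-association weights.

    Shares are normalized to sum to 1, so HHI ranges from 1/n
    (perfectly even across n topics) up to 1.0 (all in one topic).
    """
    total = sum(weights)
    shares = [w / total for w in weights]
    return sum(s * s for s in shares)

# Hypothetical topic-association weights for two brands.
broad_brand = [0.25, 0.25, 0.25, 0.25]    # even coverage -> low HHI
fragile_brand = [0.85, 0.05, 0.05, 0.05]  # one dominant topic -> high HHI

print(round(hhi(broad_brand), 3))    # 0.25
print(round(hhi(fragile_brand), 3))  # 0.73
```

The second brand is the \u201cfragile\u201d case from the text: most of its visibility hangs on a single topic, so one model update to that topic could move most of its score.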
So while their topical presence is high, the brand is not the natural choice for an LLM for this prompt.<\/p> <p>This connects back to Petroni\u2019s framework. If your brand is in a 1-to-1 position for some topics but absent from others, topical presence shows you where the gaps are. If you are in an N-to-1 or N-to-M situation, it helps you identify which associations need strengthening and which topics competitors have already built dominant positions on.<\/p> <h2>From Ranking Pages To Building Associations<\/h2> <p>For 25 years, SEO has been about ranking pages. PageRank itself was a page-level algorithm; the clue was always in the name (IYKYK \u2026 No need to correct me\u2026). Even as Google moved towards entities and knowledge graphs, the practical work of SEO remained rooted in keywords, links, and on-page optimization.<\/p> <p>AI visibility requires something different. The models that generate brand recommendations are retrieving associations built during training, formed from patterns of co-occurrence across many contexts. A brand that publishes 500 blog posts about \u201czero trust\u201d will not build the same association strength as a brand that appears in NIST documentation, peer discussions, analyst reports, and technical integrations.<\/p> <p>This is fantastic news for brands that do good work in their markets. Content volume alone does not create strong relational associations. The model\u2019s training process works as a quality filter: It learns from patterns across the entire corpus, not from any single page.
A brand with real expertise, discussed across many contexts by many voices, will build stronger associations than a brand that simply publishes more.<\/p> <p><em>The question to ask is not \u201cDo we have a page about this topic?\u201d It is: \u201cIf someone read everything the AI has absorbed about this topic, would our brand come across as a credible participant in the conversation?\u201d<\/em><\/p> <p>That is a harder question. But the research that began with Petroni\u2019s fill-in-the-blank tests in 2019 has given us enough understanding of the mechanism to measure it. And what you can measure, you can improve.<\/p> <hr\/> <p><em>Featured Image: SvetaZi\/Shutterstock<\/em><\/p> <\/div> ","protected":false},"excerpt":{"rendered":"<p>Ask ChatGPT or Claude to recommend a product in your market. If your brand does not appear, you have a problem that no amount of keyword optimization will fix. Most SEO professionals, when faced with this, immediately think about content. More pages, more keywords, better on-page signals.
But the reason your brand is absent from [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":6398,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[16],"tags":[103,23962,23964,2260,18064,10206,23963,80,9103],"class_list":["post-6397","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-accessibility","tag-brands","tag-chooses","tag-dixon_jones","tag-knowledge","tag-presence","tag-recommend","tag-relational","tag-sejournal","tag-topical"],"acf":[],"_links":{"self":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/6397","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6397"}],"version-history":[{"count":0,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/6397\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/media\/6398"}],"wp:attachment":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6397"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6397"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6397"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}