{"id":3678,"date":"2026-02-17T22:57:22","date_gmt":"2026-02-17T14:57:22","guid":{"rendered":"http:\/\/longzhuplatform.com\/?p=3678"},"modified":"2026-02-17T22:57:22","modified_gmt":"2026-02-17T14:57:22","slug":"the-science-of-how-ai-pays-attention-via-sejournal-kevin_indig","status":"publish","type":"post","link":"http:\/\/longzhuplatform.com\/?p=3678","title":{"rendered":"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig"},"content":{"rendered":"<p><\/p> <div id=\"narrow-cont\"> <p><em>Boost your skills with Growth Memo\u2019s weekly expert insights. Subscribe for free!<\/em><\/p> <p>This week, I share my findings from analyzing 1.2 million ChatGPT responses to answer the question of how to improve your chances of getting cited.<\/p> <figure id=\"attachment_567608\" class=\"wp-caption aligncenter\" style=\"width: 1536px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/ai-attention-334.jpg\"  width=\"1536\" height=\"1024\" class=\"size-full wp-image-567608\" srcset=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/ai-attention-334-384x256.jpg 384w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/ai-attention-334-425x283.jpg 425w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/ai-attention-334-480x320.jpg 480w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/ai-attention-334-680x453.jpg 680w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/ai-attention-334-768x512.jpg 768w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/ai-attention-334-850x567.jpg 850w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/ai-attention-334-1024x683.jpg 1024w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/ai-attention-334-1280x720.jpg 1280w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/ai-attention-334-1300x680.jpg 
1300w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/ai-attention-334.jpg 1536w\" sizes=\"auto, (max-width: 1536px) 100vw, 1536px\" loading=\"lazy\" title=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\" alt=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\" \/><figcaption class=\"wp-caption-text\">Image Credit: Kevin Indig<\/figcaption><\/figure> <p>For 20 years, SEOs have written \u201cultimate guides\u201d designed to keep humans on the page. We write long intros. We scatter insights throughout the draft, saving the best for the conclusion. We build suspense toward the final call to action.<\/p> <p>The data shows that this style of writing is not ideal for AI visibility.<\/p> <p>After analyzing 1.2 million verified ChatGPT citations, I found a pattern so consistent that its p-value rounds to 0.0 (<em>p &lt; 0.0001<\/em>): the \u201cski ramp.\u201d ChatGPT pays disproportionate attention to the top 30% of your content. Furthermore, I found five clear characteristics of content that gets cited. To win in the AI era, you need to start writing like a journalist.<\/p> <h2><strong>1. 
Which Sections Of A Text Are Most Likely To Be Cited By ChatGPT?<\/strong><\/h2> <figure id=\"attachment_567601\" class=\"wp-caption aligncenter\" style=\"width: 1600px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citations-900.png\"  width=\"1600\" height=\"1200\" class=\"size-full wp-image-567601\" srcset=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citations-900-384x288.png 384w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citations-900-425x319.png 425w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citations-900-480x360.png 480w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citations-900-680x510.png 680w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citations-900-768x576.png 768w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citations-900-850x638.png 850w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citations-900-1024x768.png 1024w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citations-900-1120x840.png 1120w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citations-900-1280x720.png 1280w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citations-900-1300x680.png 1300w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citations-900-1536x1152.png 1536w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citations-900.png 1600w\" sizes=\"auto, (max-width: 1600px) 100vw, 1600px\" loading=\"lazy\" title=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\u63d2\u56fe1\" alt=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\u63d2\u56fe1\" \/><figcaption class=\"wp-caption-text\">Image Credit: Kevin Indig<\/figcaption><\/figure> <p>There isn\u2019t much known about which parts of a text LLMs 
cite. We analyzed 18,012 citations and found a \u201cski ramp\u201d distribution.<\/p> <ol> <li><strong>44.2% of all citations come from the first 30% of text (the intro).<\/strong> The AI reads like a journalist. It grabs the \u201cWho, What, Where\u201d from the top. If your key insight is in the intro, the chances it gets cited are high.<\/li> <li><strong>31.1% of citations come from the 30-70% range of a text (the middle).<\/strong> If you bury your key product features in paragraph 12 of a 20-paragraph post, the AI is 2.5x less likely to cite it.<\/li> <li><strong>24.7% of citations come from the last third of an article (the conclusion).<\/strong> This shows the AI does wake up at the end (much like humans). It skips the actual\u00a0<em>footer<\/em>\u00a0(see the 90-100% drop-off), but it loves the \u201cSummary\u201d or \u201cConclusion\u201d section right before the footer.<\/li> <\/ol> <p>Possible explanations for the ski ramp pattern are training and efficiency:<\/p> <ul> <li>LLMs are trained on journalism and academic papers, which follow the \u201cBLUF\u201d (Bottom Line Up Front) structure. 
The model learns that the most \u201cweighted\u201d information is always at the top.<\/li> <li>While modern models can read up to 1 million tokens for a single interaction (~700,000-800,000 words), they aim to establish the frame as fast as possible, then interpret everything else through that frame.<\/li> <\/ul> <figure id=\"attachment_567598\" class=\"wp-caption aligncenter\" style=\"width: 1600px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/cited-text-823.jpg\"  width=\"1600\" height=\"1200\" class=\"size-full wp-image-567598\" srcset=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/cited-text-823-384x288.jpg 384w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/cited-text-823-425x319.jpg 425w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/cited-text-823-480x360.jpg 480w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/cited-text-823-680x510.jpg 680w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/cited-text-823-768x576.jpg 768w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/cited-text-823-850x638.jpg 850w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/cited-text-823-1024x768.jpg 1024w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/cited-text-823-1120x840.jpg 1120w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/cited-text-823-1280x720.jpg 1280w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/cited-text-823-1300x680.jpg 1300w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/cited-text-823-1536x1152.jpg 1536w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/cited-text-823.jpg 1600w\" sizes=\"auto, (max-width: 1600px) 100vw, 1600px\" loading=\"lazy\" title=\"The Science Of How AI Pays Attention via @sejournal, 
@Kevin_Indig\" alt=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\" \/><figcaption class=\"wp-caption-text\">Image Credit: Kevin Indig<\/figcaption><\/figure> <p>18,000 out of 1.2 million citations give us all the insight we need. The p-value of this analysis rounds to 0.0 (<em>p &lt; 0.0001<\/em>), meaning the pattern is statistically unambiguous. I split the data into batches (randomized validation splits) to demonstrate the stability of the results.<\/p> <ul> <li>Batch 1 was slightly flatter, but batches 2, 3, and 4 are almost identical.<\/li> <li>Conclusion: Because batches 2, 3, and 4 locked onto the exact same pattern, the result is stable across all 1.2 million citations.<\/li> <\/ul> <p>While these batches confirm the macro-level stability of where ChatGPT looks across a document, they raise a new question about its granular behavior: Does this top-heavy bias persist even within a single block of text, or does the AI\u2019s focus change when it reads more deeply? Having established that the pattern is statistically robust at scale, I wanted to \u201czoom in\u201d to the paragraph level.<\/p> <figure id=\"attachment_567600\" class=\"wp-caption aligncenter\" style=\"width: 1548px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/chatgpt-citations-924.png\"  width=\"1548\" height=\"1200\" class=\"size-full wp-image-567600\" srcset=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/chatgpt-citations-924-384x298.png 384w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/chatgpt-citations-924-425x329.png 425w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/chatgpt-citations-924-480x372.png 480w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/chatgpt-citations-924-680x527.png 680w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/chatgpt-citations-924-768x595.png 768w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/chatgpt-citations-924-850x659.png 850w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/chatgpt-citations-924-1024x794.png 1024w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/chatgpt-citations-924-1280x720.png 1280w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/chatgpt-citations-924-1300x680.png 1300w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/chatgpt-citations-924-1536x1191.png 1536w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/chatgpt-citations-924.png 1548w\" sizes=\"auto, (max-width: 1548px) 100vw, 1548px\" loading=\"lazy\" title=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\" alt=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\" \/><figcaption class=\"wp-caption-text\">Image Credit: Kevin Indig<\/figcaption><\/figure> <p>A deep analysis of 1,000 pieces of content with a high number of citations shows 53% of citations come from the middle of a paragraph. Only 24.5% come from the first and 22.5% from the last sentence of a paragraph. ChatGPT is not \u201clazy,\u201d reading only the first sentence of every paragraph; it reads deeply.<\/p> <p><strong>Takeaway:<\/strong> You don\u2019t need to force the answer into the first sentence of every paragraph. ChatGPT seeks the sentence with the highest \u201cinformation gain\u201d (the most complete use of relevant entities and additive, expansive information), regardless of whether that sentence is first, second, or fifth in the paragraph. Combined with the ski ramp pattern, we can conclude that the highest chances for citations come from the paragraphs in the first 20% of the page.<\/p> <h2><strong>2. 
What Makes ChatGPT More Likely To Cite Chunks?<\/strong><\/h2> <p>We know <em>where<\/em>\u00a0in content ChatGPT likes to cite from, but what are the characteristics that influence citation likelihood?<\/p> <p>The analysis shows five winning characteristics:<\/p> <ol> <li>Definitive language.<\/li> <li>Conversational question-answer structure.<\/li> <li>Entity richness.<\/li> <li>Balanced sentiment.<\/li> <li>Simple writing.<\/li> <\/ol> <h3>1. Definitive Vs. Vague Language<\/h3> <figure id=\"attachment_567603\" class=\"wp-caption aligncenter\" style=\"width: 1548px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/definitive-texts-387.png\"  width=\"1548\" height=\"1200\" class=\"size-full wp-image-567603\" srcset=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/definitive-texts-387-384x298.png 384w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/definitive-texts-387-425x329.png 425w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/definitive-texts-387-480x372.png 480w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/definitive-texts-387-680x527.png 680w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/definitive-texts-387-768x595.png 768w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/definitive-texts-387-850x659.png 850w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/definitive-texts-387-1024x794.png 1024w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/definitive-texts-387-1280x720.png 1280w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/definitive-texts-387-1300x680.png 1300w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/definitive-texts-387-1536x1191.png 1536w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/definitive-texts-387.png 1548w\" sizes=\"auto, 
(max-width: 1548px) 100vw, 1548px\" loading=\"lazy\" title=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\" alt=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\" \/><figcaption class=\"wp-caption-text\">Image Credit: Kevin Indig<\/figcaption><\/figure> <p>Citation winners are almost 2x more likely (36.2% vs. 20.2%) to contain definitive language (<em>\u201cis defined as,\u201d \u201crefers to\u201d<\/em>). The cited language doesn\u2019t have to be a verbatim definition, but the relationships between concepts have to be clear.<\/p> <p>Possible explanations for the impact of direct, declarative writing:<\/p> <ul> <li>In a vector database, the word \u201cis\u201d acts as a strong bridge connecting a subject to its definition. When a user asks \u201cWhat is X?\u201d the model searches for the strongest vector path, which is almost always a direct \u201cX is Y\u201d sentence structure.<\/li> <li>The model tries to answer the user immediately. It prefers text that allows it to resolve the query in a single sentence (zero-shot) rather than synthesizing an answer from five paragraphs.<\/li> <\/ul> <p><strong>Takeaway:<\/strong> Start your articles with a direct statement.<\/p> <ul> <li>Bad: \u201cIn this fast-paced world, automation is becoming key\u2026\u201d<\/li> <li>Good: \u201cDemo automation is the process of using software to\u2026\u201d<\/li> <\/ul> <h3>2. 
Conversational Writing<\/h3> <figure id=\"attachment_567605\" class=\"wp-caption aligncenter\" style=\"width: 1600px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/text-with-citations-743.png\"  width=\"1600\" height=\"1200\" class=\"size-full wp-image-567605\" srcset=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/text-with-citations-743-384x288.png 384w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/text-with-citations-743-425x319.png 425w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/text-with-citations-743-480x360.png 480w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/text-with-citations-743-680x510.png 680w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/text-with-citations-743-768x576.png 768w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/text-with-citations-743-850x638.png 850w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/text-with-citations-743-1024x768.png 1024w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/text-with-citations-743-1120x840.png 1120w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/text-with-citations-743-1280x720.png 1280w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/text-with-citations-743-1300x680.png 1300w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/text-with-citations-743-1536x1152.png 1536w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/text-with-citations-743.png 1600w\" sizes=\"auto, (max-width: 1600px) 100vw, 1600px\" loading=\"lazy\" title=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\u63d2\u56fe5\" alt=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\u63d2\u56fe5\" \/><figcaption class=\"wp-caption-text\">Image Credit: Kevin 
Indig<\/figcaption><\/figure> <p>Text that gets cited is 2x more likely (18% vs. 8.9%) to contain a question mark. When we talk about conversational writing, we mean the interplay between questions and answers.<\/p> <p>Start with the user\u2019s query as a question, then answer it immediately. For example:<\/p> <ul> <li><em>Winner Style:<\/em>\u00a0\u201cWhat is Programmatic SEO? It is\u2026\u201d<\/li> <li><em>Loser Style:<\/em>\u00a0\u201cIn this article, we will discuss the various nuances of\u2026\u201d<\/li> <\/ul> <p>78.4% of citations with questions come from headings. The AI treats your H2 tag as the user prompt and the paragraph immediately following it as the generated response.<\/p> <p>Example winner structure (the 78%):<\/p> <ul> <li><strong>H2:<\/strong> \u201cWhen did SEO start?\u201d (the literal query)<\/li> <li><strong>First paragraph:<\/strong> \u201cSEO started in\u2026\u201d (the direct answer)<\/li> <\/ul> <p>That structure wins because of what I call \u201centity echoing\u201d: The header asks about SEO, and the very first word of the answer is \u201cSEO.\u201d<\/p> <h3>3. 
Entity Richness<\/h3> <figure id=\"attachment_567602\" class=\"wp-caption aligncenter\" style=\"width: 1600px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citrd-text-21.png\"  width=\"1600\" height=\"1200\" class=\"size-full wp-image-567602\" srcset=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citrd-text-21-384x288.png 384w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citrd-text-21-425x319.png 425w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citrd-text-21-480x360.png 480w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citrd-text-21-680x510.png 680w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citrd-text-21-768x576.png 768w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citrd-text-21-850x638.png 850w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citrd-text-21-1024x768.png 1024w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citrd-text-21-1120x840.png 1120w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citrd-text-21-1280x720.png 1280w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citrd-text-21-1300x680.png 1300w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citrd-text-21-1536x1152.png 1536w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/citrd-text-21.png 1600w\" sizes=\"auto, (max-width: 1600px) 100vw, 1600px\" loading=\"lazy\" title=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\" alt=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\" \/><figcaption class=\"wp-caption-text\">Image Credit: Kevin Indig<\/figcaption><\/figure> <p>Normal English text has an \u201centity density\u201d (that is, the share of words that are proper nouns like brands, tools, and people) of ~5-8%. Heavily cited text has an entity density of 20.6%!<\/p> <ul> <li>The 5-8% figure is a linguistic benchmark derived from standard corpora like the Brown Corpus (1 million words of representative English text) and the Penn Treebank (<em>Wall Street Journal<\/em>\u00a0text).<\/li> <\/ul> <p>Example:<\/p> <ul> <li><em>Loser sentence:<\/em>\u00a0\u201cThere are many good tools for this task.\u201d (0% density)<\/li> <li><em>Winner sentence:<\/em>\u00a0\u201cTop tools include Salesforce, HubSpot, and Pipedrive.\u201d (30% density)<\/li> <\/ul> <p>LLMs are probabilistic. Generic advice (\u201cchoose a good tool\u201d) is risky and vague, but a specific entity (\u201cchoose Salesforce\u201d) is grounded and verifiable. The model prioritizes sentences that contain \u201canchors\u201d (entities) because they lower the perplexity (confusion) of the answer.<\/p> <p>A sentence with three entities carries more \u201cbits\u201d of information than a sentence with zero entities. So, don\u2019t be afraid of name-dropping (yes, even your competitors).<\/p> <h3>4. 
Balanced Sentiment<\/h3> <figure id=\"attachment_567604\" class=\"wp-caption aligncenter\" style=\"width: 1600px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/subjectivity-52.png\"  width=\"1600\" height=\"1200\" class=\"size-full wp-image-567604\" srcset=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/subjectivity-52-384x288.png 384w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/subjectivity-52-425x319.png 425w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/subjectivity-52-480x360.png 480w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/subjectivity-52-680x510.png 680w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/subjectivity-52-768x576.png 768w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/subjectivity-52-850x638.png 850w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/subjectivity-52-1024x768.png 1024w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/subjectivity-52-1120x840.png 1120w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/subjectivity-52-1280x720.png 1280w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/subjectivity-52-1300x680.png 1300w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/subjectivity-52-1536x1152.png 1536w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/subjectivity-52.png 1600w\" sizes=\"auto, (max-width: 1600px) 100vw, 1600px\" loading=\"lazy\" title=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\u63d2\u56fe7\" alt=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\u63d2\u56fe7\" \/><figcaption class=\"wp-caption-text\">Image Credit: Kevin Indig<\/figcaption><\/figure> <p>In my analysis, the cited text has a balanced subjectivity score of 0.47. 
The subjectivity score is a standard metric in natural language processing (NLP) that measures the amount of personal opinion, emotion, or judgment in a piece of text.<\/p> <p>The score runs on a scale from 0.0 to 1.0:<\/p> <ul> <li>0.0 (pure objectivity): The text contains only verifiable facts. No adjectives, no feelings.\u00a0<em>Example: \u201cThe iPhone 15 was released in September 2023.\u201d<\/em><\/li> <li>1.0 (pure subjectivity): The text contains only personal opinions, emotions, or intense descriptors.\u00a0<em>Example: \u201cThe iPhone 15 is an absolutely stunning masterpiece that I love.\u201d<\/em><\/li> <\/ul> <p>AI doesn\u2019t want dry Wikipedia text (0.1), nor does it want unhinged opinion (0.9). It wants the \u201canalyst voice.\u201d It prefers sentences that explain\u00a0<em>how<\/em>\u00a0a fact applies, rather than just stating the stat alone.<\/p> <p>The \u201cwinning\u201d tone looks like this (score ~0.5): \u201c<em>While the iPhone 15 features a standard A16 chip (fact), its performance in low-light photography makes it a superior choice for content creators (analysis\/opinion).<\/em>\u201d<\/p> <h3>5. 
Business-Grade Writing<\/h3> <figure id=\"attachment_567599\" class=\"wp-caption aligncenter\" style=\"width: 1600px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/business-grade-977.png\"  width=\"1600\" height=\"1200\" class=\"size-full wp-image-567599\" srcset=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/business-grade-977-384x288.png 384w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/business-grade-977-425x319.png 425w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/business-grade-977-480x360.png 480w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/business-grade-977-680x510.png 680w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/business-grade-977-768x576.png 768w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/business-grade-977-850x638.png 850w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/business-grade-977-1024x768.png 1024w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/business-grade-977-1120x840.png 1120w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/business-grade-977-1280x720.png 1280w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/business-grade-977-1300x680.png 1300w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/business-grade-977-1536x1152.png 1536w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/02\/business-grade-977.png 1600w\" sizes=\"auto, (max-width: 1600px) 100vw, 1600px\" loading=\"lazy\" title=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\u63d2\u56fe8\" alt=\"The Science Of How AI Pays Attention via @sejournal, @Kevin_Indig\u63d2\u56fe8\" \/><figcaption class=\"wp-caption-text\">Image Credit: Kevin Indig<\/figcaption><\/figure> <p>Business-grade writing (think\u00a0<em>The 
Economist\u00a0<\/em>or\u00a0<em>Harvard Business Review<\/em>) gets more citations. \u201cWinners\u201d have a Flesch-Kincaid score of 16 (college level) compared to the \u201closers\u201d with 19.1 (academic\/PhD level).<\/p> <p>Even for complex topics, complex writing hurts. A grade 19 score means sentences are long, winding, and filled with multisyllabic jargon. The AI prefers simple subject-verb-object structures with short to moderately long sentences, because they are easier to extract facts from.<\/p> <h2><strong>Conclusion<\/strong><\/h2> <p>The \u201cski ramp\u201d pattern quantifies a misalignment between narrative writing and information retrieval. The algorithm interprets the slow reveal as a lack of confidence. It prioritizes the immediate classification of entities and facts.<\/p> <p>High-visibility content functions more like a structured briefing than a story.<\/p> <p>This imposes a \u201cclarity tax\u201d on the writer. The winners in this dataset rely on business-grade vocabulary and high entity density, disproving the theory that AI rewards \u201cdumbing down\u201d content (with\u00a0exceptions).<\/p> <p>We\u2019re not writing only for robots \u2026 yet. But the gap between human preferences and machine constraints is closing. In business writing, humans scan for insights. By front-loading the conclusion, we satisfy both the algorithm\u2019s architecture and the human reader\u2019s scarcity of time.<\/p> <h2><strong>Methodology<\/strong><\/h2> <p>To understand exactly <em>where<\/em>\u00a0and\u00a0<em>why<\/em>\u00a0AI cites content, we analyzed the code.<\/p> <p>All data in this research comes from Gauge.<\/p> <ul> <li>Gauge provided roughly 3 million AI answers from ChatGPT, alongside 30 million citations. Each citation URL\u2019s web content was scraped at the time of the answer, allowing a direct comparison between the live web content and the answer itself. Both raw HTML and plaintext were scraped.<\/li> <\/ul> <h3><strong>1. 
The Dataset<\/strong><\/h3> <p>We started with a universe of 1.2 million search results and AI-generated answers. From this, we isolated 18,012 verified citations for positional analysis and 11,022 citations for \u201clinguistic DNA\u201d analysis.<\/p> <ul> <li><strong>Significance:<\/strong> This sample size is large enough to produce a p-value that rounds to 0.0 (<em>p &lt; 0.0001<\/em>), meaning the patterns we found are statistically robust.<\/li> <\/ul> <h3><strong>2. The \u201cHarvester\u201d Engine<\/strong><\/h3> <p>To find exactly which sentence the AI was quoting, we used semantic embeddings (a neural-network approach).<\/p> <ul> <li><strong>The Model:<\/strong> We used all-MiniLM-L6-v2, a sentence-transformer model that understands meaning, not just keywords.<\/li> <li><strong>The Process:<\/strong> We converted every AI answer and every sentence of the source text into 384-dimensional vectors. We then matched them using cosine similarity.<\/li> <li><strong>The Filter:<\/strong> We applied a strict similarity threshold (0.55) to discard weak matches or hallucinations, ensuring we only analyzed high-confidence citations.<\/li> <\/ul> <h3><strong>3. The Metrics<\/strong><\/h3> <p>Once we found the exact match, we measured two things:<\/p> <ul> <li><strong>Positional Depth:<\/strong> We calculated exactly where the cited text appeared in the HTML (e.g., at the 10% mark vs. the 90% mark).<\/li> <li><strong>Linguistic DNA:<\/strong> We compared \u201cwinners\u201d (cited intros) vs. 
\u201closers\u201d (skipped intros) using natural language processing (NLP) to measure: <ul> <li><strong>Definition Rate:<\/strong> Presence of definitive verbs (<em>is, are, refers to<\/em>).<\/li> <li><strong>Entity Density:<\/strong> Frequency of proper nouns (brands, tools, people).<\/li> <li><strong>Subjectivity:<\/strong> A sentiment score from 0.0 (fact) to 1.0 (opinion).<\/li> <\/ul> <\/li> <\/ul> <hr\/> <p><em>Featured Image: Paulo Bobita\/Search Engine Journal<\/em><\/p> <\/div> ","protected":false},"excerpt":{"rendered":"<p>Boost your skills with Growth Memo\u2019s weekly expert insights. Subscribe for free! This week, I share my findings from analyzing 1.2 million ChatGPT responses to answer the question of how to improve your chances of getting cited. Image Credit: Kevin Indig For 20 years, SEOs have written \u201cultimate guides\u201d designed to keep humans on the page. 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1425,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[16],"tags":[10886,555,8243,11184,80],"class_list":["post-3678","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-accessibility","tag-attention","tag-kevin_indig","tag-pays","tag-science","tag-sejournal"],"acf":[],"_links":{"self":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/3678","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3678"}],"version-history":[{"count":0,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/3678\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/media\/1425"}],"wp:attachment":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3678"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3678"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3678"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
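The Methodology section describes the "Harvester" matching pipeline in prose: embed the AI answer and every source sentence, keep matches whose cosine similarity clears 0.55, then record each match's positional depth. The sketch below illustrates that flow under a loud assumption: a toy bag-of-words counter stands in for the 384-dimensional all-MiniLM-L6-v2 sentence-transformer vectors the study used, so only the threshold (0.55) and the depth metric mirror the article; the function names and example sentences are hypothetical.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" -- a stand-in for the 384-dim
    # all-MiniLM-L6-v2 vectors used in the study (swap in
    # SentenceTransformer("all-MiniLM-L6-v2") for a real run).
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity over sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def harvest(answer: str, source_sentences: list[str], threshold: float = 0.55):
    # Keep only high-confidence citation matches (similarity >= 0.55,
    # the article's filter) and record positional depth:
    # 0.0 = top of the source, 1.0 = bottom.
    ans_vec = embed(answer)
    n = len(source_sentences)
    hits = []
    for i, sentence in enumerate(source_sentences):
        sim = cosine(ans_vec, embed(sentence))
        if sim >= threshold:
            hits.append({"sentence": sentence,
                         "similarity": round(sim, 2),
                         "depth": i / max(n - 1, 1)})
    return hits
```

Under the article's ski-ramp bucketing, depths below 0.3 would count as "intro" citations, 0.3-0.7 as "middle," and above 0.7 as "conclusion."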