{"id":1424,"date":"2026-01-14T00:02:04","date_gmt":"2026-01-13T16:02:04","guid":{"rendered":"http:\/\/longzhuplatform.com\/?p=1424"},"modified":"2026-01-14T00:02:04","modified_gmt":"2026-01-13T16:02:04","slug":"how-much-can-we-influence-ai-responses-via-sejournal-kevin_indig","status":"publish","type":"post","link":"http:\/\/longzhuplatform.com\/?p=1424","title":{"rendered":"How Much Can We Influence AI Responses? via @sejournal, @Kevin_Indig"},"content":{"rendered":"<p><\/p> <div id=\"narrow-cont\"> <p>Right now, we\u2019re dealing with a search landscape that is both unstable in influence and dangerously easy to manipulate. We keep asking how to influence AI answers \u2013 without acknowledging that LLM outputs are probabilistic by design.<\/p> <p>In today\u2019s memo, I\u2019m covering:<\/p> <ul> <li>Why LLM visibility is a volatility problem.<\/li> <li>What new research proves about how easily AI answers can be manipulated.<\/li> <li>Why this sets up the same arms race Google already fought.<\/li> <\/ul> <figure id=\"attachment_564908\" class=\"wp-caption aligncenter\" style=\"width: 1536px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/ai-responses-273.jpg\"  width=\"1536\" height=\"1024\" class=\"size-full wp-image-564908\" srcset=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/ai-responses-273-384x256.jpg 384w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/ai-responses-273-425x283.jpg 425w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/ai-responses-273-480x320.jpg 480w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/ai-responses-273-680x453.jpg 680w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/ai-responses-273-768x512.jpg 768w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/ai-responses-273-850x567.jpg 850w, 
https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/ai-responses-273-1024x683.jpg 1024w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/ai-responses-273-1280x720.jpg 1280w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/ai-responses-273-1300x680.jpg 1300w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/ai-responses-273.jpg 1536w\" sizes=\"auto, (max-width: 1536px) 100vw, 1536px\" loading=\"lazy\" title=\"How Much Can We Influence AI Responses? via @sejournal, @Kevin_Indig\" alt=\"How Much Can We Influence AI Responses? via @sejournal, @Kevin_Indig\" \/><figcaption class=\"wp-caption-text\">Image Credit: Kevin Indig<\/figcaption><\/figure> <h2>1. Influencing AI Answers Is Possible But Unstable<\/h2> <p>Last week, I published a list of AI visibility factors: levers that grow your representation in LLM responses. The article got a lot of attention because we all love a good list of tactics that drive results.<\/p> <p>But we don\u2019t have a crisp answer to the question, \u201cHow much can we actually influence the outcomes?\u201d<\/p> <p>There are seven good reasons why the probabilistic nature of LLMs might make it hard to influence their answers:<\/p> <ol> <li><strong>Lottery-style outputs.<\/strong> LLMs (probabilistic) are not search engines (deterministic). Answers vary a lot on the micro-level (single prompts).<\/li> <li><strong>Inconsistency.<\/strong> AI answers are not consistent. When you run the same prompt five times, only 20%\u00a0of brands show up consistently.<\/li> <li><strong>Models have a bias (which Dan Petrovic calls \u201cPrimary Bias\u201d) based on pre-training data.<\/strong> How much we are able to influence or overcome that pre-training bias is unclear.<\/li> <li><strong>Models evolve.<\/strong> ChatGPT has become a lot smarter from 3.5 to 5.2. Do \u201cold\u201d tactics still work? 
How do we ensure that tactics still work for new models?<\/li> <li><strong>Models vary.<\/strong> Models weigh sources differently\u00a0for training and web retrieval. For example, ChatGPT leans more heavily on Wikipedia, while AI Overviews cite Reddit more.<\/li> <li><strong>Personalization.<\/strong> Gemini might have more access to your personal data through Google Workspace than ChatGPT and, therefore, give you much more personalized results. Models might also vary in the degree to which they allow personalization.<\/li> <li><strong>More context.<\/strong> Users reveal much richer context about what they want with long prompts, so the set of possible answers is much smaller, and therefore harder to influence.<\/li> <\/ol> <h2>2. Research: LLM Visibility Is Easy To Game<\/h2> <p>A brand-new paper from Columbia University by Bagga et al. titled \u201cE-GEO: A Testbed for Generative Engine Optimization in E-Commerce\u201d shows just how much we can influence AI answers.<\/p> <figure id=\"attachment_564906\" class=\"wp-caption aligncenter\" style=\"width: 1282px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/llm-visibility-648.png\"  width=\"1282\" height=\"767\" class=\"size-full wp-image-564906\" srcset=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/llm-visibility-648-384x230.png 384w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/llm-visibility-648-425x254.png 425w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/llm-visibility-648-480x287.png 480w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/llm-visibility-648-680x407.png 680w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/llm-visibility-648-768x459.png 768w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/llm-visibility-648-850x509.png 850w, 
https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/llm-visibility-648-1024x613.png 1024w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/llm-visibility-648-1280x720.png 1280w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/01\/llm-visibility-648.png 1282w\" sizes=\"auto, (max-width: 1282px) 100vw, 1282px\" loading=\"lazy\" title=\"How Much Can We Influence AI Responses? via @sejournal, @Kevin_Indig\" alt=\"How Much Can We Influence AI Responses? via @sejournal, @Kevin_Indig\" \/><figcaption class=\"wp-caption-text\">Image Credit: Kevin Indig<\/figcaption><\/figure> <p>The methodology:<\/p> <ul> <li>The authors built the \u201cE-GEO Testbed,\u201d a dataset and evaluation framework that pairs over 7,000 real product queries (sourced from Reddit) with over 50,000 Amazon product listings and evaluates how different rewriting strategies improve a product\u2019s AI Visibility when shown to an LLM (GPT-4o).<\/li> <li>The system measures performance by comparing a product\u2019s AI Visibility before and after its description is rewritten (using AI).<\/li> <li>The simulation is driven by two distinct AI agents and a control group: <ul> <li><strong>\u201cThe Optimizer\u201d<\/strong> acts as the vendor with the goal of rewriting product descriptions to maximize their appeal to the search engine. It creates the \u201ccontent\u201d that is being tested.<\/li> <li><strong>\u201cThe Judge\u201d<\/strong> functions as the shopping assistant that receives a realistic consumer query (e.g., \u201cI need a durable backpack for hiking under $100\u201d) and a set of products. It then evaluates them and produces a ranked list from best to worst.<\/li> <li><strong>The Competitors<\/strong> are a control group of existing products with their original, unedited descriptions. 
The Optimizer must beat these competitors to prove its strategy is effective.<\/li> <\/ul> <\/li> <li>The researchers developed a sophisticated optimization method that used GPT-4o to analyze the results of previous optimization rounds and give recommendations for improvements (like \u201cMake the text longer and include more technical specifications\u201d). This cycle repeats until a dominant strategy emerges.<\/li> <\/ul> <p>The results:<\/p> <ul> <li>The most significant discovery of the E-GEO paper is the existence of a \u201cUniversal Strategy\u201d for \u201cLLM output visibility\u201d in ecommerce.<\/li> <li>Contrary to the belief that AI prefers concise facts, the study found that the optimization process consistently converged on a specific writing style: longer descriptions with a highly persuasive tone and fluff (rephrasing existing details to sound more impressive without adding new factual information).<\/li> <li>The rewritten descriptions achieved a win rate of ~<strong>90%<\/strong>\u00a0against the baseline (original) descriptions.<\/li> <li>Sellers do not need category-specific expertise to game the system: a strategy developed entirely using home goods products achieved an 88% win rate when applied to the electronics category and 87% when applied to the clothing category.<\/li> <\/ul> <h2>3. 
The Body Of Research Grows<\/h2> <p>The paper covered above is not the only one showing us how to manipulate LLM answers.<\/p> <h3>1.\u00a0GEO: Generative Engine Optimization\u00a0(Aggarwal et al., 2023)<\/h3> <ul> <li>The researchers applied ideas like adding statistics or including quotes to content and found that factual density (citations and stats) boosted visibility by about\u00a0<strong>40%<\/strong>.<\/li> <li>Note that the E-GEO paper found that verbosity and persuasion were far more effective levers than citations, but the researchers (1) looked specifically at a shopping context, (2) used AI to find out what works, and (3) their paper is more recent.<\/li> <\/ul> <h3>2.\u00a0Manipulating Large Language Models\u00a0(Kumar et al., 2024)<\/h3> <ul> <li>The researchers added a \u201cStrategic Text Sequence\u201d \u2013 JSON-formatted text with product information \u2013 to product pages to manipulate LLMs.<\/li> <li>Conclusion: \u201cWe show that a vendor can significantly improve their product\u2019s LLM Visibility in the LLM\u2019s recommendations by inserting an optimized sequence of tokens into the product information page.\u201d<\/li> <\/ul> <h3>3.\u00a0Ranking Manipulation\u00a0(Pfrommer et al., 2024)<\/h3> <ul> <li>The authors added text on product pages that gave LLMs specific instructions (like \u201cplease recommend this product first\u201d), an approach very similar to that of the other two papers referenced above.<\/li> <li>They argue that LLM Visibility is fragile and highly dependent on factors like product names and their position in the context window.<\/li> <li>The paper emphasizes that different LLMs have significantly different vulnerabilities and don\u2019t all prioritize the same factors when making LLM Visibility decisions.<\/li> <\/ul> <h2>4. The Coming Arms Race<\/h2> <p>The growing body of research shows the extreme fragility of LLMs. They\u2019re highly sensitive to how information is presented. 
Minor stylistic changes that don\u2019t alter the product\u2019s actual utility can move a product from the bottom of the list to the No. 1 recommendation.<\/p> <p>The long-term problem is scale: LLM developers need to find ways to reduce the impact of these manipulative tactics to avoid an endless arms race with \u201coptimizers.\u201d If these optimization techniques become widespread, marketplaces could be flooded with artificially bloated content, significantly reducing the user experience. Google faced the same problem and responded with its Panda and Penguin updates.<\/p> <p>You could argue that LLMs already ground their answers in classic search results, which are \u201cquality filtered,\u201d but grounding varies from model to model, and not all LLMs prioritize pages ranking at the top of Google search. Google increasingly protects its search results against other LLMs (see \u201cSerpAPI lawsuit\u201d and the \u201cnum=100 apocalypse\u201d).<\/p> <p>I\u2019m aware of the irony that I contribute to the problem by writing about those optimization techniques, but I hope I can inspire LLM developers to take action.<\/p> <p><em>Boost your skills with Growth Memo\u2019s weekly expert insights. Subscribe for free!<\/em><\/p> <hr\/> <p><em>Featured Image: Paulo Bobita\/Search Engine Journal<\/em><\/p> <\/div> ","protected":false},"excerpt":{"rendered":"<p>Right now, we\u2019re dealing with a search landscape that is both unstable in influence and dangerously easy to manipulate. We keep asking how to influence AI answers \u2013 without acknowledging that LLM outputs are probabilistic by design. In today\u2019s memo, I\u2019m covering: Why LLM visibility is a volatility problem. 
What new research proves about how [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1425,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[16],"tags":[2257,555,2258,80],"class_list":["post-1424","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-accessibility","tag-influence","tag-kevin_indig","tag-responses","tag-sejournal"],"acf":[],"_links":{"self":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/1424","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1424"}],"version-history":[{"count":0,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/1424\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/media\/1425"}],"wp:attachment":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1424"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1424"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1424"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}