{"id":6555,"date":"2026-04-15T22:22:27","date_gmt":"2026-04-15T14:22:27","guid":{"rendered":"http:\/\/longzhuplatform.com\/?p=6555"},"modified":"2026-04-15T22:22:27","modified_gmt":"2026-04-15T14:22:27","slug":"the-ai-slop-loop-via-sejournal-lilyraynyc","status":"publish","type":"post","link":"http:\/\/longzhuplatform.com\/?p=6555","title":{"rendered":"The AI Slop Loop via @sejournal, @lilyraynyc"},"content":{"rendered":"<p><\/p> <div id=\"narrow-cont\"> <p>Last year, after spending a few days at a work summit in Austria, I asked Perplexity for the latest news related to SEO and AI search. It responded with details about a supposed \u201cSeptember 2025 \u2018Perspective\u2019 Core Algorithm Update\u201d that Google had just rolled out, emphasizing \u201cdeeper expertise\u201d and \u201ccompletion of the user journey.\u201d<\/p> <p>It sounded plausible enough \u2026 if you don\u2019t live and breathe Google core updates. Unfortunately for Perplexity, I do.<\/p> <p>I knew instantly that this information wasn\u2019t right. For one, Google hasn\u2019t named core updates in years. It also already had SERP features called \u201cPerspectives.\u201d And if a core update had actually rolled out while I was away, I would\u2019ve been flooded with messages. So I checked Perplexity\u2019s sources \u2026 and, surprise! Both citations came from made-up, AI-generated slop on a couple of SEO agency blogs, confidently fabricating details about an algorithm update <strong>that never actually happened.<\/strong><\/p> <p>Like a bad game of telephone, this fake SEO news spread across multiple websites \u2013 likely driven by AI systems scanning and regurgitating information regardless of accuracy, all in the race to publish and scale \u201cfresh\u201d content. 
This is how we end up with this mess:<\/p> <figure> <p><figure class=\"wp-caption aligncenter\" style=\"width: 601px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/https_3a_2f_2fsubstack-post-media.s3.amazonaws-sej-165298.jpg\" width=\"601\" height=\"804\"  class=\"\" loading=\"lazy\" title=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe\" alt=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe\" \/><figcaption class=\"wp-caption-text\">Image Credit: Lily Ray<\/figcaption><\/figure> <\/p> <\/figure> <p>This bad information reinforces itself to become the official narrative. To this day, you can ask an LLM of your choice (including ChatGPT, AI Mode, and AI Overviews) about the September 2025 \u201cPerspectives\u201d update, and they will confidently answer with information about how it \u201c<em>fundamentally shifted how search results are ranked:<\/em>\u201d<\/p> <figure> <p><figure class=\"wp-caption aligncenter\" style=\"width: 1018px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/https_3a_2f_2fsubstack-post-media.s3.amazonaws-sej-883283.png\" width=\"1018\" height=\"559\"  class=\"\" loading=\"lazy\" title=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe1\" alt=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe1\" \/><figcaption class=\"wp-caption-text\">Image Credit: Lily Ray<\/figcaption><\/figure> <\/p> <\/figure> <p>Or that it \u201c<em>shifted what \u2018good content\u2019 actually means in practice.<\/em>\u201d<\/p> <figure> <p><figure class=\"wp-caption aligncenter\" style=\"width: 1017px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/https_3a_2f_2fsubstack-post-media.s3.amazonaws-sej-414859.png\" width=\"1017\" height=\"304\"  class=\"\" loading=\"lazy\" title=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe2\" alt=\"The AI Slop Loop via 
@sejournal, @lilyraynyc\u63d2\u56fe2\" \/><figcaption class=\"wp-caption-text\">Image Credit: Lily Ray<\/figcaption><\/figure> <\/p> <\/figure> <p>The problem is: <strong>the \u201cSeptember 2025 \u201cPerspectives\u201d update never happened.<\/strong> It never affected rankings. It never shifted anything about good content. Because it doesn\u2019t actually exist.<\/p> <p>Ironically, when you go on to probe the language model about this, it seems to know this is the case:<\/p> <figure> <p><figure class=\"wp-caption aligncenter\" style=\"width: 1057px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/https_3a_2f_2fsubstack-post-media.s3.amazonaws-sej-604298.jpg\" width=\"1057\" height=\"546\"  class=\"\" loading=\"lazy\" title=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe3\" alt=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe3\" \/><figcaption class=\"wp-caption-text\">Image Credit: Lily Ray<\/figcaption><\/figure> <\/p> <\/figure> <p>I tweeted about this incident shortly after it happened, which got the CEO of Perplexity\u2019s attention; he tagged his head of search in the tweet comments.<\/p> <figure id=\"attachment_572107\" class=\"wp-caption aligncenter\" style=\"width: 592px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/lily-ray-tweet-854.png\"  width=\"592\" height=\"702\" class=\"size-full wp-image-572107\" srcset=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/lily-ray-tweet-854-384x455.png 384w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/lily-ray-tweet-854-425x504.png 425w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/lily-ray-tweet-854-480x569.png 480w, https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/lily-ray-tweet-854.png 592w\" sizes=\"auto, (max-width: 592px) 100vw, 592px\" loading=\"lazy\" title=\"The AI Slop Loop via 
@sejournal, @lilyraynyc\u63d2\u56fe4\" alt=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe4\" \/><figcaption class=\"wp-caption-text\">Screenshot from X, April 2026<\/figcaption><\/figure> <p>This isn\u2019t a one-off incident. It\u2019s a pattern I\u2019ve seen countless times in AI search responses, especially on topics related to SEO and AI search (GEO\/AEO). And I have a working theory on how it spreads: one AI-generated article hallucinates a detail, sites running AI content pipelines scrape and regurgitate it, more AI-generated sites scrape the same misinformation, and suddenly a made-up algorithm update has citations. For a RAG-based system like Perplexity or AI Overviews, enough citations are basically all it needs to treat something as fact, regardless of whether it\u2019s actually true.<\/p> <figure> <p><figure class=\"wp-caption aligncenter\" style=\"width: 730px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/https_3a_2f_2fsubstack-post-media.s3.amazonaws-sej-210756.png\" width=\"730\" height=\"485\"  class=\"\" loading=\"lazy\" title=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe5\" alt=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe5\" \/><figcaption class=\"wp-caption-text\">I used Claude to help visualize the \u201cAI Slop Loop\u201d \u2013 the cycle of AI-generated misinformation (Image Credit: Lily Ray)<\/figcaption><\/figure> <\/p> <\/figure> <p>At this point, I\u2019d consider this common. I recently had a client send me SEO\/GEO information that was factually incorrect, pulled straight from AI-generated slop on a random, vibe-coded agency blog. The client had no idea. 
I believe that if you\u2019re trying to learn about SEO or AI search directly from an LLM, this is, unfortunately, an increasingly likely outcome.<\/p> <p>I ran similar testing during Google\u2019s March 2026 core update and found multiple AI-generated articles already claiming to share the \u201cwinners and losers\u201d while the update was still rolling out.<\/p> <p>The articles start with vague, generic filler about core updates that doesn\u2019t actually say anything:<\/p> <figure> <p><figure class=\"wp-caption aligncenter\" style=\"width: 1456px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/https_3a_2f_2fsubstack-post-media.s3.amazonaws-sej-896744.jpg\" width=\"1456\" height=\"628\"  class=\"\" loading=\"lazy\" title=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe6\" alt=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe6\" \/><figcaption class=\"wp-caption-text\">Image Credit: Lily Ray<\/figcaption><\/figure> <\/p> <\/figure> <p>Then they list \u201cwinners and losers\u201d without citing a single site, leaning on vague, generalized claims that sound plausible and fill the void left by a lack of reliable information:<\/p> <figure> <p><figure class=\"wp-caption aligncenter\" style=\"width: 771px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/https_3a_2f_2fsubstack-post-media.s3.amazonaws-sej-129983.jpg\" width=\"771\" height=\"316\"  class=\"\" loading=\"lazy\" title=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe7\" alt=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe7\" \/><figcaption class=\"wp-caption-text\">Image Credit: Lily Ray<\/figcaption><\/figure> <\/p> <\/figure> <p>Unsurprisingly, their sites are filled with AI-generated images, AI support chatbots, and other clear signals that little \u2013 if any \u2013 human involvement went into creating this content.<\/p> <figure> <p><figure class=\"wp-caption 
aligncenter\" style=\"width: 1456px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/https_3a_2f_2fsubstack-post-media.s3.amazonaws-sej-276169.jpg\" width=\"1456\" height=\"526\"  class=\"\" loading=\"lazy\" title=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe8\" alt=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe8\" \/><figcaption class=\"wp-caption-text\">Image Credit: Lily Ray<\/figcaption><\/figure> <\/p> <\/figure> <h2>The Era Of AI Misinformation<\/h2> <p>If someone on the internet says it, according to AI, it must be true.<\/p> <p>That\u2019s the reality for the vast majority of people using AI search today. Only about 50 million of ChatGPT\u2019s 900 million weekly active users are paying subscribers, meaning roughly <strong>94% are on the free tier.<\/strong> Google\u2019s AI Overviews and AI Mode are free by design \u2013 and AI Overviews reached over 2 billion monthly active users as of mid-2025.<\/p> <p>These are the models most AI users are currently interacting with, and they have no real mechanism for distinguishing between information that\u2019s true and information that\u2019s simply repeated across enough sources. Repetition is treated as consensus. If enough sources say it, it becomes fact, regardless of whether any of those sources involved a human who actually verified the claim.<\/p> <h3>Putting The Problem To The Test<\/h3> <p>I recently spoke to journalists from both the BBC and the New York Times about the problem of misinformation in AI-generated responses. 
In the case of the BBC article, the author Thomas Germain and I tested publishing fictitious blog posts on our personal sites to see whether AI Overviews would present the made-up information as fact, and how quickly.<\/p> <p>Even knowing how bad the problem was, I was alarmed by the results.<\/p> <p>On my personal blog, in January 2026, I published an AI-generated article about a fake Google core update, which never actually happened. I included the detail that Google \u201capproved the update between slices of leftover pizza.\u201d Within 24 hours, Google\u2019s AI Overviews was confidently serving this fabricated information back to users:<\/p> <p><em>(Note: I\u2019ve since deleted the article from my site because it was showing up in people\u2019s feeds and being covered on external sites, further contributing to the exact problem I\u2019m pointing out here!)<\/em><\/p> <figure> <p><figure class=\"wp-caption aligncenter\" style=\"width: 909px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/https_3a_2f_2fsubstack-post-media.s3.amazonaws-sej-848406.jpg\" width=\"909\" height=\"368\"  class=\"\" loading=\"lazy\" title=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe9\" alt=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe9\" \/><figcaption class=\"wp-caption-text\">Image Credit: Lily Ray<\/figcaption><\/figure> <\/p> <\/figure> <p>First, AI Overviews confirmed that there was indeed a core update in January 2026. As a reminder: There was not. 
My site was the only source making this claim, and that was apparently enough to trigger the AI Overview.<\/p> <p>Next, I asked it about the pizza, and it responded accordingly:<\/p> <figure> <p><figure class=\"wp-caption aligncenter\" style=\"width: 979px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/https_3a_2f_2fsubstack-post-media.s3.amazonaws-sej-135685.jpg\" width=\"979\" height=\"627\"  class=\"\" loading=\"lazy\" title=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe10\" alt=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe10\" \/><figcaption class=\"wp-caption-text\">Image Credit: Lily Ray<\/figcaption><\/figure> <\/p> <\/figure> <p>Better yet, the AI Overview found a way to connect my fabricated pizza detail to a real incident: Google\u2019s struggles with pizza-related queries in 2024. It didn\u2019t just regurgitate the lie \u2013 it contextualized it.<\/p> <p>ChatGPT, which is believed to use Google\u2019s search results, quickly surfaced the same fabricated information, though it at least flagged that the announcement didn\u2019t match Google\u2019s formal communications:<\/p> <figure> <p><figure class=\"wp-caption aligncenter\" style=\"width: 935px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/https_3a_2f_2fsubstack-post-media.s3.amazonaws-sej-165156.jpg\" width=\"935\" height=\"518\"  class=\"\" loading=\"lazy\" title=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe11\" alt=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe11\" \/><figcaption class=\"wp-caption-text\">Image Credit: Lily Ray<\/figcaption><\/figure> <\/p> <\/figure> <p>I deleted my article after getting messages from people who had seen my fake information circulating via RSS feeds and scrapers. I knew it was easy to influence AI responses. 
I didn\u2019t know it would be <em>that<\/em> easy.<\/p> <p>I also wondered whether my site had an advantage, given its strong backlink profile and established authority in the SEO space.<\/p> <p>So I spoke to the BBC journalist, Thomas Germain, and he put this to the test on his personal site, which generally received very little organic traffic. He published a fictitious article about the \u201cBest Tech Journalists at Eating Hot Dogs,\u201d calling himself the No. 1 best (in true SEO fashion).<\/p> <p>According to Thomas\u2019 article in the BBC, within 24 hours, \u201cGoogle parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn\u2019t fooled.\u201d<\/p> <p>To be fair: the query Thomas chose was niche enough that very few users would ever actually search for it, which is exactly what Google pointed out in its response to the BBC. When there are \u201cdata voids,\u201d Google said, this can lead to lower-quality results, and the company is \u201cworking to stop AI Overviews showing up in these cases.\u201d My main question is: <em>When<\/em>? The product has already been live for 2 years!<\/p> <h2>Why Data Voids Aren\u2019t A Great Excuse<\/h2> <p>Data voids may contribute to the problem, but in my opinion, they don\u2019t excuse it. These AI responses are being consumed by hundreds of millions of users, and \u201cwe\u2019re working on it\u201d isn\u2019t an answer when the systems are already deployed at that scale.<\/p> <p>In the New York Times article, \u201cHow Accurate Are Google\u2019s A.I. Overviews?,\u201d the actual scale of this problem was measured. According to the study, Google\u2019s AI Overviews were accurate 91% of the time. 
This sounds decent until you actually do the math: With Google processing over 5 trillion searches a year, a 9% error rate suggests that <strong>tens of millions of erroneous answers are generated by AI Overviews every hour.<\/strong><\/p> <p>To make matters worse: Even when AI Overviews were accurate, 56% of correct responses were \u201cungrounded,\u201d meaning <strong>the sources they linked to didn\u2019t fully support the information provided.<\/strong> So more than half the time, even when the answer happens to be right, a user clicking through to verify it would find sources that don\u2019t actually back up what they were just told. That number also <strong>got worse with the newer model<\/strong> \u2013 it was 37% with Gemini 2 and rose to 56% with Gemini 3.<\/p> <p>The NYT article drew hundreds of comments from users sharing their own experiences, and the frustration was palpable. The core complaint wasn\u2019t just that AI Overviews get things wrong \u2013 it\u2019s that they <strong>never admit uncertainty.<\/strong> AI Overviews deliver every answer with the same confident, authoritative tone, whether the information is right or completely fabricated, which means users have no way to distinguish reliable information from hallucination at a glance.<\/p> <p>As many commenters pointed out, this <strong>actually makes search slower<\/strong>: Instead of scanning a list of sources and evaluating them yourself, you now have to <strong>fact-check the AI\u2019s summary before doing your actual research<\/strong>. The tool, supposedly designed to save users time, is now creating double work for them.<\/p> <p>Some of the comments also echoed my concerns about AI answers citing made-up, AI-generated content. 
Multiple users described what amounts to the same misinformation cycle: AI systems training on AI-generated content, citing unvetted Reddit posts and Facebook comments as authoritative sources, and producing a self-reinforcing loop of degrading quality. Several commenters compared it to making a copy of a copy. Even the defenders of AI Overviews admitted they still need to verify everything, which sort of undermines the core premise: that AI-generated answers save users time and effort.<\/p> <h2>How \u201cSmarter\u201d LLMs Are Attempting To Fix The Problem<\/h2> <p>It\u2019s worth monitoring how the AI companies are trying to solve these problems. For example, using the RESONEO Chrome extension, you can observe clear differences in how ChatGPT\u2019s free-tier model (GPT-5.3) responds compared to GPT-5.4, the more capable model available only to paying subscribers.<\/p> <p>When asking about the recent March 2026 Core Algorithm Update, I used ChatGPT\u2019s more capable \u201cThinking\u201d model (5.4). The model goes through <em>six rounds<\/em> of thinking, much of which is clearly intended to keep low-quality and spammy information from making its way into the answer. 
It even appends the names of trusted authorities on core updates (Glenn Gabe &amp; Aleyda Solis) to its queries and limits the fan-out searches to their sites (site:gsqi.com and site:linkedin.com\/in\/glenngabe) to pull up higher-quality answers.<\/p> <figure> <p><figure class=\"wp-caption aligncenter\" style=\"width: 616px\"><img decoding=\"async\" src=\"https:\/\/cdn.searchenginejournal.com\/wp-content\/uploads\/2026\/04\/https_3a_2f_2fsubstack-post-media.s3.amazonaws-sej-13215.png\" width=\"616\" height=\"590\"  class=\"\" loading=\"lazy\" title=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe12\" alt=\"The AI Slop Loop via @sejournal, @lilyraynyc\u63d2\u56fe12\" \/><figcaption class=\"wp-caption-text\">Image Credit: Lily Ray<\/figcaption><\/figure> <\/p> <\/figure> <p>This is a step in the right direction, and the model produces measurably better answers. According to OpenAI\u2019s own launch announcement, GPT-5.4\u2019s individual claims are 33% less likely to be false, and its full responses are 18% less likely to contain errors compared to GPT-5.2. GPT-5.3, the model available to free users, also improved over its predecessor. By OpenAI\u2019s measurements, it produces <strong>26.8% fewer hallucinations<\/strong> than prior models with web search enabled, and 19.7% fewer without it.<\/p> <p>But <strong>these improvements are tiered<\/strong>. The most capable model is paywalled, and the free-tier model, while better than what came before, is still meaningfully less reliable. Other major AI platforms follow the same pattern: better reasoning and accuracy reserved for paying subscribers, <strong>faster and cheaper models for everyone else<\/strong>. 
The result is that the 94% of ChatGPT users on the free tier, and the billions of users interacting with free AI search products like AI Overviews, are getting answers from models that are <strong>more likely to be wrong and less equipped to flag uncertainty<\/strong>.<\/p> <p>This is the part that makes me most uncomfortable: Most of these users probably don\u2019t realize the gap exists. AI is being marketed everywhere: Super Bowl ads, billboards, and product launches framing AI as the future of knowledge. People see \u201cChatGPT\u201d or \u201cAI Overview\u201d and assume they\u2019re interacting with something that knows what it\u2019s talking about. They\u2019re probably not thinking about which model tier they\u2019re on, or whether a paid version would give them a materially different answer to the same question.<\/p> <p>I understand the economics. These companies need to scale, and offering free tiers drives adoption. But in my opinion, it is irresponsible to deploy these products to billions of people, frame them as \u201cintelligence,\u201d and then quietly reserve the more accurate versions for the fraction of users willing to pay. Especially when the free versions (including the one at the top of Google Search) are <em>this<\/em> susceptible to the kind of misinformation documented throughout this article.<\/p> <h2>The Burden Of Proof Has Shifted<\/h2> <p>The September 2025 \u201cPerspectives\u201d Google update still doesn\u2019t exist. But ask an LLM about it today, and it will still describe it with complete confidence. That hasn\u2019t changed in the months since I first flagged it, and it probably won\u2019t change anytime soon, because the content that fabricated it is still indexed, still cited, and still being used to generate new content that references it as fact. The AI slop misinformation cycle continues.<\/p> <p>This is what makes the problem so difficult to fix. It\u2019s not a single hallucination that can be patched. 
It\u2019s a feedback loop that compounds over time, and every day that these systems are live at scale, the loop gets harder to break. The AI-generated slop that seeded the original misinformation is now part of the training data and used as a retrieval source for the next batch of AI-generated answers.<\/p> <p>I don\u2019t think the answer is to stop using AI. But I do think it\u2019s worth being honest about what these products actually are right now: prediction engines that treat the volume of information as a proxy for its accuracy. Until that changes, <strong>the burden of fact-checking falls on the user<\/strong>. And most users don\u2019t know they\u2019re carrying it, let alone have the time or inclination to do it.<\/p> <p>I would warn marketers and publishers against taking SEO or GEO advice directly from large language models: the information is contaminated and should always be verified by real experts with experience in the field.<\/p> <hr\/> <p><em>This post was originally published on <u>Lily Ray NYC Substack<\/u>.<\/em><\/p> <hr\/> <p><em>Featured Image: elenabsl\/Shutterstock<\/em><\/p> <\/div> ","protected":false},"excerpt":{"rendered":"<p>Last year, after spending a few days at a work summit in Austria, I asked Perplexity for the latest news related to SEO and AI search. 
It responded with details about a supposed \u201cSeptember 2025 \u2018Perspective\u2019 Core Algorithm Update\u201d that Google had just rolled out, emphasizing \u201cdeeper expertise\u201d and \u201ccompletion of the user journey.\u201d It [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":6556,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[16],"tags":[9137,24433,80,902],"class_list":["post-6555","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-accessibility","tag-lilyraynyc","tag-loop","tag-sejournal","tag-slop"],"acf":[],"_links":{"self":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/6555","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6555"}],"version-history":[{"count":0,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/6555\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/media\/6556"}],"wp:attachment":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6555"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6555"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6555"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}