{"id":1123,"date":"2026-01-09T08:38:23","date_gmt":"2026-01-09T00:38:23","guid":{"rendered":"http:\/\/longzhuplatform.com\/?p=1123"},"modified":"2026-01-09T08:38:23","modified_gmt":"2026-01-09T00:38:23","slug":"the-guardian-google-ai-overviews-gave-misleading-health-advice-via-sejournal-mattgsouthern","status":"publish","type":"post","link":"http:\/\/longzhuplatform.com\/?p=1123","title":{"rendered":"The Guardian: Google AI Overviews Gave Misleading Health Advice via @sejournal, @MattGSouthern"},"content":{"rendered":"<div id=\"narrow-cont\"> <p>The Guardian published an investigation claiming health experts found inaccurate or misleading guidance in some AI Overview responses for medical queries. Google disputes the reporting and says many examples were based on incomplete screenshots.<\/p> <p>The Guardian said it tested health-related searches and shared AI Overview responses with charities, medical experts, and patient information groups. Google told The Guardian the \u201cvast majority\u201d of AI Overviews are factual and helpful.<\/p> <h2>What The Guardian Reported Finding<\/h2> <p>The Guardian said it tested a range of health queries and asked health organizations to review the AI-generated summaries. Several reviewers said the summaries included misleading or incorrect guidance.<\/p> <p>One example involved pancreatic cancer. Anna Jewell, director of support, research and influencing at Pancreatic Cancer UK, said advising patients to avoid high-fat foods was \u201ccompletely incorrect.\u201d She added that following that guidance \u201ccould be really dangerous and jeopardise a person\u2019s chances of being well enough to have treatment.\u201d<\/p> <p>The reporting also highlighted mental health queries. 
Stephen Buckley, head of information at Mind, said some AI summaries for conditions such as psychosis and eating disorders offered \u201cvery dangerous advice\u201d and were \u201cincorrect, harmful or could lead people to avoid seeking help.\u201d<\/p> <p>The Guardian cited a cancer screening example too. Athena Lamnisos, chief executive of the Eve Appeal cancer charity, said a pap test being listed as a test for vaginal cancer was \u201ccompletely wrong information.\u201d<\/p> <p>Sophie Randall, director of the Patient Information Forum, said the examples showed \u201cGoogle\u2019s AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people\u2019s health.\u201d<\/p> <p>The Guardian also reported that repeating the same search could produce different AI summaries at different times, pulling from different sources.<\/p> <h2>Google\u2019s Response<\/h2> <p>Google disputed both the examples and the conclusions.<\/p> <p>A spokesperson told The Guardian that many of the health examples shared were \u201cincomplete screenshots,\u201d but from what the company could assess they linked \u201cto well-known, reputable sources and recommend seeking out expert advice.\u201d<\/p> <p>Google told The Guardian the \u201cvast majority\u201d of AI Overviews are \u201cfactual and helpful,\u201d and that it \u201ccontinuously\u201d makes quality improvements. 
The company also argued that AI Overviews\u2019 accuracy is \u201con a par\u201d with other Search features, including featured snippets.<\/p> <p>Google added that when AI Overviews misinterpret web content or miss context, it will take action under its policies.<\/p> <p><strong>See also<\/strong>: Google AI Overviews Impact On Publishers &amp; How To Adapt Into 2026<\/p> <h2>The Broader Accuracy Context<\/h2> <p>This investigation lands in the middle of a debate that\u2019s been running since AI Overviews expanded in 2024.<\/p> <p>During the initial rollout, AI Overviews drew attention for bizarre results, including suggestions involving glue on pizza and eating rocks. Google later said it would reduce the scope of queries that trigger AI-written summaries and refine how the feature works.<\/p> <p>I covered that launch, and the early accuracy problems quickly became part of the public narrative around AI summaries. The question then was whether the issues were edge cases or something more structural.<\/p> <p>More recently, data from Ahrefs suggests medical YMYL queries are more likely than average to trigger AI Overviews. In its analysis of 146 million SERPs, Ahrefs reported that 44.1% of medical YMYL queries triggered an AI Overview. That\u2019s more than double the overall baseline rate in the dataset.<\/p> <p>Separate research on medical Q&amp;A in LLMs has pointed to citation-support gaps in AI-generated answers. One evaluation framework, SourceCheckup, found that many responses were not fully supported by the sources they cited, even when systems provided links.<\/p> <h2>Why This Matters<\/h2> <p>AI Overviews appear above ranked results. When the topic is health, errors carry more weight.<\/p> <p>Publishers have spent years investing in documented medical expertise to meet Google\u2019s quality standards for health content. 
This investigation puts the same spotlight on Google\u2019s own summaries when they appear at the top of results.<\/p> <p>The Guardian\u2019s reporting also highlights a practical problem. The same query can produce different summaries at different times, making it harder to verify what you saw by running the search again.<\/p> <h2>Looking Ahead<\/h2> <p>Google has previously adjusted AI Overviews after viral criticism. Its response to The Guardian indicates it expects AI Overviews to be judged like other Search features, not held to a separate standard.<\/p> <\/div> ","protected":false},"excerpt":{"rendered":"<p>The Guardian published an investigation claiming health experts found inaccurate or misleading guidance in some AI Overview responses for medical queries. Google disputes the reporting and says many examples were based on incomplete screenshots. The Guardian said it tested health-related searches and shared AI Overview responses with charities, medical experts, and patient information groups. 
Google [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1124,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[16],"tags":[543,540,75,539,542,90,541,456,80],"class_list":["post-1123","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-accessibility","tag-advice","tag-gave","tag-google","tag-guardian","tag-health","tag-mattgsouthern","tag-misleading","tag-overviews","tag-sejournal"],"acf":[],"_links":{"self":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/1123","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1123"}],"version-history":[{"count":0,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/1123\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/media\/1124"}],"wp:attachment":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1123"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1123"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1123"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}