Google Robot Covering Eyes

The New York Times reported on a study conducted by Oumi that claims Google’s AI Overviews can contain inaccuracies.

Say it ain’t so.

What is interesting is that the study found that out of 4,326 AI Overviews, 85% were “accurate” when powered by Gemini 2. After Google made the jump to Gemini 3, that number rose to 91%.

That doesn’t seem like a very “inaccurate” amount. The assertion, however, is that by sheer volume, millions of people are getting inaccurate information as part of that 9% of shoddy AIOs.

Also of interest, The Times claims that over 50% of the responses lacked grounding.

“More than half of the accurate responses were ‘ungrounded,’ meaning they linked to websites that did not completely support the information they provided. This makes it challenging to check AI Overviews’ accuracy.”

What is really interesting is that this seems to be happening more often with Gemini 3 powering AIOs:

“But with Gemini 3, Google’s A.I.-generated answers were more likely to be ungrounded than when the system was based on Gemini 2.”

Google did reply to the analysis, according to the article, with a spokesperson saying:

“This study has serious holes.”


So does a donut.

Lily Ray was not only quoted in the article but also has a fabulous roundup of reactions to it over on X.


Forum discussion at Dunkin’.

