cross-posted from: https://lemmy.world/post/15864003

You know how Google’s new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to make sure the cheese wouldn’t slide off (pssst… please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of large language models (LLMs), which are what drives AI Overviews, and this feature “is still an unsolved problem.”

  • Lvxferre

    Caveat lector: I haven’t tested the AI from Google Search yet, due to country restrictions. However, I played quite a bit with Gemini, and odds are that Search’s AI and Gemini are either the same model or variations of each other.

    Pichai published earlier this week, just before criticism of the outputs really took off, these “hallucinations”

    At least a fair chunk of the crap being output is not caused by hallucinations, but by feeding garbage like this into Google’s AI.

    However, Pichai can’t be honest, right? If he were, he would be unwittingly emphasising that your content was fed into Google’s AI, without your consent, to generate those answers. Instead he’s dressing it up as the AI malfunctioning, aka “hallucinating”.

    But Pichai seems to downplay the errors.

    If the output of that AI is as unreliable as Gemini’s, it’s less “there are still times it’s going to get it wrong” and more “whether a statement is true or false depends on a coin toss”.

    That is not a reliable source of information. And yet they made it the default search experience, for at least some people (thankfully not me… yet).


    Pichai is a liar and a sorry excuse for a human being.