• @chonglibloodsport@lemmy.world
    English
    41
    5 months ago

    Everything these AIs output is a hallucination. Imagine if you were locked in a sensory deprivation tank, completely cut off from the outside world, and only had your brain fed the text of all books and internet sites. You would hallucinate everything about them too. You would have no idea what was real and what wasn’t because you’d lack any epistemic tools for confirming your knowledge.

    That’s the biggest reason why AIs will always be bullshitters as long as they’re disembodied software programs running on a server. At best they can be a brain in a vat, which is a pure hallucination machine.

    • @Voroxpete@sh.itjust.works
      English
      10
      5 months ago

      Yeah, I try to make this point as often as I can. The notion that AI “hallucinates” only when it gives wrong answers really misleads people about how these programs actually work. It frames the problem in terms of human failings rather than getting at the underlying flaw in the whole concept.

      LLMs are a really interesting area of research, but they never should have made it out of the lab. The fact that they did is purely because all science operates in the service of profit now. Imagine if OpenAI were able to rely on government funding instead of having to find a product to sell.

    • @Excrubulent@slrpnk.net
      English
      9
      edit-2
      5 months ago

      First of all, I agree with your point that it is all hallucination.

      However, I think a brain in a vat could confirm information about the world with direct sensors like cameras and access to real-time data, as well as the ability to talk to people and determine things like who was trustworthy. In reality we are brains in vats; we just have a fairly common interface that makes consensus reality possible.

      The thing that really stops LLMs from being able to make judgements about what is true and what is not is that they cannot make any judgements whatsoever. Judging what is true is a deeply contextual and meaning-rich question. LLMs cannot understand context.

      I think the moment an AI can understand context is the moment it begins to gain true sentience, because a capacity for understanding context is definitionally unbounded. Context means searching beyond the current information for further information. I think this context barrier is fundamental, and we won’t get truth-judging machines until we get actually-thinking machines.