• @Xerxos
    16 days ago

    Well, LLMs have gotten a lot better since the first release. I would guess that the main problem with this AI (probably not one of the bleeding-edge LLMs, judging from the timeline) is that they have piss-poor mics - even humans have problems getting your order right.

    • They have become a lot more convincing, not a lot better.

      They’re still misinformation amplifiers with a feedback loop. There’s more misinformation on most topics out there (whether intentional, via simplification, or accidental) than there is information. LLMs, which have no model of reality and thus cannot really assess the credibility of sources, just hoover it all up, mix it together, and return it to you.

      You (the generic you, not you specifically … necessarily) then take the LLM’s hallucinated garbage (whose hallucinations are getting increasingly subtle) and post it. Which the LLMs hoover up in the next round of model updates and …