• @Amoeba_Girl@awful.systems · 23 hours ago

    To be honest, as someone who’s very interested in computer-generated text and poetry and the like, I find generic LLMs far less interesting than more traditional Markov chains, because they’re too good at reproducing clichés, to the exclusion of anything surprising or whimsical. So I don’t think they’re very good for the unfactual either. A homegrown neural network would probably give better results.
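
    (For the curious: the kind of Markov chain generator being discussed fits in a few lines. This is a minimal sketch in Python, not anyone’s actual bot; the order-2 prefix and the uniform sampling over observed successors are assumptions.)

        import random
        from collections import defaultdict

        def build_chain(text, order=2):
            # Map each `order`-word prefix to every word observed after it.
            words = text.split()
            chain = defaultdict(list)
            for i in range(len(words) - order):
                chain[tuple(words[i:i + order])].append(words[i + order])
            return chain

        def generate(chain, order=2, length=30):
            # Start from a random prefix and sample successors uniformly;
            # the uniform choice is what keeps the output loose and whimsical.
            out = list(random.choice(list(chain)))
            for _ in range(length - order):
                successors = chain.get(tuple(out[-order:]))
                if not successors:  # dead end: this prefix was never continued
                    break
                out.append(random.choice(successors))
            return " ".join(out)

    A low order (1 or 2) keeps the output surprising; raise it and the chain starts parroting its corpus verbatim, which is roughly the cliché problem in miniature.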

    • bitwolf · 22 hours ago

      Agreed; our chat server ran a Markov chain bot for fun.

      Compared to ChatGPT on a second server I frequent, it had much funnier, more random responses.

      ChatGPT tends to just agree with whatever it chooses to respond to.

      As for real-world use: ChatGPT produces the wrong answer 90% of the time. I’ve enjoyed Circuit AI, however. While it also produces incorrect responses, it shares its sources, so I can more easily get to the right answer.

      All I really want from a chatbot is a gremlin that finds the hard-to-Google things on my behalf.

    • David Gerard (OP) · 22 hours ago

      GPT-2 was peak LLM because it was bad enough to be interesting; it was all downhill from there.

      • @Amoeba_Girl@awful.systems · 22 hours ago

        Absolutely. Every single one of these tools has got less interesting as they refine it so it can only output the platonic ideal of kitsch.