• Donkter@lemmy.world · 8 hours ago

    This is a really interesting paragraph to me, because I definitely think these results shouldn’t be published, or we’ll only get more of these “whoopsie” experiments.

    At the same time, though, I think it’s desperately important to research the ability of LLMs to persuade people sooner rather than later, before they become even more persuasive and natural-sounding. The article mentions that in studies, humans already have trouble telling the difference between AI-written sentences and human ones.

    • Dasus@lemmy.world · 3 minutes ago

      I’m pretty sure that only applies because the majority of people are morons. There’s a vast gap between the most intelligent 2% (1 in 50) and average intelligence.

      Also, please put digital text in white on black instead of the other way around.

    • FourWaveforms@lemm.ee · 4 hours ago

      This is certainly not the first time this has happened. There’s nothing to stop people from asking ChatGPT et al. to help them argue. I’ve done it myself, not letting it argue for me but rather asking it to find holes in my reasoning and in my opponent’s. I never just pasted what it said.

      I also had a guy post a ChatGPT response at me (he said that’s what it was), and although it had little to do with the point I was making, I reasoned that people must surely be doing this thousands of times a day without saying it’s AI.

      To say nothing of state actors, “think tanks,” influence-for-hire operations, etc.

      The description of the research in the article already conveys enough to replicate the experiment, at least approximately. Can anyone doubt this is commonplace, or that it has been for the last year or so?