• Possibly linux
    20
    8 months ago

    Yep, we are screwed. They have successfully created something with more potential for harm than the iPad.

    I also find it crazy that it is addictive. I’m not against using an AI for mental health support, but you should use a local AI instead.

    Just imagine a world where all the people talk like LLMs

    • @Evil_incarnate@lemm.ee
      7
      8 months ago

      Indeed, the potential consequences are concerning. Addiction to AI-driven technologies is a real issue, especially when they become ubiquitous. Utilizing local AI for mental health support could indeed mitigate some risks. However, we must proceed cautiously to navigate the complexities of integrating AI into our daily lives. And yes, envisioning a world where everyone converses like language models is both fascinating and slightly eerie.

    • @chakan2@lemmy.world
      4
      8 months ago

      Just imagine a world where all the people talk like LLMs

      Have you talked to real people lately? I think I’m leaning towards the dystopian version of humanity.

  • @tal@lemmy.today
    15
    8 months ago

    I kind of wonder what the character.ai privacy policy is for all these conversations.

    I would imagine that many people would not want a full log of their conversations with their psychologist and/or friend to leak, or to have information extracted from it for arbitrary purposes.

  • @LemoineFairclough@sh.itjust.works
    1
    8 months ago

    The end of the article does try to take a hopeful tone:

    “I definitely prefer talking with people in real life, though,” he added.

    I don’t necessarily agree with everything it says, though:

    While some of the culture around Character.AI is concerning, it also mimics the internet activity of previous generations who, for the most part, have turned out just fine.