ThisIsFine.gif

  • nesc@lemmy.cafe
    link
    fedilink
    English
    arrow-up
    121
    ·
    edit-2
    9 months ago

    "Open"ai tells fairy tales about their “ai” being so smart it’s dangerous since inception. Nothing to see here.

    In this case it looks like click-bate from news site.

    • Max-P@lemmy.max-p.me
      link
      fedilink
      arrow-up
      76
      ·
      9 months ago

      The idea that GPT has a mind and wants to self-preserve is insane. It’s still just text prediction, and all the literature it’s trained on is written by humans with a sense of self preservation, of course it’ll show patterns of talking about self preservation.

      It has no idea what self preservation is, even then it only knows it’s an AI because we told it it is. It doesn’t even run continuously anyway, it literally shuts down after every reply and its context fed back in for the next query.

      I’m tired of this particular kind of AI clickbait, it needlessly scares people.

    • Moonrise2473@feddit.it
      link
      fedilink
      arrow-up
      6
      ·
      edit-2
      9 months ago

      news site? BGR hasn’t posted actual news in at least two decades, only clickbait and apple fanservice

    • jarfil@beehaw.org
      link
      fedilink
      arrow-up
      1
      ·
      9 months ago

      This is from mid-2023:

      https://en.m.wikipedia.org/wiki/AutoGPT

      OpenAI started testing it by late 2023 as project “Q*”.

      Gemini partially incorporated it in early 2024.

      OpenAI incorporated a broader version in mid 2024.

      The paper in the article was released in late 2024.

      It’s 2025 now.

      • nesc@lemmy.cafe
        link
        fedilink
        English
        arrow-up
        1
        ·
        9 months ago

        Tool calling is cool funcrionality, agreed. How does it relate to openai blowing its own sails?

        • jarfil@beehaw.org
          link
          fedilink
          arrow-up
          1
          ·
          9 months ago

          There are several separate issues that add up together:

          • A background “chain of thoughts” where a system (“AI”) uses an LLM to re-evaluate and plan its responses and interactions by taking into account updated data (aka: self-awareness)
          • Ability to call external helper tools that allow it to interact with, and control other systems
          • Training corpus that includes:
            • How to program an LLM, and the system itself
            • Solutions to programming problems
            • How to use the same helper tools to copy and deploy the system or parts of it to other machines
            • How operators (humans) lie to each other

          Once you have a system (“AI”) with that knowledge and capabilities… shit is bound to happen.

          When you add developers using the AI itself to help in developing the AI itself… expect shit squared.

      • nesc@lemmy.cafe
        link
        fedilink
        English
        arrow-up
        2
        ·
        9 months ago

        It works as expected, they give it system prompt that conflicts with subsequent prompts. Everything else looks like typical llm behaviour, as in gaslightning and doubling down. At least that’s what Iu see in tweets.