ChatGPT has meltdown and starts sending alarming messages to users::AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

  • @Coreidan@lemmy.world · 59 points · 10 months ago

    We call just about anything “AI” these days. There is nothing intelligent about large language models. They are terrible at being right because their only job is to predict what text comes next.
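    To make “predict what comes next” concrete, here is a toy next-token predictor built from bigram counts. This only illustrates the idea of next-token prediction; real LLMs use neural networks trained on vastly more data, and the corpus here is invented.

```python
from collections import defaultdict, Counter

# Toy bigram "language model": predicts the next word purely from
# counts of which word followed which in the training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent continuation seen in training.
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" (it followed "the" twice, others once)
```

    Note that the model has no notion of truth; it only knows which word was most common after another, which is the point being made above.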

    • @EnderMB@lemmy.world · 19 points · 10 months ago

      (Disclosure: I work on LLMs.)

      While you’re not wrong, how is this different to many existing techniques and compositional models that are used practically everywhere in tech?

      Similarly, it’s probably safe to assume that the LLM’s prediction isn’t the only system in use. There will be lots of auxiliary services giving an orchestrator information to reason with. In this instance, if you have a system that is trying to figure out what to say next, with several knowledge stores and feedback services telling you “you were just discussing this” or “you can access the weather from here”, is that all that different from “intelligence”?

      At a given point, it’s arguing semantics. Are any AI techniques true intelligence? Probably not, but then again, we don’t really know what true intelligence is.

      • @Coreidan@lemmy.world · 5 points · 10 months ago

        how is this different to many existing techniques and compositional models that are used practically everywhere in tech?

        It’s not. An LLM is just a statistical model. Nothing special about it, and nothing different from what we’ve already been doing for a while. This only validates my statement that we call just about anything “AI” these days.

        We don’t even know what true intelligence is, yet we are quick to call this “AI”. There is no consciousness here. There is no self-awareness. No emotion. No ability to reason or deduce. Anyone who thinks otherwise is just fooling themselves.

        It’s a buzz word to get people riled up. It’s completely disingenuous.

        • @sailingbythelee@lemmy.world · 8 points · 10 months ago

          I think the point of the Turing test is to avoid thorny questions about the definition of intelligence. We can’t precisely define intelligence, but we know that normally functioning humans are intelligent. Therefore, if we talk to a computer and it is indistinguishable from a human in conversation, then it is intelligent by definition.

        • @EnderMB@lemmy.world · 4 points · 10 months ago

          So, by your definition, no AI is AI, and we don’t know what AI is, since we don’t know what the I is?

          While I hate that AI is just a buzzword for scam artists and tech influencers nowadays, dismissing a term seems a bit overkill. It also seems overkill when it’s not something that academics/scholars seem particularly bothered by.

        • @QuaternionsRock@lemmy.world · 3 points · 10 months ago

          There is no consciousness here. There is no self-awareness. No emotion. No ability to reason or deduce.

          Of all of these qualities, only the last one, the ability to reason or deduce, is a widely accepted prerequisite for intelligence.

          I would also argue that contemporary LLMs demonstrate the ability to reason by correctly deriving mathematical proofs that do not appear in the training datasets. How would you be able to accomplish such a feat without some degree of reasoning?

      • @fidodo@lemmy.world · 3 points · 10 months ago

        The worrisome thing is that LLMs are being given control over more and more actions. With traditional programming, sure, there are bugs, but at least they’re consistent. The context may make a bug hard to track down, but at the end of the day, the code is executed by the processor exactly as it was written. LLMs can go haywire for reasons that are impossible to diagnose. Deploying them safely in utilities where they control external systems will require a lot of extra non-LLM safeguards, and I don’t see those being added nearly enough, which is concerning.
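        As a sketch of the kind of non-LLM safeguard meant here: plain deterministic code that checks a model’s proposed action against a fixed allowlist before anything runs. The action names are invented for illustration.

```python
# Illustrative guardrail: the LLM only *proposes* an action; simple,
# deterministic code decides whether it is allowed to execute.
ALLOWED_ACTIONS = {"read_temperature", "log_status"}

def execute_safely(proposed_action: str) -> str:
    if proposed_action not in ALLOWED_ACTIONS:
        # Fail closed: anything unexpected is rejected, not attempted.
        return f"rejected: {proposed_action!r} is not on the allowlist"
    return f"executed: {proposed_action}"

print(execute_safely("read_temperature"))  # executed: read_temperature
print(execute_safely("open_valve"))        # rejected: 'open_valve' ...
```

        The design choice is that the guardrail never trusts the model: even if the LLM goes haywire, the worst it can do is pick from a pre-approved menu.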

    • @platypus_plumba@lemmy.world · 11 points · 10 months ago

      What is intelligence?

      Even if we don’t know what it is with certainty, it’s valid to say that something isn’t intelligence. For example, a rock isn’t intelligent. I think everyone would agree with that.

      Despite that, LLMs are starting to blur the lines, making us wonder whether what matters about intelligence is really the process or the result.

      An LLM will give you much better results in many areas that are currently used to evaluate human intelligence.

      For me, humans are a black box. I give them inputs and they give me outputs. They receive inputs from reality and they generate outputs. I’m not aware of the “intelligent” process of other humans. How can I tell they are intelligent if the only perception I have are their inputs and outputs? Maybe all we care about are the outputs and not the process.

      If there was a LLM capable of simulating a close friend of yours perfectly, would you say the LLM is not intelligent? Would it matter?

        • @platypus_plumba@lemmy.world · 8 points · 10 months ago

          Things we know so far:

          • Humans can train LLMs with new data, which means they can acquire knowledge.

          • LLMs have been proven to apply knowledge; they are acing exams that most humans wouldn’t dream of even understanding.

          • We know multi-modal is possible, which means these models can acquire skills.

          • We already saw that these skills can be applied. If it wasn’t possible to apply their outputs, we wouldn’t use them.

          • We have seen models learn and generate strategies that humans didn’t even conceive. We’ve seen them solve problems that were unsolvable to human intelligence.

          … What’s missing from that definition of intelligence? The only thing missing is our willingness to create a system that can train and update itself, which is possible.

          • @Coreidan@lemmy.world · -2 points · 10 months ago

            Can an LLM learn to build a house and then actually do it?

            LLMs are proven to be wrong about a lot of things. So I would argue these aren’t “skills” and they aren’t capable of acting on those “skills” effectively.

            At least with human intelligence you can be wrong and quickly realize that you are wrong. LLMs have no clue whether they are right or not.

            There is a big difference between actual skill and just a predictive model based on statistics.

            • @platypus_plumba@lemmy.world · 8 points · 10 months ago

              Is an octopus intelligent? Can an octopus build an airplane?

              Why do you expect these models to have human skills if they are not humans?

              How can they build a house if they don’t even have vision or a physical body? Can a paralyzed human who can only hear and speak build a house? Is that human intelligent?

              This is clearly not human intelligence, it clearly lacks human skills. Does it mean it isn’t intelligent and it has no skills?

              • @Coreidan@lemmy.world · -3 points · 10 months ago

                Exactly. They are just “models”. There is nothing intelligent about them.

                Yes, octopuses are very intelligent. They can think themselves out of a box without relying on curated data to train them.

                Logic, reasoning, and deduction. LLMs have zero ability to reject data based on their understanding of reality. Big diff.

                • @platypus_plumba@lemmy.world · 5 points · 10 months ago

                  You keep asking questions like “can a model build a house” but ignoring questions like “can an octopus build a house”. Then you ask “can a model learn in seconds how to escape from a complex enclosure” while ignoring “can a newborn human baby do that?”

                  Can an octopus write a poem? Can a baby write an essay? Can an adult human speak every human language, including fictional languages?

                  Just because it isn’t as intelligent as a human doesn’t mean this isn’t some type of intelligence.

                  Go and check what we call AI in videogames. Do you think that’s a simulated human? Go see what we’ve been calling AI in chess. Is that a simulated human being playing chess? No.

                  We’ve been calling things waaaaaay dumber than GPTs “artificial intelligence” for decades, even in academia. Suddenly a group of people decided “artificial intelligence must be equal to human intelligence”. Nope.

                  Intelligence doesn’t need to be the same type as human intelligence.

                • @QuaternionsRock@lemmy.world · 2 points · 10 months ago

                  LLMs have zero ability to reject data based on their understanding of reality.

                  Me:

                  I just found a proof that 2+2=5!

                  ChatGPT:

                  The statement “2 + 2 = 5” is a classic example of a mathematical fallacy or incorrect reasoning, often used for humorous or illustrative purposes. In standard arithmetic, of course, 2 + 2 = 4. Proofs claiming to show otherwise rely on some form of invalid manipulation of algebraic properties or deliberate arithmetic mistakes.

                  A common “proof” of 2 + 2 = 5 might go something like this:

                  1. Start with an incorrect assumption or manipulate the algebraic steps in a way that hides a division by zero or another illegal operation.
                  2. Through these manipulations, arrive at the conclusion that 2 + 2 = 5.

                  However, each step of a valid mathematical proof must adhere strictly to the rules and properties of mathematics. Any proof leading to 2 + 2 = 5 will contain an error or a misapplication of those rules.

                  If you have a specific “proof” in mind, I can help identify the error in it.
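
                  For reference, the kind of fallacious “proof” described above typically hides a division by zero, as in the classic 1 = 2 version:

                  1. Let a = b.
                  2. Then a² = ab.
                  3. Subtract b²: a² − b² = ab − b².
                  4. Factor both sides: (a + b)(a − b) = b(a − b).
                  5. Divide by (a − b): a + b = b. (Invalid: a − b = 0.)
                  6. Since a = b, this gives 2b = b, i.e. 2 = 1.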

    • @lanolinoil@lemmy.world · 6 points · 10 months ago

      If you look at efficacy on academic tests, though, or at asking factual questions, and you compare an LLM to a random person rather than holding it to the always-“right” standard we expect of computers and calculators, would LLMs be comparable or better? Surely someone has data on that.

      E: It looks like in certain domains at least LLMs beat out human counterparts. https://stanfordmimi.github.io/clin-summ/
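
      Benchmarks like the one linked boil down to scoring model answers against a reference set. A minimal sketch of that accuracy computation, with invented data:

```python
# Toy accuracy comparison: fraction of questions answered correctly.
# Both answer sets below are made up purely to illustrate the computation.
reference = {"capital of France": "paris", "2+2": "4", "boiling point (C)": "100"}
model_answers = {"capital of France": "Paris", "2+2": "4", "boiling point (C)": "90"}

def accuracy(preds: dict, gold: dict) -> float:
    # Normalize case/whitespace before comparing, then average.
    correct = sum(preds[q].strip().lower() == a for q, a in gold.items())
    return correct / len(gold)

print(accuracy(model_answers, reference))  # 2 of 3 correct -> 0.666...
```

      Comparing that score for a model against the same score for human annotators is essentially what the linked study does, just with domain-specific rubrics instead of exact string matches.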

    • @shaman1093@lemmy.ml · 2 points · 10 months ago

      The person who commented below kinda has a point. While I agree that there’s nothing special about LLMs, an argument can be made that consciousness (or maybe more ego?) is itself an emergent mechanism that works to keep itself in predictable patterns to perpetuate survival.

      Point being that being able to predict outcomes is a cornerstone of current intelligence (socially, emotionally and scientifically speaking).

      If you were to say that LLMs are unintelligent because they operate to provide the most likely, and therefore most predictable, outcome, then I’d agree completely.

      • Liz · 2 points · 10 months ago

        The ability to make predictions is not sufficient for evidence of consciousness. Practically anything that’s alive can do that to one degree or another.