• Paid_in_cheese · 5 points · 22 hours ago

    There’s a kind of scam that some in the AI industry have foisted on others. The scam is “this is so good, it’s going to destroy us. Therefore, we need regulation to prevent Roko’s Basilisk or Skynet.” LLMs have not gotten better to any significant degree. We have the illusion that they have because LLM companies keep adding shims that, for example, use Python libraries to correctly solve math problems or use an actual search engine on your behalf.
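
    To illustrate, the “shims” in question are basically tool-calling routers. Here’s a minimal sketch of the idea in Python; route_query, solve_math, and the routing heuristics are all hypothetical names made up for this example, and real products are far more elaborate:

    ```python
    import re

    def solve_math(expression: str) -> str:
        # Hand arithmetic to Python so the answer is computed, not predicted.
        # (Toy example: eval is restricted here, but don't do this in production.)
        return str(eval(expression, {"__builtins__": {}}))

    def route_query(query: str, llm, search_engine) -> str:
        # Crude router: arithmetic goes to Python, lookups go to a real
        # search engine, and only the leftovers ever reach the LLM itself.
        if re.fullmatch(r"[\d\s+\-*/().]+", query):
            return solve_math(query)
        if query.lower().startswith(("who ", "what ", "when ", "where ")):
            return search_engine(query)
        return llm(query)

    print(route_query("12 * (3 + 4)",
                      llm=lambda q: "llm guess",
                      search_engine=lambda q: "search hit"))
    # -> 84, computed by Python, not by the model
    ```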

    LLM development of new abilities has stagnated. Instead, the innovation seems to be in making models that don’t require absurd power draws to run (DeepSeek being a very notable, very recent example).

    I watched this video all the way through hoping they would turn things around, but it’s just the same fluff for a new audience.

      • Sergio@slrpnk.net · 3 points · 20 hours ago

        We have had AI for about 75 years. What we don’t have is AGI.

        I can’t believe people are downvoting this statement. You can get textbooks and journals titled “Artificial Intelligence”, accredited universities teach the subject, and researchers meet at conferences to discuss the latest research, but apparently that isn’t real because… other people use the term differently?

        I dislike OpenAI and LLMs as much as anyone else, but we can still be clear about our terminology.

      • We can’t even agree on a definition for “intelligence”, so it’s pretty obvious we haven’t got an artificial version of it yet.

        Can’t make what you can’t even define, after all. “Artificial intelligence” is about as meaningful a term as “artificial geflugelschnitz”.

        • drspod@lemmy.ml · +3/−3 · 24 hours ago

          “Artificial Intelligence” refers to a sub-discipline of computer science, not an anthropological or neurological study of human capability, and it has been well-defined since the 1960s-70s.

          • Ah. So your argument is “we have defined ‘intelligence’ in a way that is literally not accepted by anybody but us, therefore we have made an artificial version of it”.

            Anselm’s ontological proof of the existence of AI.

            Bravo.

            You’ve managed to recreate one of the most famous 11th-century tautologies.

          • technocrit@lemmy.dbzer0.com · +3/−2 · edited · 23 hours ago

            If it’s about computer science, then use terms from computer science instead of misleading and dishonest terms from biology.

            “Data processing” is fine.

            • From biology. Or psychology. Or neurology. Or philosophy, even.

              It’s pretty clear from their writing that the original AI researchers thought they were on the path to the “intelligence” talked about in these other disciplines, and that the field only became the narrowly defined sub-discipline mentioned above years after their abject failure to capture what anybody else would call intelligence.

              And now the term “artificial intelligence” is essentially just a marketing term, with as much meaning as any other random pair of words used for marketing purposes.

      • MotoAsh@lemmy.world · 1 point · 16 hours ago

        Nah, biology has a ton of systems that all interconnect. Pain feedback itself is a tiny fraction of what makes a real brain tick, and “AI” doesn’t have a fraction of an equivalent of even one solitary system.

        No, brains are far, far more than statistical inference. Not that they cannot be reproduced, but they are far, far more than math machines.

        • BussyGyatt@feddit.org · 1 point · 16 hours ago

          negative feedback reinforcement systems are one of the key features of machine learning algorithms (see the sketch below).

          “they are far, far more than math machines.”

          can you be more specific?
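
          to make the feedback point concrete, here’s a minimal sketch: plain gradient descent on a squared error, where the error signal is fed back to correct the parameter (all numbers here are illustrative, not from any real training run):

          ```python
          # Gradient descent as a negative-feedback loop: the error between
          # prediction and target is fed back to nudge the parameter.
          target = 3.0
          w = 0.0   # model parameter, starts wrong
          lr = 0.1  # learning rate

          for step in range(50):
              error = w - target       # feedback signal
              w -= lr * 2 * error      # gradient of (w - target)**2

          print(round(w, 3))  # converges to ~3.0
          ```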

    • Ummdustry@sh.itjust.works · +4/−6 · 1 day ago

      I’m not sure why that’s a relevant distinction to make here. A statistical model is just as capable of designing (for instance) an atom bomb as a human mind is. If anything, I would much rather the machines destined to supplant me actually could think and have internal worlds of their own; that would be far less depressing.

      • It’s relevant to whether these systems are actually capable of becoming smarter. The way these models are set up at the moment puts a mathematical upper limit on what they can achieve. We don’t know exactly where that limit lies, but we do know that each step of improvement takes significantly more effort and data than the last (see the toy numbers below).

        Without some kind of breakthrough w.r.t. how we model these things (so something other than LLMs), we’re not going to see AI intelligence skyrocket.
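
        To put toy numbers on the diminishing returns, here’s a Chinchilla-style power-law loss curve. The constants are made up for illustration, not fitted to any real model:

        ```python
        # Power-law scaling: loss falls toward a floor E as data grows,
        # so each 10x more data buys a smaller absolute improvement.
        # Constants are illustrative, not fitted values.
        E, B, beta = 1.7, 400.0, 0.28

        def loss(tokens: float) -> float:
            return E + B / tokens ** beta

        for tokens in [1e9, 1e10, 1e11, 1e12]:
            print(f"{tokens:.0e} tokens -> loss {loss(tokens):.3f}")
        # Successive drops: ~0.57, ~0.30, ~0.16 (shrinking every decade of data).
        ```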

      • technocrit@lemmy.dbzer0.com · +3/−3 · edited · 23 hours ago

        “A statistical model is just as capable of designing (for instance) an atom bomb as a human mind is.”

        No. A statistical model is designed by a human mind. It doesn’t design anything on its own.

      • Sneezycat@sopuli.xyz · +1/−1 · 1 day ago

        If it got smarter, it could tell you, step by step, how an AI would take control of the world, but it wouldn’t have the consciousness to actually do it.

        Humans are the dangerous part of the equation in this case.

  • MBM · 2 points · 1 day ago

    “rational animations”

    Figures