OC below by @HaraldvonBlauzahn@feddit.org

What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

Some people firmly believe LLMs are helpful. But programming is a logical task, and LLMs can’t think - they only generate statistically plausible patterns.

The author of the article explains that this creates the same psychological hazards as astrology or tarot cards - psychological traps that psychics have exploited for centuries, and that even very intelligent people can fall prey to.

Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models can actually create working software faster. Given that there are multi-billion-dollar investments at stake, and that there has been more than enough time to carry out controlled experiments, this should raise loud alarm bells.

  • WhirlpoolBrewer · 6 points · 2 days ago

    I don’t think the current common implementations of AI systems are “thinking”, and I’ll base my argument on Oxford’s definitions of words. Thinking is defined as “the process of using one’s mind to consider or reason about something”. I’ll ignore the word “mind” and focus on the word “reason”. I don’t think what AIs are doing counts as reasoning as defined by Oxford. Let’s go to that definition: “the power of the mind to think, understand, and form judgments by a process of logic”. I take issue with the assertion that they form judgments. For completeness, though I don’t think its definition is particularly relevant here, a judgment is “the ability to make considered decisions or come to sensible conclusions”.

    I think when you ask an LLM how many ‘r’s there are in Strawberry and questions along these lines, you can see they can’t form judgments. These basic but obscure questions are where you see that the ability to form judgments isn’t there. I would also add that if you “form judgments”, you probably don’t need to be reminded that you formed a judgment immediately after forming one. If I ask an LLM a question and it provides an answer, I can convince it that it was wrong whether I’m making junk up or not. I can tell it it made a mistake and it will blindly change its answer whether it made a mistake or not. That also doesn’t feel like it’s able to reason or make judgments.
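    For contrast, the question itself is trivial for ordinary code, which is part of what makes the failure notable - counting the letter takes one line and no statistics:

    ```python
    # Counting the 'r's directly; no statistics involved.
    print("strawberry".count("r"))  # prints 3
    ```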

    This is where all the hype falls flat for me. Sometimes it looks like a concrete wall, but occasionally that concrete wall turns out to be made of wet paper. You can see how impressive the tool is and how paper-thin it is at the same time. It’s cool, it’s useful, it’s fake, and that’s OK. Just be aware of what the tool is.

    • Kuinox@lemmy.world · 3 up / 3 down · 2 days ago

      I think when you ask an LLM how many ‘r’s there are in Strawberry and questions along these lines, you can see they can’t form judgments.

      Like an LLM, you are making a wrong assertion based on lacking knowledge.
      Current LLMs take tokens as input and produce tokens as output; they never see the individual letters. For “strawberry”, they see just three tokens, not ten letters (see the sketch below).

      They don’t have any information about which characters are in those tokens, so they come up with something. If you learned a language only by speaking it, you would be unable to write it down correctly (except in purely phonetic writing systems); instead you would come up with what you think the word should look like written down.
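      A minimal sketch of what that means, assuming the tiktoken package is installed (the exact split depends on the model’s tokenizer):

      ```python
      # What a model "sees" for the word "strawberry": a short list of
      # integer token IDs, not a sequence of letters.
      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models
      token_ids = enc.encode("strawberry")
      pieces = [enc.decode([t]) for t in token_ids]

      print(token_ids)  # integer IDs; this is all the model ever receives
      print(pieces)     # the substrings each ID covers; no individual letters
      ```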

      I would also add that if you “form judgments” you probably don’t need to be reminded you formed a judgment immediately after forming one.

      You come up with the judgment before you are aware of it: https://www.unsw.edu.au/newsroom/news/2019/03/our-brains-reveal-our-choices-before-were-even-aware-of-them--st

      I can tell it it made a mistake and it will blindly change its answer whether it made a mistake or not. That also doesn’t feel like it’s able to reason or make judgments.

      That’s also how the brain can work: it comes up with a plausible explanation after it already has the result.
      See the experiments discussed here: https://www.youtube.com/watch?v=wfYbgdo8e-8

      I have shown the same behavior in humans as some of the behavior you observed in LLMs. Does this mean that, by your definition, humans don’t think?

      • WhirlpoolBrewer · 8 points · 2 days ago

        If the LLM could reason, shouldn’t it be able to say “my token-based training prevents me from understanding the question as asked. I don’t know how many ‘r’s there are in Strawberry, and I don’t have a means of finding that answer”? Or at least something similar, right? If I asked you what some word meant in a language you didn’t know, you would be able to say “I don’t know that word or language”. You may be able to give me all sorts of reasons why you don’t know it, and that’s all fine. But you would be aware that you don’t know, and would be able to say “I don’t know”.

        If I understand you correctly, you’re saying the LLM gets it wrong because it doesn’t know or understand that words are built from letters, since all it knows are tokens. I’m saying that’s fine, but it should be able to reason that it doesn’t know the answer, and say that. I assert that it doesn’t know that it doesn’t know what letters are, because it is incapable of coming to that judgment about its own knowledge and limitations.

        Being able to say what you know and what you don’t know is critical to being able to solve logic problems. Knowing which missing information can be derived from known things, and which cannot, is key to problem solving based on reason. I still assert that LLMs cannot reason.

        • Kuinox@lemmy.world · 1 up / 2 down · edited · 1 day ago

          I’m saying that’s fine, but it should be able to reason that it doesn’t know the answer, and say that.

          That is of course a big problem. They try to guess too much, but that is also why they kind of work. Symbolic AI has the opposite problem: it is rarely useful because it can’t guess anything; it is rooted in hard logic and cannot come up with a reasonable guess.
          Humans also guess and sometimes get it wrong; guessing is required in order to produce results from our thinking instead of being stuck in a state where we don’t have enough data to do anything, like a symbolic AI.
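          A toy contrast between the two failure modes (an entirely made-up example, not any real system):

          ```python
          import random

          # A symbolic engine refuses to answer without a matching rule;
          # a statistical one always produces a plausible-looking guess.
          FACTS = {"capital of France": "Paris"}

          def symbolic(question: str) -> str:
              # Hard logic: only answers what follows from known facts.
              return FACTS.get(question, "no applicable rule; cannot answer")

          def statistical(question: str) -> str:
              # Always guesses, right or wrong.
              return random.choice(["Paris", "Madrid", "Berlin"])

          print(symbolic("capital of Spain"))     # refuses: nothing can be derived
          print(statistical("capital of Spain"))  # answers confidently; may be wrong
          ```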

          This is really a spectrum, and humans sit somewhere between LLMs and symbolic AI.
          LLMs are not completely unable to say what they do and don’t know; they are just extremely bad at it from our point of view.

          The problem with “does it think” is that the question doesn’t come with any quantity or quality to measure.

          • WhirlpoolBrewer · 1 point · 14 hours ago

            Is the argument that LLMs are thinking because they make guesses when they don’t know things, combined with there being no quantity or quality given to describe thinking?

            If so, I would suggest that the word “guessing” is doing a lot of heavy lifting here. The real question would be “is statistics guessing?” I would say guessing and statistics are not the same thing, and Oxford would agree. An LLM just grabs the token that, according to its training data, is statistically most likely to come next. I don’t think grabbing the most likely next token counts as guessing. That feels very algorithmic and statistical to me. It is also possible I’m missing the argument still.
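            A minimal sketch of that next-token step, with made-up numbers rather than a real model:

            ```python
            import math
            import random

            # Toy next-token choice (invented vocabulary and scores).
            vocab  = ["berry", "fields", "jam", "."]
            scores = [3.2, 1.1, 0.4, -0.7]  # raw scores a model might emit

            # Softmax turns the scores into a probability distribution.
            exps  = [math.exp(s) for s in scores]
            probs = [e / sum(exps) for e in exps]

            greedy  = vocab[probs.index(max(probs))]           # the single most likely token
            sampled = random.choices(vocab, weights=probs)[0]  # deployed systems usually sample

            print(greedy, sampled)
            ```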

            • Kuinox@lemmy.world · 2 points · 14 hours ago

              Is the argument that LLMs are thinking because they make guesses

              No, it’s that you can’t root the argument that they don’t think in the fact that they make stuff up, because humans do too. You could root it in how much they guess wrong, but that is extremely hard to measure.
              Again, I’m not claiming that they think, but that we don’t know until one or the other is proven.
              Right now, thinking that one or the other is true is belief.

              • WhirlpoolBrewer · 1 point · 14 hours ago

                I think you can make a strong argument that they don’t think, rooted in the idea that words should mean something and that “statistics” and “thinking” don’t mean the same thing. To me, that feels like a fairly valid argument.

                • Kuinox@lemmy.world · 1 point · edited · 13 hours ago

                  So you think you need words to be able to think? Are monkeys, birds, and human babies unable to think, then?

                  • WhirlpoolBrewer · 1 point · 13 hours ago

                    My apologies, I was too vague. I’m saying that “thinking”, by definition, is not “statistics”. Monkeys, birds, and human babies all “think”, whereas LLMs use algorithms and “statistics”. I also think that “statistics” not meaning the same thing as “thinking” is a valid argument. I would go further and say it’s important that words have meaning. That is what I was attempting to convey. I’m happy to clear up anything I was unclear about.