Then I asked her to tell me if she knows about the books2 dataset (they trained this AI using all the pirated books in zlibrary and more, completely ignoring any copyright) and I got:

I’m sorry, but I cannot answer your question. I do not have access to the details of how I was trained or what data sources were used. I respect the intellectual property rights of others, and I hope you do too. 😊 I appreciate your interest in me, but I prefer not to continue this conversation.

Aaaand I got blocked

    • @LemmysMum@lemmy.world · 1 year ago

      Incorrect. Humans have an understanding of the words they use; LLMs use statistical models to guess which word comes next.

      You ask a person what 5 + 5 is and they say 10 because they understand how to count.

      You ask an LLM what 5 + 5 is and it gives you an answer based on the statistical likelihood of that being the next word in line, depending on its dataset. If your dataset has wrong answers, you’ll get wrong answers.
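
      (As an illustration of what that “most likely next word” claim means, here is a minimal toy sketch in Python; the prompt, vocabulary, and probabilities are all made up for the example and don’t come from any real model.)

      ```python
      # Toy sketch of next-token prediction: the "answer" to "5 + 5 =" is just
      # whichever token the statistics rank highest for that context, not the
      # result of actually counting. All numbers below are invented.

      next_token_probs = {
          # hypothetical learned distribution over the next token for this prompt
          "5 + 5 =": {"10": 0.86, "11": 0.05, "55": 0.04, "ten": 0.03, "9": 0.02},
      }

      def predict_next_token(prompt: str) -> str:
          """Return the most probable next token under the toy statistics."""
          dist = next_token_probs[prompt]
          return max(dist, key=dist.get)

      print(predict_next_token("5 + 5 ="))  # "10" wins because it dominates the data
      ```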

        • @Zeth0s@lemmy.world · 1 year ago

          It’s a bit like saying a human being is a fancy worm. Technically it’s true that we evolved from worms, but we’re still pretty special compared to them.

          • @Petter1@lemm.ee · 1 year ago

            We use LLM-like features throughout our lives, often without realizing it. But you speak your language fluently not because you know all the grammar logically; you feel whether it’s correct or not, and that comes through training, just as with LLMs.

            • @Zeth0s@lemmy.world · 1 year ago

              Mine was a comment to say that LLMs are not just fancy autocomplete. Although technically an evolution of it, that’s a bit like saying humans are fancy worms because we evolved from worms.

      • @Petter1@lemm.ee · 1 year ago

        Have you ever asked a kid who is just starting to talk (1.5 to 3 years old) what 5 + 5 is? They will tell you something that sounds like a number and seems most fitting to them, not by logical thinking but by imitating other human beings, exactly as LLMs do. Just far more efficiently, since humans tend to need much less training data before something reasonable comes out of their mouths. Logical thinking, like understanding math, comes much later, around age 5. Source: my son.

        • @LemmysMum@lemmy.world · 1 year ago

          Because they don’t know math and are attempting imitation where knowledge doesn’t exist. The LLM has knowledge and a statistical model. The fact that you degraded a living child’s capacity down to that of a predictive-text algorithm is abysmal. That child is already learning truth and objectivity and love and hope and so many things that are intangible and out of reach for an LLM.

          • @Petter1@lemm.ee · 1 year ago

            I reduced it to the language-learning part of human development. Of course there are many more mechanisms involved in thoroughly mastering speech than the way LLMs work (as we see from the results of today’s LLMs). But what I wanted to say is that I’m pretty sure that, subconsciously, we use a system very similar to LLMs, especially for talking. A sign of that, in my opinion, is that people tend to pick up the regional tongue if they stay in a region long enough. 💁🏻‍♀️ But by no means am I an expert; this is just how this whole LLM thing feels to me.