• @Ottomateeverything@lemmy.world
    0
    edit-2
    9 months ago

    You’re just rephrasing the same approach, over, and over, and over. It’s like you’re not even reading what I’m saying.

    The answer is no. This is not a feasible approach. LLMs are just parrots and they don’t understand anything. They were essentially a “shortcut” that gets something that acts intelligent without actually having to build something intelligent. You’re not going to convince it to be intelligent. You’re not going to solve all its shortcomings by shoehorning something in. It’s just more work than building actual intelligence.

    It’s like if a coastal town got overrun by flooding from a hurricane. And some guy shows up and is like “hey, I’ve got a bucket, I’ll just haul all the water back to the sea”. And I’m like “that’s infeasible, we need a different solution, your bucket even has fucking holes in it”. And you’re over here saying “well, what if we got some duct tape? And then we can patch the holes. And then we can call our friends, and we can all bucket the water”.

    It’s just not happening.

    Eh I really need to learn more about AI to understand the limits

    Yeah. This. You just keep repeating the same approach over and over without understanding or listening to the basic failings of these chat bots. It’s just not happening. You’re just perpetuating nonsense.

    These things are basically slightly more complicated versions of the auto complete in your phone keyboard. Except that they’re fed huge amounts of the internet. They get really good at parroting sentences, but they have no sense of “intelligence” or what they’re actually doing. You’re better off trying to convince your auto correct to sound like Shakespeare than you are to remove failings like racial bias from things like Gemini and ChatGPT. You can chip at small corners here and there, but this is just not the path forward.
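[Editor’s note: the “autocomplete” claim above can be made concrete with a toy next-word predictor. This is a minimal sketch, not how an LLM is actually built; the corpus and the `complete` helper are invented for illustration. It counts which word follows which in some text it has seen, then “generates” by always emitting the most frequent successor — pure parroting of statistics, with no understanding.]

```python
from collections import defaultdict

# Toy "phone keyboard autocomplete": count word -> next-word frequencies
# in a tiny corpus, then generate by picking the most common successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def complete(word, length=4):
    out = [word]
    for _ in range(length):
        options = successors.get(out[-1])
        if not options:
            break  # never seen this word followed by anything
        # parrot the most frequently observed next word
        out.append(max(options, key=options.get))
    return " ".join(out)

print(complete("the"))  # → "the cat sat on the"
```

An LLM differs in scale (billions of parameters, context windows instead of a single previous word), but the training objective is the same shape: predict the next token from observed frequencies in the data.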

    • @Flumpkin@slrpnk.net
      1
      9 months ago

      You’re just rephrasing the same approach, over, and over, and over. It’s like you’re not even reading what I’m saying.

      No, I read what you are saying. I just think that you are something that “acts intelligent without actually being intelligent”. Here is why: all that you’ve written is based on very simple, primitive brain cells and synapses and synaptic connections. It’s self-evident that this is not really something that is designed to be intelligent. You’re just “really good at parroting sentences”. And you clearly agree that I’m doing the same 😄

      Clearly LLMs are not intelligent and don’t understand, and it would take many other systems to make them so. But what they do show is that the “creative spark”, however mediocre in quality, can be created with a critical mass of quantity. It’s like one small part of our mind, the “creative writing center”, without intelligence. But it’s there, just because we added more data and processing.

      Quality through quantity, that is what we seem to be and what is so shocking. And it’s obvious that there is a kind of disgust or bias against such a notion. A kind of embarrassment of the brain to just be thinking meat.

      Now you might be absolutely right that my specific suggestion for an approach is bullshit, I don’t know enough about it. But I am pretty sure we’ll get there without understanding exactly how it works.