As a brand new user of ChatGPT, I have never been so incredibly impressed and so rage-inducingly frustrated at exactly the same time by any new tech I’ve ever tried.

I was using it to help create some simple JavaScript functions and debug some code. It could come up with working functions almost immediately, taking a really interesting approach I wouldn’t have thought of. “Boom,” I thought, “this is great! Let’s keep going!” Then, immediately afterwards, it would produce absolute shit that couldn’t and wouldn’t work at all. On multiple occasions it couldn’t remember the very code it had just output to me, and when asked to make a few minor changes it constantly spat out brand new, very different functions, usually omitting half the functionality it had before. But when I typed the code directly into a message myself, it did much better every time.

It seems that with every question like that I had to start from scratch, or else it would work from clearly wrong (not even close, usually) newly generated code. For example, if I asked it to print exactly the same function it had printed a moment ago, it would excitedly proclaim, “Of course! Here’s the exact same function!” and then print a completely different one.

I spent so much time carefully wording my question to get it to help me debug something that I ended up finding the bug myself, simply because I had been examining my code so carefully in order to ask a question that would get a relevant answer. So…I guess that’s a win? Lol. Then, just for fun, I told ChatGPT that I had found and corrected the bug, and it took responsibility for the fix.

And yet, when it does get it right, it’s really quite impressive.

  • CynAq
    14 · 1 year ago

    LLMs are “generate something that sounds like it would answer the prompt” machines. Nothing more and nothing less.

    Through that lens, they are a lot less impressive, a lot less frustrating and also a lot more fun.
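    That “generate something that sounds like it would answer” framing can be made concrete with a toy next-word model (a hypothetical mini-corpus, pure Python — nothing like a real LLM’s scale or architecture). It picks whichever word most often followed the previous one in its training text; it has no notion of whether the result is true, only of what is statistically plausible:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus: the model only ever sees word sequences,
# never facts about the world.
corpus = (
    "machu picchu is in peru . "
    "the eiffel tower is in paris . "
    "the great wall is in china ."
).split()

# Count which word follows which: a bigram model, the simplest
# possible "predict the next token" machine.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(word, steps=4):
    """Greedily extend the text with the most frequent next word."""
    out = [word]
    for _ in range(steps):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("is"))  # fluent-looking output chosen purely by frequency
```

    Real LLMs are vastly larger and condition on whole contexts rather than a single previous word, but the objective has the same shape: produce a continuation that looks like the training data.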

        • Saganastic
          4 · 1 year ago (edited)

          Humans also generate something that sounds like it would answer the prompt. If I ask you “What country is Machu Picchu in?”, you’ll ponder for a moment, and give me what you think the answer to the prompt is. You might answer Peru, or you might answer with something else that seems reasonable to you, like Argentina.

          Humans answer questions incorrectly all the time. And they also try to come up with a reasonable response to prompts when questioned.

          • CynAq
            -1 · 1 year ago

            Just because humans can do something doesn’t mean that thing is the only thing humans do.

            Humans have many models of the world running in different modes in parallel, enabling us to make sense of things beyond just processing language and coming up with plausible-sounding answers within the rules of a given language.

            Our understanding of concepts is separate from how we process language. This is demonstrated by perfectly intelligent people who can’t communicate using spoken or written language (including sign language) but can do so using other methods, which shows that language processing isn’t essential to our intelligence.

            The way we learn information and integrate it into our neural networks is vastly different from how we train artificial models using machine learning. Even if we just take language processing: we definitely don’t learn by reading the entirety of written human language many times over, regardless of what language it’s written in, until we understand its underlying mechanics well enough to form plausible strings of word-chunks without necessarily understanding the concepts behind those word-chunks.

            • Saganastic
              1 · 1 year ago

              I agree, there’s more going on in a human brain. But fundamentally both humans and LLMs use neural networks. The design of the neural network in an LLM is much simpler than the neural network in a human.

              But they both “think” to come up with an answer. They both cross-reference learned information. They both are able to come up with an answer that is statistically likely to be correct based on what they’ve learned.

              There’s a ton of potential to take the neural networks in LLMs beyond just language: to have them conceptualize abstract ideas the way a human would, or to add specialized subsections to the model for math and logic. I think we’re going to see a ton of development in this area.

              And I think you’re right, they’re not exactly the same as humans. But fundamentally there is a lot of similarity. At the end of the day, they are modeled after human brains.
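              The “adjust weights from examples” mechanism being discussed here can be shown at its smallest possible scale: a single artificial neuron (a perceptron) nudging its weights toward the statistically expected answer. This is a toy sketch learning the AND function — nothing like the scale of a brain or an LLM — but it is the same basic training loop:

```python
# A single artificial neuron learning the AND function by nudging
# its weights toward the expected output on each training example.
# Toy illustration only; real LLMs stack billions of such units.

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # one weight per input
bias = 0.0
rate = 0.1       # learning rate

def predict(x):
    """Fire (return 1) if the weighted sum of inputs clears the bias."""
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(20):              # a few passes over the data
    for x, target in samples:
        error = target - predict(x)
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in samples])  # prints [0, 0, 0, 1] once trained
```

              Whether that mechanical similarity amounts to “thinking” is, of course, exactly what this thread is debating.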