• @gosling@lemmy.world
    57 · 11 months ago

    Does it really though? It seems to me that once you nail the general intelligence, you’ll just need to provide the supplemental information (e.g. new documentation) for it to give an accurate response.

    Bing already does this to some extent by connecting its bot to internet searches.
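
    A rough sketch of what that kind of supplemental-information prompting could look like; ask_llm() here is a hypothetical stand-in for whatever model sits behind the bot, and the documentation snippet is made up:

```python
# Minimal sketch of search/documentation-augmented prompting: fetch
# up-to-date docs or search results, then hand them to the model
# alongside the question. ask_llm() is a hypothetical placeholder.

def build_prompt(question: str, docs: list[str]) -> str:
    """Prepend retrieved documentation so the model can answer from
    material that was never in its training data."""
    context = "\n\n".join(docs)
    return (
        "Answer the question using only the documentation below.\n\n"
        f"Documentation:\n{context}\n\n"
        f"Question: {question}"
    )

def ask_llm(prompt: str) -> str:
    # Placeholder: in practice this would call whatever chat model you use.
    return "(model response goes here)"

if __name__ == "__main__":
    fresh_docs = ["v2.0 release notes: frobnicate(x) was renamed to frob(x)."]
    print(ask_llm(build_prompt("How do I call frobnicate in v2.0?", fresh_docs)))
```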

      • @gosling@lemmy.world
        50 · 11 months ago

        I can think of four aspects needed to emulate a human response: basic knowledge on various topics, logical reasoning, contextual memory, and the ability to communicate; and ChatGPT seems to possess all four to a certain degree.

        Regardless of what you think is or isn’t intelligent, for programming help you just need something to go through tons of text and present the information most likely to help you, maybe modify it a little to fit your context. That doesn’t sound too far-fetched considering what we have today and how much information is available on the internet.
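
        A toy illustration of the “present the information most likely to help you” part: even a crude bag-of-words ranker over snippets captures the retrieval idea (real systems use learned embeddings, and the corpus below is made up):

```python
# Toy retrieval sketch: rank snippets by cosine similarity of word counts
# and return the ones most likely to help with the question.
# Real systems use learned embeddings; this only illustrates the idea.

import math
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[w] * wb[w] for w in set(wa) & set(wb))
    norm = math.sqrt(sum(c * c for c in wa.values())) * math.sqrt(sum(c * c for c in wb.values()))
    return dot / norm if norm else 0.0

def top_snippets(question: str, snippets: list[str], k: int = 3) -> list[str]:
    """Return the k snippets most similar to the question."""
    return sorted(snippets, key=lambda s: similarity(question, s), reverse=True)[:k]

if __name__ == "__main__":
    corpus = [  # made-up documentation snippets
        "Open a file with open(path) and close it when you are done.",
        "Send an HTTP request with requests.get(url) from the requests library.",
        "Build a list with a comprehension: [x * 2 for x in items].",
    ]
    print(top_snippets("how do I send an http request", corpus, k=1))
```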

        • @gnus_migrate@programming.dev
          6 · 11 months ago

          > I can think of four aspects needed to emulate a human response: basic knowledge on various topics, logical reasoning, contextual memory, and the ability to communicate; and ChatGPT seems to possess all four to a certain degree.

          LLMs cannot reason, nor can they communicate. They can give the illusion of doing so, and even then only if they have enough data in the domain you’re prompting them with. Try to go into topics that aren’t as popular on the internet and the illusion breaks down pretty quickly. This isn’t “we’re not there yet”; it’s a fundamental limitation of the technology. LLMs are designed to mimic the style of a human response; they don’t have any logical capabilities.

          > Regardless of what you think is or isn’t intelligent, for programming help you just need something to go through tons of text and present the information most likely to help you, maybe modify it a little to fit your context. That doesn’t sound too far-fetched considering what we have today and how much information is available on the internet.

          You’re the one who brought up general intelligence, not me, but to respond to your point: the problem is that people had an incentive to contribute that text, and it wasn’t necessarily monetary. Whether it was for internet points or just building a reputation, people got something in return for their time. With LLMs, that incentive is gone, because no matter what they contribute it’s going to be fed to a model that won’t attribute those contributions back to them.

          Today LLMs are impressive because they use information that was contributed by millions of people. The more people rely on ChatGPT, the less new information will be available to train it on, and the less impressive these models will be over time.

    • @MBM
      4 · 11 months ago

      What if the documentation is lacking? Experienced users will still know how a library works because they’ve tried things out, but that information won’t be available if they never talk about it online.

    • 🐱TheCat
      2 · 11 months ago

      How do people still have this much faith in the tools humans build after seeing the climate change caused by the Industrial Revolution?

    • @alokir@lemmy.world
      1 · 11 months ago

      I was working on a hobby project where I used a niche framework in a somewhat uncommon way. I was stuck on a concept that I think the documentation didn’t explain well enough, at least for me, and I couldn’t find any resource on it aside from the docs.

      I asked Bing to write a piece of code that does what I wanted and to explain each line. The code worked perfectly and the explanation was easy to understand as well. All it did was search the framework’s official documentation. It really blew my mind.