• @ricecake@sh.itjust.works
      17 points · 6 months ago

      Yeah, LLMs are a really great advancement in language processing, and the ability to let them hook into other systems after sussing out what the user means is legitimately pretty cool.
      The issue is that people keep mistaking articulate mimicry of confidence and knowledge as actual knowledge and capability.

      It’s doubly frustrating at the moment because people keep thinking that LLMs are what AI is, and not just a type of AI. It’s like how now people hear “crypto” and assume you’re talking about the currency scheme, which is needlessly frustrating if you work in the security sector.

      Making a system that looked at your purchase history (there’s no real way to get that data reliably otherwise), identified the staple goods you buy often, and then tried to predict the cadence you buy them at would be a totally feasible AI problem. It wouldn’t be even remotely appropriate for an LLM until the system found the price by (probably) crudely scraping grocery store websites and then wanted to tell you where to go, because they’re good at things like “turn this data into a friendly shopping list message”.
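      To be concrete, the "predict the cadence" part doesn’t need an LLM at all. Here’s a minimal sketch of that idea: average the gaps between past purchase dates of a staple item to estimate how often you buy it (the item and dates are invented for illustration, and real purchase data would obviously be messier):

      ```python
      from datetime import date, timedelta

      def predict_next_purchase(purchase_dates):
          """Estimate cadence as the average gap (in days) between purchases,
          then project the next likely purchase date from the last one."""
          dates = sorted(purchase_dates)
          gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
          cadence_days = sum(gaps) / len(gaps)
          next_date = dates[-1] + timedelta(days=round(cadence_days))
          return cadence_days, next_date

      # Hypothetical purchase history for milk
      milk = [date(2024, 1, 2), date(2024, 1, 9), date(2024, 1, 17), date(2024, 1, 23)]
      cadence, next_buy = predict_next_purchase(milk)
      print(f"bought every ~{cadence:.0f} days, next around {next_buy}")
      ```

      No language model anywhere in that loop; the LLM would only come in at the end, to phrase the result as a friendly message.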

      • @Laereht@lemmy.world
        3 points · 6 months ago

        To be completely fair, the confusion is because of the marketing. You and I both know that Tesla cars can’t really drive themselves, for the same reasons you outlined, but the typical person sees “autonomous mode” or “self-driving” applied to what they’re buying.

        People treat LLMs like something out of a superhero movie because they’re led to believe that’s the case. The people shoveling in money based on promises and projections are the root cause.

      • kamenLady.
        1 point · 6 months ago (edited)

        I would even say LLMs are an important part of what will eventually become an AI, and not a type of AI in themselves.

        • @ricecake@sh.itjust.works
          3 points · 6 months ago

          There’s a conflation of terms.

          One sense of AI is artificial intelligence as a field: a huge swath of computer algorithms, techniques, and study relating to machines measuring inputs, pulling information from them, and making decisions based on what they deduce. Sometimes it’s little more than a handful of equations that capture how to group things together by similarity. What matters is that it’s demonstrating intelligence, or some manner of operating on knowledge.
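          That “handful of equations that group things by similarity” can genuinely be this small. A toy illustration (the points and centroids are made up): one assignment step of nearest-centroid clustering, which is the core of k-means:

          ```python
          def assign_to_centroids(points, centroids):
              """Label each 1-D point with the index of its nearest centroid.
              Nothing here but a distance measure and a minimum."""
              return [
                  min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
                  for p in points
              ]

          labels = assign_to_centroids([1.0, 1.2, 9.8, 10.1], centroids=[1.0, 10.0])
          print(labels)  # the points fall into two groups purely by similarity
          ```

          That’s squarely “AI” in the first sense, and there isn’t a neural network in sight.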

          The other sense of AI is as a synonym for “a general purpose intelligent system of at least human level”.

          Your phone’s autocomplete is an example of the first sense of AI. The second sense doesn’t exist.

          There’s a tendency for people to want to remove the AI label from anything they’re used to, or that isn’t like that second sense.

      • @Specal@lemmy.world
        1 point · 6 months ago

        People are just really bad at prompt engineering, so they aren’t good at getting LLMs like Gemini and GPT to do what they want.

        You can train them, within a conversation, to get good at specific tasks. They’re very useful; you just gotta know how to talk to them.

        • @ricecake@sh.itjust.works
          1 point · 6 months ago

          The issue is that it’s a language model. You can go a long way by manipulating language to get useful results, but it’s still fundamentally limited by language’s inability to perform reasoning, only to mimic it.

          Syntax can only take you so far, and it won’t always take you to the right place. Eventually you need something that can reason about the underlying meaning.

          • @Specal@lemmy.world
            1 point · 6 months ago

            It’s still a computer at the end of the day, so just use logic. It responds well to it: you remove its ability to be creative and tell it what you want to accomplish.