Avram Piltch is the editor in chief of Tom’s Hardware, and he’s written a thoroughly researched article breaking down the promises and failures of LLM AIs.

  • @FlapKap@feddit.dk
    24 points · 1 year ago

    I like the point about LLMs interpolating data while humans extrapolate. I think that sums up a key difference in “learning”. It’s also an interesting point that we anthropomorphise ML models by using words such as “learning” or “training”, but I wonder if there are better words to use. “Fitting”?
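“Fitting” is the statistician’s word for it: a model’s parameters are adjusted to minimise error against observed data. A minimal least-squares sketch of that idea (the data values here are made up for illustration, not from the article):

```python
# Ordinary least squares for y ~ slope * x + intercept, by hand.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]   # noisy samples of roughly y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Slope = covariance(x, y) / variance(x); intercept pins the line to the means.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))   # close to 2 and 1
```

“Fitting” in this sense never invents anything outside what the data supports, which is the thrust of the interpolation point above.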

    • Amju Wolf
      10 points · 1 year ago

      Isn’t interpolation and extrapolation the same thing effectively, given a complex enough system?

      • @CanadaPlus@lemmy.sdf.org
        3 points · 1 year ago

        Depending on the geometry of the state space, very literally yes. Think about a sphere: there’s a straight line passing from Denver to Guadalajara, roughly hitting Delhi on the way. Is Delhi in between them (interpolation), or behind one from the other (extrapolation)? Kind of both, unless you move the goalposts to add distance limits on interpolation, which could themselves be broken by another geometry.
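The sphere example above can be checked numerically: treat each city as a unit vector and measure how far Delhi sits from the great circle through Denver and Guadalajara (coordinates are approximate; this is a sketch of the geometry, not anything from the article):

```python
import math

def unit_vector(lat_deg, lon_deg):
    """Unit vector on the sphere for a latitude/longitude in degrees."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def cross_track_km(p, a, b, radius_km=6371.0):
    """Distance from point p to the great circle through a and b."""
    n = cross(a, b)
    norm = math.sqrt(sum(x * x for x in n))
    n = tuple(x / norm for x in n)           # unit normal of the great-circle plane
    dot = sum(x * y for x, y in zip(n, p))
    return abs(math.asin(dot)) * radius_km   # angular offset from the plane

denver      = unit_vector(39.74, -104.99)
guadalajara = unit_vector(20.67, -103.35)
delhi       = unit_vector(28.61,   77.21)

# Delhi lies only a few hundred km off the Denver–Guadalajara great circle,
# even though it is nowhere near the short arc between the two cities.
print(cross_track_km(delhi, denver, guadalajara))
```

Whether Delhi counts as “between” the other two therefore really does come down to whether you take the short arc or the whole circle.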

      • @maynarkh@feddit.nl
        2 points · 1 year ago

        No. Repeated extrapolation would eventually make everything that ever could be made; constant interpolation would result in creating the same “average” work over and over.

        The difference is infinite vs zero variety.
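The “infinite vs zero variety” point is easy to sketch: linear interpolation between known points can never leave the range of those points, while even the simplest extrapolation escapes it (a toy illustration, not how any particular model works):

```python
# Toy data: y values observed at x = 0..4.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 2.0, 5.0, 4.0]

def lerp(x):
    """Linear interpolation: clamp to the known range, blend neighbours."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = max(j for j in range(len(xs)) if xs[j] <= x)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] * (1 - t) + ys[i + 1] * t

def extrapolate(x):
    """Naive linear extrapolation from the last two points."""
    slope = (ys[-1] - ys[-2]) / (xs[-1] - xs[-2])
    return ys[-1] + slope * (x - xs[-1])

print(lerp(2.5))          # stays between the observed values
print(extrapolate(10.0))  # lands far outside anything observed
```

Interpolated outputs are forever trapped in the convex hull of the training data; extrapolation has no such ceiling, for better or worse.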

        • @CanadaPlus@lemmy.sdf.org
          1 point · edited · 1 year ago

          Fun fact: an open interval is topologically isomorphic to the entire number line. In practice they’re often different, but you started talking about limits (“eventually”), which is exactly where that will come up.
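The fun fact above is witnessed by an explicit map: tan stretches the open interval (-π/2, π/2) continuously and bijectively onto the whole real line, with atan as its continuous inverse (a quick numerical check, not a proof):

```python
import math

# f: (-pi/2, pi/2) -> R and its inverse; both are continuous bijections,
# so the open interval and the real line are topologically the same space.
f = math.tan
f_inv = math.atan

for x in [-1.5, -0.5, 0.0, 0.5, 1.5]:      # points inside the open interval
    assert abs(f_inv(f(x)) - x) < 1e-12
for y in [-1e6, -1.0, 0.0, 1.0, 1e6]:      # arbitrary reals, however large
    assert abs(f(f_inv(y)) - y) <= 1e-6 * max(1.0, abs(y))
    assert -math.pi / 2 < f_inv(y) < math.pi / 2

print("round trips OK")
```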

    • brie
      6 points · 1 year ago

      What about “tuning”, to align with “fine-tuning”?

    • frog 🐸
      6 points · 1 year ago

      I also like the point about interpolation vs extrapolation. It’s demonstrated when you look at art history (or the history of any other creative field). Humans don’t look at paintings and create something that’s predictable based on those paintings. They go “what happens when I take that idea and go even further?” An LLM could never have invented Cubism after looking at Paul Cézanne’s paintings, but Pablo Picasso did.