• dude (OP) · 7 days ago

      The gap between the models you can run locally and the truly large language models is still huge, though.

      • mindbleach@sh.itjust.works · 7 days ago

        Narrowing every year.

        The high end for video is still going nuts, but the high end for LLMs seems to be petering out.

        • dude (OP) · 7 days ago

          I would love to run some LLMs on my laptop, but I'm not aware of any that would run on it and could, say, accurately summarize the long news articles I read. The gap is still huge; it's maybe a bit smaller if you have GPUs with a lot of VRAM, or a data center to run SOTA open-source models like DeepSeek.
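
          For anyone who wants to try it anyway, a minimal sketch of the usual local setup, assuming Ollama is running on localhost with a small model such as llama3.2 already pulled; the model name, prompt, and truncation limit below are placeholders, and summary quality on long articles will vary:

          ```python
          # Rough sketch: summarize an article with a local model via Ollama's
          # HTTP API (http://localhost:11434). Assumes `ollama pull llama3.2`
          # has been run; the model name and prompt are illustrative only.
          import requests

          def summarize(article_text: str, model: str = "llama3.2") -> str:
              prompt = (
                  "Summarize the following news article in five bullet points, "
                  "sticking strictly to what the text says:\n\n"
                  + article_text[:8000]  # crude truncation to fit a small context window
              )
              resp = requests.post(
                  "http://localhost:11434/api/generate",
                  json={"model": model, "prompt": prompt, "stream": False},
                  timeout=300,
              )
              resp.raise_for_status()
              return resp.json()["response"]

          if __name__ == "__main__":
              with open("article.txt", encoding="utf-8") as f:
                  print(summarize(f.read()))
          ```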

  • CrayonDevourer@lemmy.world · 7 days ago

    Which is why I run my AI models locally. I get most of the information the internet has available, and it basically lets me run a Google-class search engine without letting Google or anyone else know what I'm looking up. Better yet, you can get the uncensored models and they'll tell you anything you need to know without putting you on a watch list. :D