• @boonhet@lemm.ee
        7 points · edited · 9 months ago

        Imagine a standardized API where you plug in either your own LLM running locally, your own LLM running on a server (for enthusiasts or companies), or a third-party LLM service over the Internet, to power an optional AI assistant that you can easily disable.

        Regardless of your DE, you could choose if you want an AI assistant and where you want the model to run.
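
        A rough sketch of what that pluggable backend API could look like; every name here (AssistantBackend, LocalBackend, and so on) is hypothetical, not taken from any real spec:

        ```python
        # Hypothetical sketch of a standardized assistant-backend API.
        # None of these names come from a real spec; they only illustrate the idea.
        from abc import ABC, abstractmethod

        class AssistantBackend(ABC):
            """Anything that can answer a prompt: local, self-hosted, or third-party."""

            @abstractmethod
            def complete(self, prompt: str) -> str: ...

        class LocalBackend(AssistantBackend):
            def __init__(self, model_path: str):
                self.model_path = model_path  # e.g. a model file on disk

            def complete(self, prompt: str) -> str:
                raise NotImplementedError("call into a local inference runtime here")

        class RemoteBackend(AssistantBackend):
            def __init__(self, base_url: str, api_key: str | None = None):
                self.base_url, self.api_key = base_url, api_key

            def complete(self, prompt: str) -> str:
                raise NotImplementedError("POST the prompt to base_url here")

        def backend_from_settings(settings: dict) -> AssistantBackend | None:
            """The DE reads this from user settings; None means the assistant is disabled."""
            if not settings.get("assistant_enabled", False):
                return None
            if settings["backend"] == "local":
                return LocalBackend(settings["model_path"])
            return RemoteBackend(settings["url"], settings.get("api_key"))
        ```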

        • @hackris@lemmy.ml
          4 points · 9 months ago

          I’ve had this idea for a long time now, but I don’t know shit about LLMs. GPT-style models can be run locally though, so I guess only the API part is missing.
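
          For what it’s worth, a small GPT-style model really does run locally with just the Hugging Face transformers library; gpt2 below is only the smallest convenient example:

          ```python
          # Runs a small GPT-style model entirely on this machine.
          # Requires: pip install transformers torch
          from transformers import pipeline

          generator = pipeline("text-generation", model="gpt2")  # one-time download, then offline
          result = generator("The desktop assistant replied:", max_new_tokens=40)
          print(result[0]["generated_text"])
          ```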

          • @boonhet@lemm.ee
            3 points · 9 months ago

            I’ve run LLMs locally before; it’s the unified API for digital assistants that would interest me. Then we’d just need an easy way for laypeople to acquire LLMs, but any bigger DE or distro could probably ship a setup wizard for that.
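
            Something close to that unified API already exists in practice: local servers like llama.cpp’s server and Ollama speak the OpenAI-compatible protocol, so the same client code works against a local or hosted model by changing only the base URL. A sketch using the openai Python package, assuming a local Ollama instance on its default port; the model name is just an example:

            ```python
            # Same client code for local and hosted models; only base_url/model change.
            # Requires: pip install openai
            from openai import OpenAI

            # Pointed at a local Ollama server here; swap base_url and api_key
            # for any hosted OpenAI-compatible service instead.
            client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused-locally")

            reply = client.chat.completions.create(
                model="llama3",  # whichever model the local server has pulled
                messages=[{"role": "user", "content": "Draft a short email for me."}],
            )
            print(reply.choices[0].message.content)
            ```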

      • @superguy@lemm.ee
        4 points · 9 months ago

        Yeah. I’m really annoyed by this trend of making programs that could function offline require a connection to a server.

        • @AccidentalLemming@lemmy.world
          1 point · 8 months ago

          It’s not so much a trend as companies loving software-as-a-service, because it gives them a lot of control and, in many cases, recurring income.

      • Not just hypothetically but practically, too. A FOSS program called KoboldAI lets you run LLMs locally on your computer, and a project that takes advantage of this is the koboldassistant project. You can essentially make your own Alexa, Cortana, or Siri that doesn’t collect your data and belongs to you.
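
        For the curious, a locally running KoboldAI/KoboldCpp instance exposes an HTTP API you can script against; this sketch assumes KoboldCpp’s default port 5001 and the /api/v1/generate endpoint as I recall them, so check against your version:

        ```python
        # Minimal client for a locally running KoboldAI/KoboldCpp instance.
        # Port 5001 is KoboldCpp's default; nothing here leaves your machine.
        import requests

        resp = requests.post(
            "http://localhost:5001/api/v1/generate",
            json={"prompt": "User: What can you do offline?\nAssistant:", "max_length": 80},
            timeout=120,
        )
        print(resp.json()["results"][0]["text"])  # generated continuation, computed locally
        ```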

    • taanegl
      1 point · 9 months ago

      An open-source, locally run LLM on a GPU or dedicated open PCIe hardware, never touching the cloud…
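
      That setup is already possible today, e.g. with llama-cpp-python offloading all layers to the GPU; the model path below is a placeholder, and GPU offload needs a CUDA/ROCm/Vulkan-enabled build:

      ```python
      # Fully local inference with GPU offload via llama-cpp-python.
      # Requires a GPU-enabled build: pip install llama-cpp-python (see its docs)
      from llama_cpp import Llama

      llm = Llama(
          model_path="/path/to/model.gguf",  # placeholder: any downloaded GGUF model
          n_gpu_layers=-1,                   # offload every layer to the GPU
      )
      out = llm("Q: Does this request leave my machine? A:", max_tokens=32)
      print(out["choices"][0]["text"])  # nothing here touched the cloud
      ```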