Lee Duna to Technology@beehaw.org • English • 1 year ago
Apple wants AI to run directly on its hardware instead of in the cloud (arstechnica.com)
34 comments • cross-posted to: technology@lemmy.world, hackernews@derp.foo
Quokka • 1 year ago:
You can already run an LLM natively on Android devices.

@Vibrose@programming.dev • 1 year ago:
You can on iOS as well! https://apps.apple.com/us/app/private-llm/id6448106860

snowe • 1 year ago:
The hard part isn’t running AI on a device… it’s doing so while retaining battery life, performance, and privacy.

@Amaltheamannen@lemmy.ml • 1 year ago:
Privacy is also easy with a local LLM. Performance and battery life, not so much.

JackGreenEarth • 1 year ago:
Which one do you use? I tried MLCChat, but all three times it either showed a Java error or generated gibberish. What’s worked for you?