“He constantly asked AI to identify pictures of bugs.”
Whatever you’re doing, do it locally.
The cloud is someone else’s computer.
The gap between the models you can run locally and the actual large language models is huge, though.
Narrowing every year.
The high end for video is still going nuts, but the high end for LLMs seems to be petering out.
I would love to run an LLM on my laptop, but I'm not aware of any that would run on it and could, say, accurately summarize the long news articles I read. The gap is still huge; it's maybe a bit smaller if you have GPUs with a lot of VRAM, or a data center to run SOTA open-source models like DeepSeek.
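FWIW, small local models tend to do better at summarization than at open-ended Q&A, since the article itself is in the prompt. A minimal sketch with the Ollama Python client, assuming Ollama is installed and serving locally; the model name is just an example, swap in whatever fits your RAM:

```python
# Minimal local-summarization sketch using the Ollama Python client
# (pip install ollama). Assumes the Ollama daemon is running and the
# example model below was already pulled via `ollama pull llama3.1:8b`.
import ollama

def summarize(article_text: str, model: str = "llama3.1:8b") -> str:
    # Ask the local model for a short summary; nothing leaves the machine.
    response = ollama.chat(
        model=model,  # example model name, not a recommendation
        messages=[
            {
                "role": "user",
                "content": "Summarize this article in 3-5 bullet points:\n\n"
                + article_text,
            }
        ],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    with open("article.txt") as f:
        print(summarize(f.read()))
```

Very long articles may need to be chunked and summarized in pieces if they exceed the model's context window.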
Which is why I run my AI models locally. I get access to most of the information the internet has to offer, and it lets me run what amounts to a Google-class search engine without letting Google or anyone else know what I'm looking up. Better yet, you can get the uncensored models, and they'll tell you anything you need to know without getting you put on watch lists. :D