AI?
Which is funny since that does solve a lot of the problems.
If it’s completely open source at least.
Like open-source datasets and a model that can be run locally mean it's not trained on stolen data and it's not spying on people for more data.
And if it runs locally on a GPU, it’s no worse for the environment than gaming. Really the big problem with the data center compute is the infrastructure of getting that data around.
Blockchain mining, completely open source and run on local GPUs, is no worse for the environment than gaming via this logic. I think that’s easily disprovable.
Except blockchain mining has no practical use that can't be solved with simpler tech; AI does have those applications.
Not yet, not for some time, and certainly not on a single local GPU running at minimal use. Both you and the commenter I was responding to seem to forget how massive the @home projects were.
For simple productivity tasks like Copilot or text gen like ChatGPT.
It absolutely is doable on a local GPU.
Source: I do it.
Sure, I can't do auto-running simulations to find new drugs or protein sequencing or whatever. But it helps me code. It helps me digest software manuals. That's honestly all I want.
Also, massive compute projects like the @home projects are good?
Local LLMs run fine on a 5-year-old GPU, a 3060 with 12 GB. I'm getting performance on par with cloud-hosted models. I'm upgrading to a 5060 Ti just because I wanted to play with image gen.
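For anyone wondering why 12 GB is enough, here's a back-of-envelope sketch. The model size and quantization level are illustrative assumptions (a typical 8B-parameter model at 4-bit quantization), not specs from this thread:

```python
# Rough VRAM estimate for a quantized local LLM.
# Assumptions (illustrative): weights dominate memory use, plus a
# fixed allowance for the KV cache and runtime buffers.

def vram_gb(params_billions: float, bits_per_weight: float, overhead_gb: float = 2.0) -> float:
    """Approximate VRAM needed: quantized weights + fixed overhead, in GB."""
    weights_gb = params_billions * 1e9 * (bits_per_weight / 8) / 1e9
    return weights_gb + overhead_gb

# An 8B model at 4-bit: ~4 GB of weights + ~2 GB overhead = ~6 GB,
# which fits comfortably in a 3060's 12 GB.
print(round(vram_gb(8, 4), 1))  # 6.0
```

By the same math a 70B model at 4-bit needs ~37 GB, which is why the big frontier models stay in data centers while the smaller ones run locally just fine.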