That’s the training process. After that you can just run it with a single GPU, in a few seconds.
Yes, thankfully the reasonable tech companies offering these services have decided to stop training after doing it once. The insane increase in energy consumption and in hardware manufacturing for datacenter components and accelerators is purely coincidental and has nothing to do with demand for gimmicky generative AI services. And while we’re at it, let’s conveniently ignore the rising inference cost of ever more complex models.
I wonder how many artists they could just pay for what they spend on all that…