I’m not sure why that’s a relevant distinction to make here. A statistical model is just as capable of designing (for instance) an atom bomb as a human mind is. If anything, I would much rather the machines destined to supplant me actually could think and have internal worlds of their own; that would be far less depressing.
It’s relevant to whether the model is capable of actually becoming smarter. The way these models are built at the moment puts a mathematical upper limit on what they can achieve. We don’t know exactly where that limit lies, but we do know that each step toward it takes significantly more effort and data than the last.
Without some kind of breakthrough in how we model these things (i.e. something other than LLMs), we’re not going to see AI intelligence skyrocket.
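To put rough numbers on the diminishing-returns claim: under the Chinchilla-style scaling law from Hoffmann et al. (2022), loss falls as a power law in parameters and data, so each equal-sized improvement in loss needs multiplicatively more tokens. A minimal sketch, using their published fitted constants (illustrative only, the exact values depend on the training setup):

```python
# Illustrative sketch of the Chinchilla scaling law (Hoffmann et al., 2022):
#   L(N, D) = E + A / N**alpha + B / D**beta
# Constants are the paper's fitted values; treat the outputs as
# order-of-magnitude illustration, not a prediction for any real model.

E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(N: float, D: float) -> float:
    """Predicted training loss for N parameters and D training tokens."""
    return E + A / N**alpha + B / D**beta

N = 70e9  # hold model size fixed at 70B parameters
for D in [1e9, 1e10, 1e11, 1e12, 1e13]:
    print(f"{D:.0e} tokens -> predicted loss {loss(N, D):.3f}")
```

Each 10x increase in data buys a smaller absolute loss reduction than the last (roughly 0.60, then 0.30, 0.16, 0.09 here), and the irreducible term E is a floor the loss never drops below no matter how much compute you throw at it. Whether that loss floor actually corresponds to a ceiling on intelligence is, admittedly, the contested part.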
No. A statistical model is designed by a human mind. It doesn’t design anything on its own.
If it got smarter, it could tell you, step by step, how an AI would take control of the world, but it wouldn’t have the consciousness, or the volition, to actually do it.
Humans are the dangerous part of the equation in this case.