That’s good nuance, but I feel we’re missing the necessary conversation about corporate ethics. Will the suits see dangerously low quality or poor safety standards as merely part of legal overhead costs? Will shareholders support that decision, and isn’t there profound moral hazard in letting it ultimately be a matter of profit motives? And to put it bluntly, are existing regulators equipped to protect consumers from an AI-developing tech sector that still wants to move fast and break things? Industries that act this way cannot self-regulate, and they are moving faster than ever.
Oh don’t get me wrong, I totally agree with you there. I was not trying to argue the ethical dilemma at all here - I was just stating that the original comment was objectively wrong in their analysis of “we don’t have anywhere near the tech to be able to even begin to get near a workable solution here”.
But the ethics and morality questions are still extremely unanswered right now.
IMO, the answers to all your questions are that companies are jumping on this way too fast (some more than others) and not doing this safely, and the collateral damage is becoming way too high. Our government and regulators are nowhere near equipped to solve this problem either. And our entire financial system, which pushes for constantly increasing profits, is not equipped to make sure this plays out safely, which would require losses and slow evolution now in order to safely reach a long-term goal.
An argument could be made that the “collateral damage” is warranted since autonomous vehicles will save so many lives in the long term, but that’s a hard question to answer. I generally think there’s too much “shooting from the hip” going on at the moment. Tesla and Cruise are currently demonstrating just how much we shouldn’t be trusting these companies. I think Waymo has generally been “acceptable” in terms of their risk and safety, but not everyone is running the way they are.
That is kind of the point of capitalism. Not only when it comes to AI. Amazon didn’t care if it destroyed small bookshops or mom and pop stores. They found a way to sell things more efficiently and made money with it. The way economy works dictates you want to expand and grow as fast as possible. Or your competition will do it. Same goes for self-driving cars and AI.
The way we mitigate that in capitalism is regulations and laws.
Someone owning a roller coaster will also maybe not have the same balance in mind when it comes to fatalities vs operating costs.
While I agree with your assessment, I just don’t think capitalism, at least in its current form, is equipped to handle this at all. You could say this is due to our government’s ineptitude, but either way we are not addressing these problems appropriately.
Our regulatory bodies are being constantly undermined by out-of-control presidents and Congress. And the people making the laws about these things do not even begin to understand the things they’re legislating (see: “does TikTok use wifi?” etc).
Regulatory bodies were made to fill this gap and fix this problem, but they are actively being meddled with, strong-armed, and twisted into political entities.
Hehe, that’s not the country I live in, but I get it. I think there is quite some difference in how proactively or reactively the US and the EU, for example, regulate things. But it kinda doesn’t matter if the system is theoretically equipped but just rotten in practice. And it doesn’t make it better that AI is advancing crazy fast, while our government just managed to phase out the fax machines… I mean, it’s going to happen anyway.