Archive link so you can skip the paywall.
AI safety can be interpreted two ways:
- Preventing AI from killing everyone. In this sense, the safeguards are imagined, but so are the current risks.
- Preventing AI from saying stupid shit. The risk is real, but it's not as bad as it looks; we've had confidently incorrect people since the dawn of time, and yet we're still here.
Given the context, “AI safety” likely means #1. Seriously, how long are they (OpenAI employees and the media circling OpenAI) going to wallow in wishful thinking (like a pig wallows in mud), as if OpenAI had actually developed artificial intelligence? Show us Q*, then we'll talk.
If anything, the recent events made the risk of AI killing everyone even less probable. Because guess what: when you put profit before tech development, tech development slows down. The likelihood that OpenAI will develop a genuinely intelligent system has actually decreased.
In my opinion, it’s far more likely for people to use AI as a weapon to kill people than for AI to “go rogue” and destroy humanity.
While humans are doing a fairly good job on their own of being psychopathic freaks, imagine a world where police robots lay siege to neighborhoods, and where corporations use AI to maximize efficiency without regard for human suffering.
The real danger of AI is the lack of liability. If a cop kills an innocent person, you can put him on trial. If a robot kills an innocent person, it will get written off as the unfortunate collateral of technological progress (and maybe the department will have to pay the family a fine, a fine that just comes out of tax dollars anyway).
> The real danger of AI is the lack of liability.
Yup. However, good news: people might use AI for that in the future, but the scummy tactic itself is not new, so we [humans and our societies] already have a bunch of mechanisms against it. We're pretty good at finding someone to blame when this sort of thing happens, and AI won't change that.
This is the best summary I could come up with:
The palace intrigue and boardroom drama of Sam Altman’s ousting by the OpenAI board (and his victorious reinstatement earlier today) will doubtlessly go down in history as one of the most explosive episodes to ever befall Silicon Valley.
Maybe you’ve seen the startup’s org chart floating around the web but, in case you haven’t, here’s a quick recap: Unlike pretty much every other technology business that exists, OpenAI is actually a non-profit, governed wholly by its board, that operates and controls a for-profit company.
Alternate theories have swirled in the meantime, with some suggesting that the real reasons for Altman’s aggressive exit were decidedly more colorful—like accusations he pursued additional funding via autocratic Mideast regimes.
But to get too bogged down in speculating about the specific catalysts for OpenAI’s drama is to ignore what the whole episode has revealed: as far as the real world is concerned, “AI safety” in Silicon Valley is pretty much null and void.
They don’t seem to have understood that Sam had all the industry connections, the friends in high places, was well-liked, and that moving against him in a world where that kind of social capital is everything amounted to career suicide.
The “effective accelerationists” (abbreviated “e/acc”) believe that stuff like additional government regulations, “tech ethics” and “AI safety” are all cumbersome obstacles to true technological development and exponential profit.
The original article contains 1,464 words, the summary contains 226 words. Saved 85%. I’m a bot and I’m open source!