Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this, and happy new year in advance.)
noodling on a blog post - does anyone with more experience of LW/EA than me know if “AI safety” people are referencing the invention of nuclear weapons as a template for regulating/forbidding “AGI”?
just after the end of the manhattan project, some of the manhattan project scientists had the idea to dispose of american nukes and ban development of nukes in any other country. that’s why we live in an era of lasting peace without nuclear weapons. /s
some EAs had a similar idea wrt spicy autocomplete development, which comes with the implied assumption that spicy autocomplete is dangerous or at least useful (as in nuclear power, civilian or military)
Yeah, my starting position would be that it was obvious to any competent physicist at the time (although there weren’t that many) that the potential energy release from nuclear fission was a real thing - the “only” work left to weaponise it or put it to peaceful use was engineering.
The analogy to “runaway X-risk AGI” is that there’s a similar straight line from ELIZA to Acausal Robot God: all that’s required is a bit of elbow grease and good old-fashioned American ingenuity. But my point is that, apart from Yud and a few others, no serious person believes this.
I don’t think it was obvious from first principles in the 30s that fission works or releases energy, but once the experimental evidence came in there was no other way to interpret it. also, people had a general sense that nuclear materials could be a source of energy, because there were attempts at controlling decay, i think in the interbellum. the other part is cult thinking and i don’t have links for this particular one
Yeah, it’s been decades since I read Rhodes’s history of the atom bomb, so I missed the years a bit. My point is that even if we couldn’t explain exactly what was happening, there was something physically there, and we knew enough about it that Oppenheimer and co. could convince the US Army to build Oak Ridge and many other facilities at massive expense.
We can’t say the same about “AI”.
I’d be surprised if Eliezer hasn’t mentioned it at some point, maybe more in the way that you’re after. Can’t find any examples though.
In his Time article, the only place he mentions nukes is in discussing what we should do to countries that have too many GPUs: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Edit: Not Mr. Yudkowsky, but see https://futureoflife.org/document/policymaking-in-the-pause/
*jerking off motion*
When they mention AI guardrails, they mean so it doesn’t become racist, spamming, abusive, and based on the largest abuse of the cultural sector since Spotify, right?
Right?
A notable article from our dear friend Nick Bostrom mentions the atmospheric auto-ignition story:
https://nickbostrom.com/papers/vulnerable.pdf
(this goes on for a number of paragraphs)
BTW, this whole article has some wild stuff if you haven’t seen it before, so buckle up. He also mentions this story in https://nickbostrom.com/existential/risks and https://existential-risk.com/concept.pdf if you want older examples.