

I’m at least enjoying the many comments calling her out, but damn she just doubles down even after being given many many examples of him being a far-right nationalist monster who engaged in attempts to outright subvert democracy.
The Oracle deal seemed absurd, but I didn’t realize how absurd until I saw Ed’s compilation of the numbers. Notably, even if OpenAI hits its projected revenue numbers (which are absurdly optimistic, like bigger than Netflix, Spotify, and several other services combined), paying Oracle (along with everyone else it has promised to buy compute from) will keep it net negative until 2030, meaning it has to raise even more money.
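Just to make the shape of that argument concrete, here is a minimal sketch with entirely made-up placeholder figures (not Ed’s actual numbers, purely illustrative): even if you grant the rosy revenue projections, the committed compute spend can swamp them every single year, and that gap is what has to be covered by raising ever more money.

```python
# Purely illustrative placeholder figures in billions of USD per year --
# NOT Ed's compiled numbers, just the shape of the argument.
projected_revenue = {2026: 30, 2027: 60, 2028: 100, 2029: 130, 2030: 170}
committed_compute = {2026: 45, 2027: 80, 2028: 120, 2029: 150, 2030: 175}  # Oracle + everyone else

for year in sorted(projected_revenue):
    gap = projected_revenue[year] - committed_compute[year]
    print(f"{year}: {gap:+d} bn before payroll, training runs, or anything else")
# Every year comes out negative, so the shortfall has to be plugged
# with new fundraising rounds rather than operating revenue.
```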
I’ve been assuming Sam Altman has absolutely no real belief that LLMs would lead to AGI and has instead been cynically cashing in on the sci-fi hype, but OpenAI’s choices don’t make any long-term sense if AGI isn’t coming. The obvious explanation is that at this point he simply plans to grift and hype (while staying technically within the bounds of legality) to buy a few years of personal enrichment. And to even ask what his “real beliefs” are gives him too much credit.
Just to remind everyone: the market can stay irrational longer than you can stay solvent!
This feels like a symptom of liberals having a diluted, incomplete understanding of what made past movements that utilized protest succeed or fail.
It is pretty good as a source for science fiction ideas. I mean, lots of their ideas originate from science fiction, but their original ideas would make for fun fantasy/sci-fi concepts. Like, looking at their current front page… https://www.lesswrong.com/posts/WLFRkm3PhJ3Ty27QH/the-cats-are-on-to-something cats deliberately latching on to humans as the laziest way of advancing their own values across the future seems like a solid piece of fantasy worldbuilding…
To add to blakestacey’s answer, his fictional worldbuilding concept, dath ilan (which he treats like rigorous academic work to the point of citing it in tweets), uses prediction markets in basically everything, from setting government policy to healthcare plans to deciding what restaurant to eat at.
Every tweet in that thread is sneerable, whether from failing to understand the current scientific process, vastly overestimating how easily cutting-edge research can be turned into cleanly resolvable predictions, or assuming prediction markets are magic.
He’s the one that used the phrase “silent gentle rape”? Yeah, he’s at least as bad as the worst evo-psych pseudoscience misogyny posted on lesswrong, with the added twist he has a position in academia to lend him more legitimacy.
He had me in the first half; I thought he was calling out the rationalists’ problems (even if dishonestly disassociating himself from them). But then his recommended solution was prediction markets (a concept which rationalists have in fact been trying to play around with, albeit at a toy-model level with fake money).
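(For anyone curious what those toy models actually involve, here is a minimal play-money sketch, assuming Hanson’s logarithmic market scoring rule, which is a common mechanism for this kind of market. All it does is turn bets into an implied probability; it does nothing to guarantee the underlying question can ever be cleanly resolved, which is the actual hard part.)

```python
import math

class LMSRMarket:
    """Toy play-money prediction market for a single yes/no question,
    using the logarithmic market scoring rule (LMSR)."""

    def __init__(self, liquidity=100.0):
        self.b = liquidity                       # higher b = prices move more slowly
        self.shares = {"yes": 0.0, "no": 0.0}    # net shares sold per outcome

    def _cost(self):
        return self.b * math.log(sum(math.exp(q / self.b) for q in self.shares.values()))

    def price(self, outcome):
        """Current implied probability of `outcome`."""
        total = sum(math.exp(q / self.b) for q in self.shares.values())
        return math.exp(self.shares[outcome] / self.b) / total

    def buy(self, outcome, amount):
        """Buy `amount` shares of `outcome`; returns the play-money cost."""
        before = self._cost()
        self.shares[outcome] += amount
        return self._cost() - before

market = LMSRMarket()
print(f"opening price: {market.price('yes'):.2f}")              # 0.50
cost = market.buy("yes", 50)                                    # someone bets on "yes"
print(f"paid {cost:.2f}, price now {market.price('yes'):.2f}")  # ~0.62
```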
The author occasionally posts to slatestarcodex, we kind of tried to explain what was wrong with Scott Alexander and I think she halfway got it… I also see her around the comments in sneerclub occasionally, so at least she is staying aware of things…
Poor historical accuracy in favor of meme potential is why our reality is so comically absurd. You can basically use the simulation hypothesis to justify anything you want by proposing some weird motive or goals of the simulators. It almost makes God-of-the-gaps religious arguments seem sane and well-founded by comparison!
Within the world-building of the story, the way the logic is structured makes sense in a ruthless utilitarian way (although Scott’s narration and framing is way too sympathetic to the murderously autistic angel that did it), but taken in the context outside the story of the sort of racism Scott likes to promote, yeah it is really bad.
We had previous discussion of Unsong on the old site. (Kind of cringing about the fact that I liked the story at one point and only gradually noticed all the problematic stuff and the poor writing quality.)
I’ve seen this concept mixed with the simulation “hypothesis”. The logic goes that if future simulators are running a “rescue simulation” but only cared (or at least cared more) about the interesting or more agentic people (i.e. rich/white/westerner/lesswronger), they might only fully simulate those people and leave simpler nonsapient scripts/algorithms piloting the other people (i.e. poor/irrational/foreign people).
So basically literally positing a mechanism by which they are the only real people and other people are literally NPCs.
Chiming in to agree that your prediction write-ups aren’t particularly good. Sure they spark discussion, but the whole forecasting/prediction game is one we’ve seen the rationalists play many times, and it is very easy to overlook or at least undercount your misses and overhype your successes (see the scoring sketch below).
In general… I think your predictions are too specific and too optimistic…
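(If anyone does want to keep score honestly, the standard tool is a proper scoring rule that punishes confident misses as much as it rewards confident hits. A minimal Brier-score sketch with made-up predictions:)

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and actual outcomes.
    0.0 is perfect; always saying 50% gets you 0.25; higher is worse."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical track record: (claimed probability, outcome: 1 = happened, 0 = didn't)
predictions = [
    (0.90, 1),   # a confident hit
    (0.80, 0),   # an overconfident miss
    (0.60, 1),
    (0.95, 0),   # the kind of miss that tends to get quietly forgotten
]
print(f"Brier score: {brier_score(predictions):.3f}")  # ~0.428, i.e. worse than always guessing 50%
```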
Every time I see a rationalist bring up the term “Moloch” I get a little angrier at Scott Alexander.
I use the term “inspiring” loosely.
Depends what you mean by “steelman”. If you take their definition at its word, then they fail to try all the time; just look at any of their attempts at understanding leftist writing or thought. Of course, it often actually means “entirely rebuild the opposing argument into something different” (because they don’t have a basic humanities education or don’t want to actually properly read leftist thought), and they can’t resist doing that!
Putting this into the current context of LLMs… Given how Eliezer still repeats the “diamondoid bacteria” line in his AI-doom scenarios, multiple decades after Drexler’s nanotech was thoroughly debunked (even as it slightly contributed to inspiring real science), I bet memes of LLM-AGI doom and utopia will last long after the LLM bubble pops.
Lesswronger notices that all of the rationalists’ attempts at making an “aligned” AI company keep failing: https://www.lesswrong.com/posts/PBd7xPAh22y66rbme/anthropic-s-leading-researchers-acted-as-moderate
Notably, the author doesn’t realize capitalism is the root problem misaligning the incentives, and it takes a comment directly pointing it out for them to get as far as noticing a link to the cycle of enshittification.
I brought this up right when it came out: https://awful.systems/post/5244605/8335074
(Not demanding credit on keeping better up to date on hate-reading the EA forums, just sharing the previous discussion)
Highlights from the previous discussion… I had thought Thiel was entirely making up his own wacky theology (because it was a distinctly different flavor of insanity from the typical right-wing Fundamentalist/Evangelical kind), but actually there is a “theologian” (I use that term loosely), René Girard, who developed the theology he is describing.
I hadn’t heard of Black Lotus. Also, the article fails to mention rationalist/lesswrong ties to that AI-doom-focused Zen Buddhism cult that was discussed on Lesswrong recently (looking it up, the name is Maple), so you can add that to the cult count.