  • The Oracle deal seemed absurd, but I didn’t realize how absurd until I saw Ed’s compilation of the numbers. Notably, it means that even if OpenAI hits its projected revenue numbers (which are absurdly optimistic, bigger than Netflix, Spotify, and several other services combined), its payments to Oracle (along with everyone else it has promised to buy compute from) will leave it losing money through 2030, meaning it has to raise even more money.

    I’ve been assuming Sam Altman has absolutely no real belief that LLMs would lead to AGI and has instead been cynically cashing in on the sci-fi hype, but OpenAI’s choices don’t make any long-term sense if AGI isn’t coming. The obvious explanation is that at this point he simply plans to grift and hype (while staying technically within the bounds of legality) to buy himself a few years of personal enrichment. Even asking what his “real beliefs” are gives him too much credit.

    Just to remind everyone: the market can stay irrational longer than you can stay solvent!


  • Poor historical accuracy in favor of meme potential is why our reality is so comically absurd. You can basically use the simulation hypothesis to justify anything you want by proposing some weird motive or goal for the simulators. It almost makes God-of-the-gaps religious arguments seem sane and well-founded by comparison!


  • Within the world-building of the story, the way the logic is structured makes sense in a ruthlessly utilitarian way (although Scott’s narration and framing are way too sympathetic to the murderously autistic angel that did it), but in the context of the sort of racism Scott likes to promote outside the story, yeah, it is really bad.

    We had a previous discussion of Unsong on the old site. (Kind of cringing at the fact that I liked the story at one point and only gradually noticed all the problematic content and the poor writing quality.)


  • I’ve seen this concept mixed with the simulation “hypothesis”. The logic goes that if future simulators were running a “rescue simulation” but only cared (or at least cared more) about the interesting or more agentic people (i.e. rich/white/Westerner/lesswronger), they might fully simulate only those people and leave simpler nonsapient scripts/algorithms piloting everyone else (i.e. poor/irrational/foreign people).

    So they’re basically positing a mechanism by which they are the only real people and everyone else is literally an NPC.


  • Depends what you mean by “steelman”. If you take their definition at its word, then they fail to even try all the time; just look at any of their attempts at understanding leftist writing or thought. Of course, in practice it often means “entirely rebuild the opposing argument into something different” (because they don’t have a basic humanities education, or don’t want to actually read leftist thought properly), and they can’t resist doing that!