

It’s possible someone specifically picked the highest IQ that wouldn’t need a second planet Earth to make the statistics work.
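For anyone who wants to check the math: a quick back-of-the-envelope sketch, assuming the conventional IQ ~ Normal(100, 15) and roughly 8 billion people (both of those are just the standard convention, not anything from the original post):

```python
# Back-of-the-envelope: the highest IQ whose expected headcount on one
# planet Earth is still at least one person, assuming the conventional
# IQ ~ Normal(mean=100, sd=15) and a population of ~8 billion.
from statistics import NormalDist

POPULATION = 8_000_000_000

# z-score whose upper-tail probability is one in eight billion
z = NormalDist().inv_cdf(1 - 1 / POPULATION)
iq = 100 + 15 * z
print(f"rarest one-per-Earth IQ ~ {iq:.0f}")  # ~ 195

# Claim anything higher and the statistics need a second Earth's
# worth of people to back it up.
```

Which lands right around 195, so make of the usual self-reported numbers what you will.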
It’s not always easy to distinguish between existentialism and a bad mood.
Siskind did a review too, basically giving it the ‘their hearts are in the right place, but… [read AI 2027 instead]’ treatment. Then they go at it a bit with Yud in the comments, where Yud comes off as a bitter dick, but their actual disagreements are just filioque shit. Also, they both seem to agree that a worldwide moratorium on AI research, one that buys us time to breed/genetically engineer superior-brained humans to fix our shit, is the way to go.
https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/154920454
https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/154927504
Also notable that apparently Siskind thinks nuclear non-proliferation sorta worked because people talked it out and decided to be mature about it, rather than being scared shitless of MAD, so AI non-proliferation, presumably by appointing a rationalist Grand Inquisitor in charge of all human scientific progress, is an obvious solution.
All the stuff about ASI is basically theology, or trying to do armchair psychology on Yog-Sothoth. If autonomous ASI ever happens, it’s kind of definitionally impossible to know what it’ll do; it’s beyond us.
The ‘simulating synapses is hard’ stuff I can take or leave. To argue by analogy, it’s not like getting an artificial feather exactly right was ever a bottleneck to developing air travel once we got the basics of aerodynamics down.
Nice. Here’s the bluesky account as well.
Some quality wordsmithing found in the wild:
@MosesSternstein (quote-tweeted): AI-Capex is the everything cycle, now.
Just under 50% of GDP growth is attributable to AI Capex
@bigblackjacobin: Almost certainly the greatest misallocation of capital you or I will ever see. There’s no justification for this however you cut it but the beatings will continue until a stillborn god is born.
Remember, when your code doesn’t compile, it might mean you made a mistake in coding, or your code is about to become self-aware.
Good analogy actually.
The arguments made against the book in the review are that it doesn’t make the case for LLMs being capable of independent agency, that it reduces all material concerns of an AI takeover to broad claims of ASI being indistinguishable from magic, and that its proposed solutions are dumb and unenforceable (again with the global GPU prohibition and the unilateral bombing of rogue datacenters).
Towards the end they also note that the x-risk framing is a cognitive short-circuit that causes the faithful to ignore more pressing concerns, like the impending climate catastrophe, in favor of a mostly fictitious problem like AI doom, but that observation isn’t really part of their core thesis against the book.
They also seem to broadly agree with the ‘hey, humans are pretty shit at thinking too, you know’ line of LLM apologetics.
“LLMs and humans are both sentence-producing machines, but they were shaped by different processes to do different work,” say the pair – again, I’m in full agreement.
But judging from the rest of the review I can see how you kind of have to be at least somewhat rationalist-adjacent to have a chance of actually reading the thing to the end.
The pair also suggest that signs of AI plateauing, as seems to be the case with OpenAI’s latest GPT-5 model, could actually be the result of a clandestine superintelligent AI sabotaging its competitors.
copium-intubation.tiff
Also this seems like the natural progression of that time Yud embarrassed himself by cautioning actual ML researchers to be wary of ‘sudden drops in loss function during training’, which was just an insanely uninformed thing to say out loud.
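For context, ‘watching the loss for sudden drops’ is something every training run already does as a matter of course; it’s a logging one-liner, not an eschatological early-warning system. A toy sketch, where the fake `train_step` and the 10x threshold are mine and purely illustrative:

```python
# Minimal sketch of 'watch the loss for sudden drops': a couple of lines
# of logging in any training loop. Model, data, and threshold are all
# placeholder stand-ins, not anyone's real setup.
import random

SUDDEN_DROP_FACTOR = 10.0  # arbitrary illustrative threshold: flag a 10x drop

def train_step(step: int) -> float:
    """Stand-in for a real optimizer step; returns a decaying, noisy loss."""
    return 2.0 / (step + 1) + random.uniform(0.0, 0.05)

prev_loss = float("inf")
for step in range(1000):
    loss = train_step(step)
    if prev_loss != float("inf") and prev_loss / max(loss, 1e-12) > SUDDEN_DROP_FACTOR:
        print(f"step {step}: loss fell from {prev_loss:.4f} to {loss:.4f}")
        # In practice a drop like this flags a data bug, leaked labels, or a
        # learning-rate event -- you check your run logs, not the heavens.
    prev_loss = loss
```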
the only people who like prediction markets […]
Apparently Donald Trump Jr. has found his way onto the payroll of a couple of the bigger prediction markets, so they seem to be doing their darndest to change that.
assuming prediction markets are magic
Bet it’s more like assuming it will incentivize people with magical predicting genes to reproduce more, so we can get a Kwisatz Haderach to fight AI down the line.
It’s always dumber than expected.
Apparently the hacker who publicized a copy of the no-fly list was leaked an article containing Yarvin’s home address, which she promptly posted on bluesky. Won’t link because I don’t think we’ve had the doxxing discussion, but it’s easily findable now.
I’m mostly posting this because the article featured this photo:
I figure eventually some proprietary work would make it into the wild via autocomplete. Copilot used to be cool with inserting other programmers’ names and emails in author notes, for instance, though they seem to have started filtering that out in the meantime.
Copilot licenses let you specifically opt out of your prompts and your code being used to train new models, so it would be a big deal.
We should be so lucky; the ensuing barrage of lawsuits about illegally cribbing company IP would probably make the book authors’ class action damages pale in comparison.
This is too corny and overdramatic for my tastes. It reads a bit like satire, complete with piling on the religious undertones there at the end.
Getting love-bombed at that rationalist con he went to recently probably didn’t help matters.
The common clay of the new west:
ChatGPT has become worthless
[Business & Professional]
I’m a paid member and asked it to help me research a topic and write a guide and it said it needed days to complete it. That’s a first. Usually it could do this task on the spot.
It missed the first deadline and missed 5 more. 3 weeks went by and it couldn’t get the task done. Went to Claude and it did it in 10 minutes. No idea what is going on with ChatGpt but I cancelled the pay plan.
Anyone else having this kind of issue?
if one person came out and spilled the beans, it’d suggest that there might be more people who didn’t
I mean, after his full-throated defense of Lynn’s IQ map (featuring disgraced nazi college dropout Cremieux/TP0 as a subject matter expert), what other beans might be interesting enough to spill? Did he lie about becoming a kidney donor?
I think the emails are important because a) they make a case that, for all his performative high-mindedness and deference to science and whinging about polygenic selection, he came to his current views through the same white supremacist/great replacement milieu as every other pretentious gutter racist out there, and b) he is so consistently disingenuous that the previous statement might not even matter much… he might honestly believe that priming impressionable well-off techies towards blood-and-soil fascism precursors was worth it if we end up allowing unchecked human genetic experimentation to come up with 260 IQ babies that might have a fighting chance against shAItan.
I guess it could come out that, despite his habit of including conflict-of-interest disclosures, his public views may be way more for sale than is generally perceived.
190 IQ is when you verb ‘asymptote’ to avoid saying ‘almost’.