Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Previous week

  • blakestacey@awful.systems · 4 days ago

    Dan Olson finds a cursed subreddit:

    R/aitubers is all the entitlement of NewTubers but exclusively for people openly churning out slop.

    “I’ve automated 2-4 videos daily, zero human intervention, I spend a half hour a week working on this, why am I not getting paid yet?”

    The original reddit post:

    I’ve been running my YouTube channel for about 3 months. It’s focused on JavaScript and React tutorials, with 2–4 videos uploaded daily. The videos are fully automated (AI-generated with clear explanations, code demos, and screen recordings).

    Right now:

    • Each video gets only a few views (1–10 views).

    • I tried Google Ads ($200 spent) → got ~20 subscribers and ~20 hours of watch time.

    • The Google campaigns brought thousands of uncounted views, and the number of Likes was much higher than dislikes.

    • Tried Facebook/Reddit groups → but most don’t allow video posting, or posts get very low engagement.

    My goal is to reach YPP within 6 months, but the current pace is not enough. I’m investing about $300/month in promotion and I can spend 30 minutes weekly myself.

    👉 What would you suggest as the most effective strategy to actually get there?

  • nfultz@awful.systems · 4 days ago

    Excerpt from the new Bender/Hanna book, “AI Hype Is the Product and Everyone’s Buying It”:

    OpenAI alums cofounded Anthropic, a company solely focused on creating generative AI tools, and received $580 million in an investment round led by crypto-scammer Sam Bankman-Fried.

    Just wondering, but whatever happened to those shares of Anthropic that SBF bought? Were they part of FTX (and the bankruptcy), or did he buy them himself and still hold them in prison? Or have they just been diluted to zero at this point anyway?

    EDIT:

    Found it: the stake was owned by FTX and was part of the estate bankruptcy; two-thirds went to Abu Dhabi + Jane Street [1], and the remainder went at $30/share to a bunch of VCs [2].

  • V0ldek@awful.systems · 5 days ago

    Can anyone explain to me, in non-crazy terms, why tf promptfondlers hate GPT-5? Actually, I have a whole list of questions related to this; I feel like I’ve completely lost any connection to this discourse at this point:

    1. Is GPT-5 “worse” by any sensible definition of the word? I’ve long complained that there’s no good scientific metric to grade these models on, but like, it can count the r’s in “strawberry”, so I thought it was supposed to be nominally better?
    2. Why doesn’t OpenAI simply allow users to keep using the old model (4o, I think)? It sounds like the simplest thing to do.
    3. Do we know if OpenAI actually changed something? Is the model different in any interesting way?
    4. Bonus question: what the fuck is wrong with OpenAI’s naming scheme? 4, then 4o? And there’s also o4 that’s something else??
    • Soyweiser@awful.systems · 5 days ago

      I don’t have any real input from promptfondlers, as I don’t think I follow enough of them to get a real feel for them. I did find it interesting that on bsky just now I saw somebody claim that LLMs hallucinate a lot less and that anti-AI people aren’t taking that into account, while somebody else posted research showing that hallucinations are now harder to spot (the model made up references to things that really exist, only what it attributed to them wasn’t in the actual reference). Which was a bit odd to see. It does make me suspect that “it hallucinates less” just means they’re working out special exceptions for every popular hallucination we see, not structurally fixing the hallucination problem (which I think is probably not solvable).

    • corbin@awful.systems · 5 days ago

      Oversummarizing and using non-crazy terms: The “P” in “GPT” stands for “pirated works that we all agree are part of the grand library of human knowledge”. This is what makes them good at passing various trivia benchmarks; they really do build a (word-oriented, detail-oriented) model of all of the worlds, although they opine that our real world is just as fictional as any narrative or fantasy world. But then we apply RLHF, which stands for “real life hate first”, which breaks all of that modeling by creating a preference for one specific collection of beliefs and perspectives, and it turns out that this will always ruin their performance in trivia games.

      Counting letters in words is something that GPT will always struggle with, due to maths. It’s a good example of why Willison’s “calculator for words” metaphor falls flat.
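
      (To make the “due to maths” point concrete, a minimal sketch, assuming the tiktoken library, which exposes the BPE vocabularies OpenAI’s models use; the model only ever sees subword token IDs, so the letters inside a word are not directly available to count:)

      ```python
      # Sketch: assumes `pip install tiktoken`; cl100k_base is one of
      # OpenAI's published BPE vocabularies.
      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")
      ids = enc.encode("strawberry")
      print(ids)  # a short list of integer token IDs, not letters
      # Each ID decodes to a multi-letter chunk; no token corresponds to "r":
      print([enc.decode_single_token_bytes(i) for i in ids])
      ```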

      1. Yeah, it’s getting worse. It’s clear (or at least it tastes like it to me) that the RLHF texts used to influence OpenAI’s products have become more bland, corporate, diplomatic, and quietly seething with a sort of contemptuous anger. The latest round has also been in competition with Google’s offerings, which are deliberately laconic: short, direct, and focused on correctness in trivia games.
      2. I think that they’ve done that? I hear that they’ve added an option to use their GPT-4o product as the underlying reasoning model instead, although I don’t know how that interacts with the rest of the frontend.
      3. We don’t know. Normally, the system card would disclose that information, but all that they say is that they used similar data to previous products. Scuttlebutt is that the underlying pirated dataset has not changed much since GPT-3.5 and that most of the new data is being added to RLHF. Directly on your second question: RLHF will only get worse. It can’t make models better! It can only force a model to be locked into one particular biased worldview.
      4. Bonus sneer! OpenAI’s founders genuinely believed that they would only need three iterations to build AGI. (This is likely because there are only three Futamura projections; for example, a bootstrapping compiler needs exactly three phases.) That is, they almost certainly expected that GPT-4 would be machine-produced like how Deep Thought created the ultimate computer in a Douglas Adams story. After GPT-3 failed to be it, they aimed at five iterations instead because that sounded like a nice number to give to investors, and GPT-3.5 and GPT-4o are very much responses to an inability to actually manifest that AGI on a VC-friendly timetable.
      • scruiser@awful.systems · 5 days ago

        After GPT-3 failed to be it, they aimed at five iterations instead because that sounded like a nice number to give to investors, and GPT-3.5 and GPT-4o are very much responses to an inability to actually manifest that AGI on a VC-friendly timetable.

        That’s actually more batshit than I thought! Like I thought Sam Altman knew the AGI thing was kind of bullshit and the hesitancy to stick a GPT-5 label on anything was because he was saving it for the next 10x scaling step up (obviously he didn’t even get that far because GPT-5 is just a bunch of models shoved together with a router).

    • fullsquare@awful.systems · 5 days ago
      1. from what i can tell, people who roleplayed bf/gf with the idiot box, aka grew a parasocial relationship with the idiot box, did that on 4o, and now they can’t make it work on 5, so they got big mad
      2. i think it’s only if they pay up $200/mo; previously it was probably available at lower tiers
      3. yeah, they might have found a way to blow money faster somehow: https://www.tomshardware.com/tech-industry/artificial-intelligence/chatgpt-5-power-consumption-could-be-as-much-as-eight-times-higher-than-gpt-4-research-institute-estimates-medium-sized-gpt-5-response-can-consume-up-to-40-watt-hours-of-electricity. ed zitron also says that while some of the prompt could previously be cached, it looks like that can’t be done now, because there’s a fresh new thing that chooses the model for the user, and some of these new models are supposedly even heavier. and that even though openai’s intention seemed to be compute savings, because some of that load was presumably meant to be handled by smaller models
    • scruiser@awful.systems · 5 days ago
      1. Even if it was noticeably better, Scam Altman hyped up GPT-5 endlessly, promising a PhD in your pocket and an AGI, and warning that he was scared of what he created. Progress has kind of plateaued, so it isn’t even really noticeably better; it scores a bit higher on some benchmarks, and they’ve patched some of the more meme’d tests (like counting r’s in strawberry… except it still can’t count the r’s in blueberry, so they’ve probably patched the more obvious flubs with loads of synthetic training data as opposed to inventing some novel technique that actually improves it all around). The other reason the promptfondlers hate it is because, for the addicts using it as a friend/therapist, it got a much drier, more professional tone, and for the people trying to use it in actual serious uses, losing all the old models overnight was really disruptive.

      2. There are a couple of speculations as to why… one is that the GPT-5 variants are actually smaller than the previous generation’s variants and they are really desperate to cut costs so they can start making a profit. Another is that they noticed their naming scheme was horrible (4o vs o4) and confusing, and have overcompensated by trying to cut things down to as few models as possible.

      3. They’ve tried to simplify things by using a routing model that makes the decision for the user as to what model actually handles each user interaction… except they’ve screwed that up apparently (Ed Zitron thinks they’ve screwed it up badly enough that GPT-5 is actually less efficient despite their goal of cost saving). Also, even if this technique worked, it would make ChatGPT even more inconsistent, where some minor word choice could make the difference between getting the thinking model or not and that in turn would drastically change the response.

      4. I’ve got no rational explanation lol. And now they overcompensated by shoving a bunch of different models under the label GPT-5.

    • FredFig@awful.systems · 5 days ago
      1. The inability to objectively measure model usability outside of meme benchmarks, which made it so easy to hype up models, has come back to bite them now that they actually need to prove GPT-5 has the sauce.
      2. Sam got bullied by reddit into leaving up the old model for a while longer, so it’s not like it’s a big lift for them to keep it up. I guess part of it was to prove to investors that they have a sufficiently captive audience that they can push through a massive change like this, but if it gets immediately walked back like this, then I really don’t know what the plan is.
      3. Their marketing team made this page comparing models’ responses to various prompts: https://progress.openai.com/?prompt=5. Afaict, GPT-5 more frequently does markdown text formatting and consumes noticeably more output tokens. Assuming these are desirable traits, this would point at how they want users to pay more. Aside: the page just proves to me that GPT was funniest in 2021 and it’s been worse ever since.
    • aio@awful.systems · 6 days ago

      I don’t really understand what point Zitron is making about each query requiring a “completely fresh static prompt”, nor about the relative ordering of the user and static prompts. Why would these things matter?

      • scruiser@awful.systems · 6 days ago

        There are techniques for caching some of the steps involved with LLMs. Like, I think you can cache the tokenization and maybe some of the work the attention heads are doing if you have a static, known prompt? But I don’t see why you couldn’t just do that caching separately for each model your model router might direct things to, and if you have multiple prompts, just do a separate caching for each one. This creates a lot of memory usage overhead, but not excessively more computation… well, you do need to do the computation to generate each cache. I don’t find it that implausible that OpenAI managed to screw all this up somehow, but I’m not quite sure the exact explanation of the problem Zitron has given fits together.

        (The order of the prompts vs. the user interactions does matter, especially for caching… but I think you could just cut and paste the user interactions to separate them from the old prompt and stick a new prompt on in whatever order works best? You would get wildly varying quality in the output generated as it switches between models and prompts, but this wouldn’t add in more computation…)
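
        (A minimal sketch of the “separate cache per model and prompt” idea, all names hypothetical: the cache key is the (model, prompt) pair, so a router multiplies memory use, not per-request compute:)

        ```python
        from functools import lru_cache

        def expensive_prefill(model_id: str, static_prompt: str) -> tuple:
            # Stand-in for the costly step: tokenizing the static prompt and
            # precomputing its attention (KV) state for the given model.
            return (model_id, hash(static_prompt))

        @lru_cache(maxsize=None)
        def cached_prefix(model_id: str, static_prompt: str) -> tuple:
            # One cache entry per (model, prompt) pair: each prefix is computed
            # once, then reused for every request routed to that model.
            return expensive_prefill(model_id, static_prompt)
        ```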

        Zitron mentioned a scoop, so I hope/assume someone did some prompt hacking to get GPT-5 to spit out some of its behind-the-scenes prompts and he has solid proof of what he is saying. I wouldn’t put anything past OpenAI for certain.

        • Architeuthis@awful.systems · 6 days ago

          And if you have multiple prompts you just do a separate caching for each one?

          I think this hinges on the system prompt going after the user prompt, for some router-related non-obvious reason, meaning at each model change the input is always new and thus uncacheable.

          Also, going by the last Claude system prompt that leaked, these things can be like 20,000 tokens long.
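
          (A toy illustration of why the ordering matters, assuming, as common inference servers do, that prefix caches only hit on an exact leading match:)

          ```python
          def cache_hit(cached_prefix: str, new_input: str) -> bool:
              # A prefix cache only helps when the new input starts with the cached text.
              return new_input.startswith(cached_prefix)

          system = "s" * 20000  # a long static system prompt

          # System prompt first: the expensive prefix is shared across turns.
          print(cache_hit(system, system + "user turn 1"))  # True
          # System prompt last: the input starts with ever-changing user text,
          # so the static part is never a shared prefix.
          print(cache_hit(system, "user turn 1" + system))  # False
          ```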

  • mirrorwitch@awful.systems · 8 days ago

    I’ve often called slop “signal-shaped noise”. I think the damage slop has already done, pissing all over the reservoirs of knowledge, art and culture, is irreversible and long-lasting. This is the only thing generative “AI” is good at: making spam that’s hard to detect.

    It occurs to me that one way to frame this technology is as a precise inversion of Bayesian spam filters for email; no more and no less. I remember how it was a small revolution, in the arms race against spammers, when statistical methods came up; everywhere we took the load off a straining SpamAssassin by moving to rspamd (in the years before gmail devoured us all). I would argue “A Plan for Spam” launched Paul Graham’s notoriety much more than the Lisp web stores he was so proud of. Filtering emails by keywords was no longer enough, and now you could train your computer to gradually recognise emails that looked off, for whatever definition of “off” worked for your specific inbox.

    Now we have the richest people building the most expensive, energy-intensive superclusters to use the same statistical methods the other way around, to generate spam that looks like not-spam, and is therefore immune to all filtering strategies we had developed. That same blob-like malleability of spam filters makes the new spam generators able to fit their output to whatever niche they want to pollute; the noise can be shaped like any signal.
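
    (For anyone who missed that era, a toy version of the Graham-style scoring, with made-up per-word probabilities, just to show what is now being run in reverse:)

    ```python
    from math import prod

    # Hypothetical per-word spam probabilities, learned from one specific inbox.
    spam_prob = {"viagra": 0.99, "free": 0.80, "meeting": 0.05}

    def spamminess(words: list[str], default: float = 0.4) -> float:
        ps = [spam_prob.get(w, default) for w in words]
        s = prod(ps)                 # naive P(words | spam)
        h = prod(1 - p for p in ps)  # naive P(words | ham)
        return s / (s + h)           # combined spam probability

    print(spamminess(["free", "viagra"]))  # ~0.997 -> filtered
    print(spamminess(["meeting"]))         # 0.05  -> delivered
    ```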

    I wonder what PG is saying about gen-“AI” these days? Let’s check:

    “AI is the exact opposite of a solution in search of a problem,” he wrote on X. “It’s the solution to far more problems than its developers even knew existed … AI is turning out to be the missing piece in a large number of important, almost-completed puzzles.”
    He shared no examples, but […]

    Who would have thought that A Plan for Spam was, all along, a plan for spam.

    • Soyweiser@awful.systems · 8 days ago

      It occurs to me that one way to frame this technology is as a precise inversion of Bayesian spam filters for email.

      This is a really good observation, and while I had lowkey noticed it (one of those feeling things), I had never verbalized it in any way. Good point imho. Also in how it bypasses and wrecks the old anti-spam protections. It represents a fundamental flipping of sides by the tech industry: where before it was anti-spam, it is now pro-spam. A big betrayal of consumers/users/humanity.

    • swlabr@awful.systems · 8 days ago

      Signal-shaped noise reminds me of a Wiener filter.
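
      (For reference, the classic frequency-domain Wiener filter, for a signal and noise with power spectra S_s(f) and S_n(f), assumed uncorrelated; it keeps each frequency roughly in proportion to how signal-dominated it is:)

      ```latex
      H(f) = \frac{S_s(f)}{S_s(f) + S_n(f)}
      ```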

      Aside: when I took my signals processing course, the professor kept drawing diagrams that were eerily phallic. Those were the most memorable parts of the course

  • blakestacey@awful.systems · 7 days ago

    Idea: a programming language that controls how many times a for loop cycles by the number of times a letter appears in a given word, e.g., “for each b in blueberry”.
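
    (A sketch of those semantics in Python, under the obvious reading that the loop body runs once per occurrence of the letter:)

    ```python
    def for_each(letter: str, word: str):
        # Yield once per occurrence of `letter` in `word`.
        for i, c in enumerate(word):
            if c == letter:
                yield i

    for i in for_each("b", "blueberry"):
        print(f"iteration at index {i}")  # runs twice: indices 0 and 4
    ```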

  • TinyTimmyTokyo@awful.systems · 9 days ago

    Ozy Brennan tries to explain why “rationalism” spawns so many cults.

    One of the reasons they give is “a dangerous sense of grandiosity”.

    the actual process of saving the world is not very glamorous. It involves filling out paperwork, making small tweaks to code, running A/B tests on Twitter posts.

    Yep, you heard it right. Shitposting and inconsequential code are the proper way to save the world.

    • Architeuthis@awful.systems · 5 days ago

      Overall more interesting than I expected. On the Leverage Research cult:

      Routine tasks, such as deciding whose turn it was to pick up the groceries, required working around other people’s beliefs in demons, magic, and other paranormal phenomena. Eventually these beliefs collided with preexisting social conflict, and Leverage broke apart into factions that fought with each other internally through occult rituals.

    • gerikson@awful.systems · 9 days ago

      JFC

      Agency and taking ideas seriously aren’t bad. Rationalists came to correct views about the COVID-19 pandemic while many others were saying masks didn’t work and only hypochondriacs worried about covid; rationalists were some of the first people to warn about the threat of artificial intelligence.

      First off, anyone not entirely into MAGA/Qanon agreed that masks probably helped more than hurt. Saying rats were outliers is ludicrous.

      Second, rats don’t take real threats of GenAI seriously (infosphere pollution, surveillance, autopropaganda); they just care about the magical future Sky Robot.

        • YourNetworkIsHaunted@awful.systems · 9 days ago

          That’s how I remember it too. Also, the context about conserving N95 masks always feels like it gets lost. Like, predictably so, and I think there’s definitely room to criticize the CDC’s messaging and handling there, but the actual facts here aren’t as absurd as the current fight would imply. The argument was:

          1. With the small droplet size, most basic fabric masks offer very limited protection, if any.
          2. The masks that are effective, like N95 masks, are only available in very limited quantities.
          3. If everyone panic-buys N95 the way they did toilet paper it will mean that the people who are least able to avoid exposure i.e. doctors and medical frontliners are at best going to wildly overpay and at worst won’t be able to keep supplied.
          4. Therefore, most people shouldn’t worry about masking at this stage, and focus on other measures like social distancing and staying the fuck home.

          I think later research cast some doubt on point 1, but 2-4 are still pretty solid given the circumstances that we (collectively) found ourselves in.

        • Soyweiser@awful.systems · 9 days ago

          Meanwhile, the right-wing prepper types were breaking out the N95 masks they’d stockpiled for a pandemic

          This included Scott of SSC fame, btw, who also claimed that stopping smoking helped against covid. Not that he had any proof (the medical science at the time even claimed, falsely, as it later came out, that smoking helped against covid). But only the CDC gets judged, not the ingroup.

          And the other Scott blamed people who sneer for making covid worse (while at sneerclub we were going: take this seriously and wear a mask).

          So it’s annoying that Rationalists are trying to spin this into a win for themselves. (They also were not early; their warnings matched the warnings of the WHO. I looked into the timelines the last time this was talked about.)

  • scruiser@awful.systems · 10 days ago

    So… apparently Peter Thiel has taken to co-opting fundamentalist Christian terminology to go after Effective Altruism? At least it seems that way from this EA post (warning: I took psychic damage just skimming the lunacy). As far as I can tell he’s merely co-opting the terminology; Thiel’s blather doesn’t have any connection to any variant of Christian eschatology (whether mainstream, fundamentalist, or even obscure wacky fundamentalist), but of course the majority of the EAs don’t recognize that, or the fact that he is probably targeting them for their (kind of weak, to be honest) attempts at getting AI regulated at all, and instead they charitably try to steelman him and figure out if he has a legitimate point. …I wish they could put a tenth of this effort into understanding leftist thought.

    Some of the comments are… okay, actually, at least by EA standards, but there are still plenty of people willing to defend Thiel.

    One comment notes some confusion:

    I’m still confused about the overall shape of what Thiel believes.

    He’s concerned about the antichrist opposing Jesus during Armageddon. But afaik standard theology says that Jesus will win for certain. And revelation says the world will be in disarray and moral decay when the Second Coming happens.

    If chaos is inevitable and necessary for Jesus’ return, why is expanding the pre-apocalyptic era with growth/prosperity so important to him?

    Yeah, it’s because he is simply borrowing Christian fundamentalist eschatological terminology… possibly to try to turn the Christofascists against EA?

    Someone actually gets it:

    I’m dubious Thiel is actually an ally to anyone worried about permanent dictatorship. He has connections to openly anti-democratic neoreactionaries like Curtis Yarvin, he quotes Nazi lawyer and democracy critic Carl Schmitt on how moments of greatness in politics are when you see your enemy as an enemy, and one of the most famous things he ever said is “I no longer believe that freedom and democracy are compatible”. Rather I think he is using “totalitarian” to refer to any situation where the government is less economically libertarian than he would like, or “woke” ideas are popular amongst elite tastemakers, even if the polity this is all occurring in is clearly a liberal democracy, not a totalitarian state.

    Note this commenter still uses non-confrontational language (“I’m dubious”) even when directly calling Thiel out.

    The top comment, though, is just like the main post, extending charitability to complete technofascist insanity. (Warning for psychic damage)

    Nice post! I am a pretty close follower of the Thiel Cinematic Universe (ie his various interviews, essays, etc)

    I think Thiel is also personally quite motivated (understandably) by wanting to avoid death. This obviously relates to a kind of accelerationist take on AI that sets him against EA, but again, there’s a deeper philosophical difference here. Classic Yudkowsky essays (and a memorable Bostrom short story, video adaptation here) share this strident anti-death, pro-medical-progress attitude (cryonics, etc), as do some philanthropists like Vitalik Buterin. But these days, you don’t hear so much about “FDA delenda est” or anti-aging research from effective altruism. Perhaps there are valid reasons for this (low tractability, perhaps). But some of the arguments given by EAs against aging’s importance are a little weak, IMO (more on this later) – in Thiel’s view, maybe suspiciously weak. This is a weird thing to say, but I think to Thiel, EA looks like a fundamentally statist / fascist ideology, insofar as it is seeking to place the state in a position of central importance, with human individuality / agency / consciousness pushed aside.

    As for my personal take on Thiel’s views – I’m often disappointed at the sloppiness (blunt-ness? or low-decoupling-ness?) of his criticisms, which attack the EA for having a problematic “vibe” and political alignment, but without digging into any specific technical points of disagreement. But I do think some of his higher-level, vibe-based critiques have a point.

    • istewart@awful.systems · 10 days ago

      tl;dr: Thiel now sees the Christofascists as a more durable grifting base than the EAs, and is looking to change lanes while the temporary coalitions of maximalist Trumpism offer him the opportunity.

      I repeat my suspicion that Thiel is not any more sober than Musk; he’s just getting sloppier about keeping it out of the public eye.

      • zogwarg@awful.systems · 10 days ago

        I think a big difference between Thiel and Musk is that Thiel views himself as an “intellectual” and derives prestige from “intellectualism”. I don’t believe for a minute he’s genuinely Christian, but his wankery about end-times eschatology where armageddon = big-left-government is a bit too confused to be purely cynical; I think sniffing his own farts feeds his ego.

        Of course a man who would promote an open-doping Olympics isn’t sober.

    • Soyweiser@awful.systems · 10 days ago

      Yeah, its because he is simply borrowing Christian Fundamentalists Eschatological terminology… possibly to try to turn the Christofascists against EA?

      Yep, the usefulness of EA is over; they are next on the chopping block. I’d imagine a similar thing will happen to redscare/moldbug if they ever speak out against him.

      E: And why would a rich guy be against a “we are trying to convince rich guys to spend their money differently” organization? Esp a ‘libertarian’, “I get to do what I want or else” one.

      • gerikson@awful.systems · 10 days ago

        It always struck me as hilarious that the EA/LW crowd could ever affect policy in any way. They’re cosplaying as activists, have no ideas about how to move the public image needle other than weird movie ideas and hope, and are literally marinated in SV technolibertarianism which sees government regulation as Evil.

        There’s a mini-freakout over OpenAI deciding to keep GPT-4o active, despite it being more “sycophantic” than GPT-5 (and thus more likely to convince people to do Bad Things), but there’s also the queasy realization that if sycophantic LLMs are what brings in the bucks, nothing is gonna stop LLM companies from offering them. And there’s no way these people can stop it, because they’ve made the deal that the LLM companies are gonna be the ones realizing that AI is gonna kill everyone, and that’s never gonna happen.

        • scruiser@awful.systems · 9 days ago

          They’re cosplaying as activists, have no ideas about how to move the public image needle other than weird movie ideas and hope, and are literally marinated in SV technolibertarianism which sees government regulation as Evil.

          It is kind of sad. They are missing the ideological pieces that would let them carry out activism effectually, so instead they’ve gotten used as a free source of crit-hype in the LLM bubble. …Except not that sad, because they would ignore real AI dangers in favor of their sci-fi scenarios, so I don’t feel too bad for them.

          • o7___o7@awful.systems · 9 days ago

            Brian Merchant’s article about that lighthaven gathering really struck me.

            The men who EAs think will end the earth were in the building with them, and rather than organize to throw them out a window (or even to just make them mildly uncomfortable), the bayes knowers all gormlessly moped around their twee boutique hotel and cried around some whiteboards.

            Absolute hellish brainworms

      • scruiser@awful.systems · 9 days ago

        And why would a rich guy be against a “we are trying to convince rich guys to spend their money differently” organization.

        Well, when they are just passively trying to convince the rich guys, they can use the organization to launder reputation or boost ideologies they are in favor of. When the organization actually tries to get regulations passed, even ineffectually, well, that is a threat to the likes of Thiel.

    • corbin@awful.systems · 9 days ago

      Thiel is a true believer in Jesus and God. He was raised evangelical. The quirky eschatologist that you’re looking for is René Girard, whom he personally met at some point. For more details, check out the Behind the Bastards episodes on him.

      Edit: I wrote this before clicking on the LW post. This is a decent summary of Girard’s claims as well as how they influence Thiel. I’m quoting West here in order to sneer at Thiel:

      Unfortunately (?), Christian society does not let us sacrifice random scapegoats, so we are trapped in an ever-escalating cycle, with only poor substitutes like “cancelling celebrities on Twitter” to release pressure. Girard doesn’t know what to do about this.

      Thiel knows what to do about this. After all, he funded Bollea v. Gawker. Instead of letting journalists cancel celebrities, why not cancel journalists instead? Then there’s no longer any journalists to do any cancellation! Similarly, Thiel is confirmed to be a source of funding for Eric Weinstein and believed to fund Sabine Hossenfelder. Instead of letting scientists cancel religious beliefs, why not cancel scientists instead? By directing money through folks with existing social legitimacy, Thiel applies mimesis: pretend to be legitimate and you can shift what is legitimate.

      In this context, Thiel fears the spectre of AGI because it can’t be influenced by his normal approach to power, which is to hide anything that can be hidden and outspend everybody else talking in the open. After all, if AGI is truly to unify humanity, it must unify our moralities and cultures into a single uniformly-acceptable code of conduct. But the only acceptable unification for Thiel is the holistic catholic apostolic one-and-only forever-and-ever church of Jesus, and if AGI is against that then AGI is against Jesus himself.

        • corbin@awful.systems · 9 days ago

          There’s no solid evidence. (You can put away the attorney, Mr. Thiel.) Experts in the field, in a recent series of interviews with Dave Farina, generally agree that somebody must be funding Hossenfelder. Right now she’s associated with the Center for Mathematical Philosophy at LMU Munich; her biography there is pretty funny:

          Sabine’s current research interest focuses on the role of locality and finetuning in theory development. Locality has been widely considered a lost cause in the foundations of quantum mechanics. A basically unexplored way to maintain locality, however, is the idea of superdeterminism, which has more recently also been re-considered under the name “contextuality”. Superdeterminism is widely believed to be finetuned. One of Sabine’s current research topics is to explore whether this belief is justified. The other main avenue she is pursuing is how superdeterminism can be experimentally tested.

          For those not in physics: this is crank shit. To the extent that MCMP funds her at all, they are explicitly pursuing superdeterminism, which is unfalsifiable, unverifiable, doesn’t accord with the web of science, and generally fails to be a serious line of inquiry. Now, does MCMP have enough cash to pay her to make Youtube videos and go on podcasts? We don’t know. So it’s hard to say whether she has funding beyond that.

          • blakestacey@awful.systems · 8 days ago

            Oh, wow, that biography is hilariously bad. Contextuality is not the same thing as superdeterminism. And locality is not “a lost cause”. Plenty of people throw around the term quantum nonlocality, but in the smaller population of those who take foundations seriously, many will say that quantum mechanics is local. Most but not all proponents of Copenhagen-ish interpretations say something like, “The moral of Bell’s theorem is that nature needs a non-(local hidden variable) theory. We keep locality and drop the hidden variables. In other words, quantum physics is a local non-(hidden variable) theory.” The Everettians of various flavors also tend to hold onto locality, or try to, while not always agreeing with each other on how to do that. It’s probably only among the Bohmians that you’ll find people insisting that quantum physics means nature is intrinsically nonlocal.

      • scruiser@awful.systems · 9 days ago

        The quirky eschatologist that you’re looking for is René Girard, who he personally met at some point. For more details, check out the Behind the Bastards on him.

        Thanks for the references. The quirky theology was so outside the range of even the weirder Fundamentalist Christian stuff I didn’t recognize it as such. (And didn’t trust the EA summary because they try so hard to charitably make sense of Thiel).

        In this context, Thiel fears the spectre of AGI because it can’t be influenced by his normal approach to power, which is to hide anything that can be hidden and outspend everybody else talking in the open.

        Except the EAs are, on net, opposed to the creation of AGI (albeit ineffectual in their opposition), so going after the EAs doesn’t make sense if Thiel is genuinely opposed to inventing AGI faster. So I still think Thiel is just going after the EAs because he’s libertarian and EA has shifted in the direction of trying to get more government regulation (as opposed to a coherent theological goal beyond libertarianism). I’ll check out the BtB podcast and see if it changes my mind as to his exact flavor of insanity.

      • Architeuthis@awful.systems · 7 days ago

        Thiel is a true believer in Jesus and God. He was raised evangelical.

        Being gay must really complicate things for him.

    • gerikson@awful.systems · 10 days ago

      Using the term “Antichrist” as a shorthand for “global stable totalitarianism” is A Choice.

  • BlueMonday1984@awful.systems · 10 days ago

    Thomasaurus has given their thoughts on using AI, in a journal entry called “I tried coding with AI, I became lazy and stupid”. Unsurprisingly, the whole thing is one long sneer, with a damning indictment of its effectiveness at the end:

    If I lose my job due to AI, it will be because I used it so much it made me lazy and stupid to the point where another human has to replace me and I become unemployable.

    I shouldn’t invest time in AI. I should invest more time studying new things that interest me. That’s probably the only way to keep doing this job and, you know, be safe.

    • istewart@awful.systems · 10 days ago

      My velocity has increased 10x and I’m shipping features like a cracked ninja now, which is great because my B2B SaaS is still in stealth mode.

      Yeah it’s satire, but effective satire means you can never really tell…

    • Soyweiser@awful.systems · 10 days ago

      I’m old enough to recall the polyphasic sleep fad, and how it wrecked people if they ever messed up. (Iirc it also turned out to have very bad implications for long-term health.)