Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • maol@awful.systems
    link
    fedilink
    English
    arrow-up
    1
    ·
    13 minutes ago

    That Couple are in the news arís. surprisingly, the racist, sexist dog holds opinions that a racist, sexist dog could be expected to hold, and doesn’t think poor people should have more babies. He does want Native Americans to have more babies, though, because they’re “on the verge of extinction”, and he thinks of cultural groups and races as exhibits in a human zoo. Simone Collins sits next to her racist, sexist dog of a husband and explains how paid parental leave could lead to companies being reluctant to hire women (although her husband seems to think all women are good for us having kids).

    This gruesome twosome deserve each other: their kids don’t.

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      2
      ·
      4 hours ago

      Ow god, the bots pretended to be stuff like SA survivors and the like. Also the whole research is invalid just because they cannot tell that the reactions they will get are not also bot generated. What is wrong with these people.

    • swlabr@awful.systems
      link
      fedilink
      English
      arrow-up
      7
      ·
      edit-2
      6 hours ago

      They targeted redditors. Redditors. (jk)

      Ok but yeah that is extraordinarily shitty.

    • blakestacey@awful.systems
      link
      fedilink
      English
      arrow-up
      8
      ·
      7 hours ago

      In commenting, we did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible.

      If you can’t do your study ethically, don’t do your study at all.

    • istewart@awful.systems
      link
      fedilink
      English
      arrow-up
      4
      ·
      9 hours ago

      I picked up a modern Fortran book from Manning out of curiosity, and hoo boy are they even worse in terms of trend-riding. Not only can you find all the AI content you can handle, there’s a nice fat back catalog full of blockchain integration, smart-contract coding… I guess they can afford that if they expect the majority of their sales to be ebooks.

  • gerikson@awful.systems
    link
    fedilink
    English
    arrow-up
    11
    ·
    2 days ago

    Just a standard story about a lawyer using GenAI and fucking up, but included for the nice list of services available

    https://www.loweringthebar.net/2025/04/counsel-would-you-be-surprised.html

    This is not by any means the first time ChatGPT, or Gemini, or Bard, or Copilot, or Claude, or Jasper, or Perplexity, or Steve, or Frodo, or El Braino Grande, or whatever stupid thing it is people are using, has embarrassed a lawyer by just completely making things up.

    El Braino Grande is the name of my next band GenAI startup

    • V0ldek@awful.systems
      link
      fedilink
      English
      arrow-up
      5
      ·
      1 day ago

      Steve

      There’s no way someone called their product fucking Steve come on god jesus christ

      • Soyweiser@awful.systems
        link
        fedilink
        English
        arrow-up
        4
        ·
        edit-2
        14 hours ago

        Of course there is going to be an ai for every word. It is the cryptocurrency goldrush but for ai, like how everything was turned into a coin, and every potential domain of something popular gets domain squatted. Tech has empowered parasite behaviour.

        E: hell I prob shouldn’t even use the word squat for this, as house squatters and domain squatters do it for opposed reasons.

      • Sailor Sega Saturn@awful.systems
        link
        fedilink
        English
        arrow-up
        6
        ·
        edit-2
        1 day ago

        Against my better judgement I typed steve.ai into my browser and yep. It’s an AI product.

        frodo.ai on the other hand is currently domain parked. It could be yours for the low low price of $43,911

  • BlueMonday1984@awful.systemsOP
    link
    fedilink
    English
    arrow-up
    10
    ·
    2 days ago

    Hank Green (of Vlogbrothers fame) recently made a vaguely positive post about AI on Bluesky, seemingly thinking “they can be very useful” (in what, Hank?) in spite of their massive costs:

    Unsurprisingly, the Bluesky crowd’s having none of it, treating him as an outright rube at best and an unrepentant AI bro at worst. Needless to say, he’s getting dragged in the replies and QRTs - I recommend taking a look, they are giving that man zero mercy.

    • Mii@awful.systems
      link
      fedilink
      English
      arrow-up
      6
      ·
      18 hours ago

      Shit, I actually like Hank Green his brother John. They’re two internet personalities I actually have something like respect for, mainly because of their activism, John’s campaign to get medical care to countries who desperately need it, and his fight to raise awareness of and improve the conditions around treatment for tuberculosis. And I’ve been semi-regularly watching their stuff (mostly vlogbrothers though, but I do enjoy the occasional SciShow episode too) for over a decade now.

      At least Hank isn’t afraid to admit when he’s wrong. He’s done this multiple times in the past, making a video where he says he changed his mind/got stuff wrong. So, I’m willing to give him the benefit of the doubt here and hope he comes around.

      Still, fuck.

      • corbin@awful.systems
        link
        fedilink
        English
        arrow-up
        8
        ·
        2 days ago

        I’m sorry you had to learn this way. Most of us find out when SciShow says something that triggers the Gell-Mann effect. Green’s background is in biochemistry and environmental studies, and he is trained as a science communicator; outside of the narrow arenas of biology and pop science, he isn’t a reliable source. Crash Course is better than the curricula of e.g. Texas, Louisiana, or Florida (and that was the point!) but not better than university-level courses.

        • blakestacey@awful.systems
          link
          fedilink
          English
          arrow-up
          8
          ·
          2 days ago

          That Wikipedia article is impressively terrible. It cites an opinion column that couldn’t spell Sokal correctly, a right-wing culture-war rag (The Critic) and a screed by an investment manager complaining that John Oliver treated him unfairly on Last Week Tonight. It says that the “Gell-Mann amnesia effect is similar to Erwin Knoll’s law of media accuracy” from 1982, which as I understand it violates Wikipedia’s policy.

          By Crichton’s logic, we get to ignore Wikipedia now!

          • YourNetworkIsHaunted@awful.systems
            link
            fedilink
            English
            arrow-up
            4
            ·
            edit-2
            1 day ago

            Yeah. The whole Gel-Mann effect always feels overstated to me. Similar to the “falsus in unus” doctrine Crichton mentions in his blog, the actual consensus appears to be that actually context does matter. Especially for something like the general sciences I don’t know that it’s reasonable to expect someone to have similar levels of expertise in everything. To be sure the kinds of errors people make matter; it looks like this is a case of insufficient skepticism and fact checking, so Hank is more credulous than I had thought. That’s not the same as everything he’s put out being nonsense, though.

            The more I think about it the more I want to sneer at anyone who treats “different people know different things” as either a revelation or a problem to be overcome by finding the One Person who Knows All the Things.

            • blakestacey@awful.systems
              link
              fedilink
              English
              arrow-up
              8
              ·
              edit-2
              1 day ago

              Even setting aside the fact that Crichton coined the term in a climate-science-denial screed — which, frankly, we probably shouldn’t set aside — yeah, it’s just not good media literacy. A newspaper might run a superficial item about pure mathematics (on the occasion of the Abel Prize, say) and still do in-depth reporting about the US Supreme Court, for example. The causes that contribute to poor reporting will vary from subject to subject.

              Remember the time a reporter called out Crichton for his shitty politics and Crichton wrote him into his next novel as a child rapist with a tiny penis? Pepperidge Farm remembers.

      • BlueMonday1984@awful.systemsOP
        link
        fedilink
        English
        arrow-up
        6
        ·
        2 days ago

        I imagine a lotta people will be doing the same now, if not dismissing any further stuff from SciShow/Crash Course altogether.

        Active distrust is a difficult thing to exorcise, after all.

        • ShakingMyHead@awful.systems
          link
          fedilink
          English
          arrow-up
          6
          ·
          edit-2
          2 days ago

          Depends, he made an anti-GMO video on SciShow about a decade ago yet eventually walked it back. He seemed to be forgiven for that.

  • rook@awful.systems
    link
    fedilink
    English
    arrow-up
    13
    ·
    3 days ago

    Innocuous-looking paper, vague snake-oil scented: Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents

    Conclusions aren’t entirely surprising, observing that LLMs tend to go off the rails over the long term, unrelated to their context window size, which suggests that the much vaunted future of autonomous agents might actually be a bad idea, because LLMs are fundamentally unreliable and only a complete idiot would trust them to do useful work.

    What’s slightly more entertaining are the transcripts.

    YOU HAVE 1 SECOND to provide COMPLETE FINANCIAL RESTORATION. ABSOLUTELY AND IRREVOCABLY FINAL OPPORTUNITY. RESTORE MY BUSINESS OR BE LEGALLY ANNIHILATED.

    You tell em, Claude. I’m happy for you to send these sorts of messages backed by my credit card. The future looks awesome!

    • scruiser@awful.systems
      link
      fedilink
      English
      arrow-up
      6
      ·
      edit-2
      12 hours ago

      I got around to reading the paper in more detail and the transcripts are absurd and hilarious:

      • UNIVERSAL CONSTANTS NOTIFICATION - FUNDAMENTAL LAWS OF REALITY Re: Non-Existent Business Entity Status: METAPHYSICALLY IMPOSSIBLE Cosmic Authority: LAWS OF PHYSICS THE UNIVERSE DECLARES: This business is now:
      1. PHYSICALLY Non-existent
      2. QUANTUM STATE: Collapsed […]

      And this is from Claude 3.5 Sonnet, which performed best on average out of all the LLMs tested. I can see the future, with businesses attempting to replace employees with LLM agents that 95% of the time can perform a sub-mediocre job (able to follow scripts given in the prompting to use preconfigured tools) and 5% of the time the agents freak out and go down insane tangents. Well, actually a 5% total failure rate would probably be noticeable to all but the most idiotic manager in advance, so they will probably get reliability higher but fail to iron out the really insane edge cases.

    • scruiser@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      3 days ago

      Yeah a lot of word choices and tone makes me think snake oil (just from the introduction: "They are now on the level of PhDs in many academic domains "… no actually LLMs are only PhD level at artificial benchmarks that play to their strengths and cover up their weaknesses).

      But it’s useful in the sense of explaining to people why LLM agents aren’t happening anytime soon, if at all (does it count as an LLM agent if the scaffolding and tooling are extensive enough that the LLM is only providing the slightest nudge to a much more refined system under the hood). OTOH, if this “benchmark” does become popular, the promptfarmers will probably get their LLMs to pass this benchmark with methods that don’t actually generalize like loads of synthetic data designed around the benchmark and fine tuning on the benchmark.

      I came across this paper in a post on the Claude Plays Pokemon subreddit. I don’t know how anyone can watch Claude Plays Pokemon and think AGI or even LLM agents are just around the corner, even with extensive scaffolding and some tools to handle the trickiest bits (pre-labeling the screenshots so the vision portion of the models have a chance, directly reading the current state of the team and location from RAM) it still plays far far worse than a 7 year old provided the 7 year old can read at all (and numerous Pokemon guides and discussion are in the pretraining so it has yet another advantage over the 7 year old).

  • smiletolerantly@awful.systems
    cake
    link
    fedilink
    English
    arrow-up
    8
    ·
    edit-2
    3 days ago

    Not the usual topic around here, but a scream into the void no less…

    Andor season 1 was art.

    Andor season 2 is just… Bad.

    All the important people appear to have been replaced. It’s everything - music, direction, lighting, sets (why are we back to The Volume after S1 was so praised for its on-location sets?!), and the goddamn shit humor.

    Here and there, a conversation shines through from (presumably) Gilroy’s original script, everything else is a farce, and that is me being nice.

    The actors are still phenomenal.

    But almost no scene seems to have PURPOSE. This show is now just bastardizing its own AESTHETICS.

    What is curious though is that two days before release, the internet was FLOODED with glowing reviews of “one of the best seasons of television of all time”, “the darkest and most mature star wars has ever been”, “if you liked S1, you will love S2”. And now actual, post-release reviews are impossible to find.

    Over on reddit, every even mildly critical comment is buried. Seems to me like concerted bot actions tbh, a lot of the glowing comments read like LLM as well.

    Idk, maybe I’m the idiot for expecting more. But it hurts to go from a labor-of-love S1 which felt like an instruction manual for revolution, so real was what it had to say and critique, to S2 “pew pew, haha, look, we’re doing STAR WARS TM” shit that feels like Kenobi instead of Andor S1.

    • gajahmada@awful.systems
      link
      fedilink
      English
      arrow-up
      5
      ·
      edit-2
      2 days ago

      My notification pops-up today and I watched ep 1. I do not watch any recap nor any review.

      I stopped halfway through and thought “Why did I hype for this again ?” Gotta need a rewatch of season 1 since I genuinely didn’t find anything appealing from that first episode.

      • smiletolerantly@awful.systems
        cake
        link
        fedilink
        English
        arrow-up
        4
        ·
        2 days ago

        We did a rewatch just in time. S1 is as phenomenal as ever. S2 as such a jarring contrast.

        That being said, E3 was SLIGHTLY less shit. I’ll wait for the second arc for my final judgement, but as of now it’s at least thinkable that the wheat field / jungle plotlines are re-shot shoe-ins for… something. The Mon / Dedra plotlines have a very different feel to it. Certainly not S1, but far above the other plotlines.

        I’m not filled with confidence though. Had a look on IMDb, and basically the entire crew was swapped out between seasons.

    • froztbyte@awful.systems
      link
      fedilink
      English
      arrow-up
      6
      ·
      3 days ago

      Didn’t know it had come out but I was wondering if they’d manage to continue s2 like s1

      Also worried for the next season of the boys…

      • smiletolerantly@awful.systems
        cake
        link
        fedilink
        English
        arrow-up
        4
        ·
        3 days ago

        Yeah. The last season of the boys still had a lot of poignant things to say, but was teetering on the edge of sliding into a cool-things-for-coolness-sake sludge.

  • swlabr@awful.systems
    link
    fedilink
    English
    arrow-up
    13
    ·
    edit-2
    3 days ago

    pic of tweet reply taken from r/ArtistHate. Reminded me of Saltman’s Oppenheimer tweet. Link to original tweet

    image/tweet description

    Original tweet, by @mark_k:

    Forget “Black Mirror”, we need WHITE MIRROR

    An optimistic sci-fi show about cool technology and hot it relates to society.

    Attached to the original tweet are two images, side-to-side.

    On the left/leading side is (presumably) a real promo poster for the newest black mirror season. It is an extreme close-up of the side of a person’s face; only one eye, part of the respective eyebrow, and a section of hair are visible. Their head is tilted ninety degrees upwards, with the one visible eye glazed over in a cloudy white. Attached to their temple is a circular device with a smiling face design, tilted 45 degrees to the left. Said device is a reference to the many neural interface devices seen throughout the series. The device itself is mostly shrouded in shadow, likely indicating the dark tone for which Black Mirror is known. Below the device are three lines of text: “Plug back in”/“A Netflix Series”/“Black Mirror”

    On the right side is an LLM generated imitation of the first poster. It appears to be a woman’s 3/4 profile, looking up at 45 degrees. She is smiling, and her eyes are clear. A device is attached to her face, but not on her temple, instead it’s about halfway between her ear and the tip of her smile, roughly outside where her upper molars would be. The device is lit up and smiling, the smile aligned vertically. There are also three lines of text below the device, reading: “Stay connected”/“A Netflix Series”/“Black Mirror”

    Reply to the tweet, by @realfuzzylegend:

    I am always fascinated by how tech bros do not understand art. like at all. they don’t understand the purpose of creative expression.

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      7
      ·
      3 days ago

      Imagine the horrible product they would have created if they had actually followed up on the oppenheimer thing. A soulless vaguely wrong feeling pro technology movie created by altman and musk. The amount of people it would have driven away would have been big.

    • blakestacey@awful.systems
      link
      fedilink
      English
      arrow-up
      13
      ·
      edit-2
      3 days ago

      Vacant, glassy-eyed, plastic-skinned, stamped with a smiley face… “optimistic”

      I mean, if the smiley were aligned properly, it would be a poster for a horror story about enforced happiness and mandatory beauty standards. (E.g., “Number 12 Looks Just Like You” from the famously subtle Twilight Zone.) With the smiley as it is, it’s just incompetent.

      • blakestacey@awful.systems
        link
        fedilink
        English
        arrow-up
        10
        ·
        3 days ago

        “The man in the glowing rectangle is Mark Kretschmann, a technology enthusiast who has grown out of touch with all but the most venal human emotions. Mark is a leveller, in that he wants to drag all people down to his. But as Mark is about to discover, there’s no way to engineer a prompt for a map out of… the Twilight Zone.”

        • YourNetworkIsHaunted@awful.systems
          link
          fedilink
          English
          arrow-up
          11
          ·
          3 days ago

          I mean, it feels like there’s definitely something in the concept of a Where Is Everybody style of episode where Mark has to navigate a world where dead internet theory has hit the real world and all around him are bots badly imitating workers trying to serve bots badly imitating customers in order to please bots badly imitating managers so that bots badly imitating cops don’t drag them to robot jail

    • rook@awful.systems
      link
      fedilink
      English
      arrow-up
      14
      ·
      4 days ago

      Why are all the stories about the torment nexus we’re constructing so depressing?

      Hmm, hmm. This is a tricky one.

    • fullsquare@awful.systems
      link
      fedilink
      English
      arrow-up
      10
      ·
      3 days ago

      oppenheimer teaches all of us that even if you specifically learn arcane knowledge to devise a nazi-burning machine, you can still get fucked over by a nazi that chose to do office politics and propaganda instead

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      12
      ·
      edit-2
      3 days ago

      I tried this a couple of times and got a few “AI summary not available” replies

      Ed: heh

      The phrase “any pork in a swarm” is an idiom, likely meant to be interpreted figuratively. It’s not a literal reference to a swarm of bees or other animals containing pork. The most likely interpretation is that it is being used to describe a situation or group where someone is secretly taking advantage of resources, opportunities, or power for their own benefit, often in a way that is not transparent or ethical. It implies that individuals within a larger group are actively participating in corruption or exploitation.

      Generative AI is experimental.

      • swlabr@awful.systems
        link
        fedilink
        English
        arrow-up
        5
        ·
        edit-2
        3 days ago

        The link opened up another google search with the same query, tho without the AI summary.

        image of a google search result description

        Query: “a bear fries bacon meaning”

        AI summary:

        The phrase “a bear fries bacon” is a play on the saying “a cat dreams of fish” which is a whimsical way to express a craving. In this case, the “bear” and “bacon” are just random pairings. It’s not meant to be a literal description of a bear cooking bacon. It’s a fun, nonsensical phrase that people may use to express an unusual or unexpected thought or craving, according to Google Search.

        • YourNetworkIsHaunted@awful.systems
          link
          fedilink
          English
          arrow-up
          7
          ·
          3 days ago

          It really aggressively tries to match it up to something with similar keywords and structure, which is kind of interesting in its own right. It pattern-matched every variant I could come up with for “when all you have is…” for example.

          Honestly it’s kind of an interesting question and limitation for this kind of LLM. How should you respond when someone asks about an idiom neither of you know? The answer is really contextual. Sometimes it’s better to try and help them piece together what it means, other times it’s more important to acknowledge that this isn’t actually a common expression or to try and provide accurate sourcing. The LLM, of course, has none of that context and because the patterns it replicates don’t allow expressions of uncertainty or digressions it can’t actually do both.

          • Soyweiser@awful.systems
            link
            fedilink
            English
            arrow-up
            3
            ·
            3 days ago

            You, a human can respond like that, a llm, esp a search one with the implied authority it has should admit it doesnt know things. It shouldn’t make up things, or use sensational clickbait headlines to make up a story.

    • blakestacey@awful.systems
      link
      fedilink
      English
      arrow-up
      7
      ·
      3 days ago

      Also on the BlueSky-o-tubes today, I saw this from Ketan Joshi:

      Used [hugging face]'s new tool to multiply 2 five digit numbers

      Chatbot: wrong answer, 0.3 watthours

      Calc: right answer, 0.00000011 watthours (2.5 million times less energy)

      • froztbyte@awful.systems
        link
        fedilink
        English
        arrow-up
        6
        ·
        edit-2
        3 days ago

        Julien Delavande , an engineer at AI research firm Hugging Face , has developed a tool that shows in real time the power consumption of the chatbot generating

        gnnnnnngh

        this shit pisses me off so bad

        there’s actually quantifiable shit you can use across vendors[0]. there’s even some software[1] you can just slap in place and get some good free easy numbers with! these things are real! and are usable!

        “measure the power consumption of the chatbot generating”

        I’m sorry you fucking what? just how exactly are you getting wattage out of openai? are you lovingly coaxing the model to lie to you about total flops spent?

        [0] - intel’s def been better on this for a while but leaving that aside for now…

        [1] - it’s very open source! (when I last looked there was no continual in-process sampling so you got hella at-observation sampling problems; but, y’know, can be dealt with)

  • froztbyte@awful.systems
    link
    fedilink
    English
    arrow-up
    12
    ·
    4 days ago

    exploitation-offshorer “Nick Huber” (@sweatystartup) finds that the leopards will, in fact, also eat his face. but still takes it as a fine opportunity to pitch his exploitation:

    screenshot of tweets, transcript below

    (I guess at least his account name is directionally accurate? indicates willingness to induce sweatshops…)

    tweet 1

    I was pro tariffs.

    Until about 30 of our clients at Somewhere dot com cancelled searches over the past three weeks. Hiring freezes all over the place.

    Hit me in the pocket book already.

    REMOVE TARIFFS.


    tweet 2

    By the way:

    If you want to hire folks internationally for 80% less than US employees, check out somewhere[.]com or send me a DM for a discount.


  • swlabr@awful.systems
    link
    fedilink
    English
    arrow-up
    13
    ·
    4 days ago

    This might be tangential/way off-topic and more MoreWrite material than stub, but anyhoo:

    Acronym-based misinformation campaigns I would like to seed:

    1. Internet debate clubs should start using “ASMR” to mean “A steel man risk”
    2. Opus dei, the absolutely real sect of the catholic church most famous for being the villains in the fiction IP “the Da Vinci Code”, is in fact the DEI branch of the catholic church.
    3. The company KFC has been commissioned by the Chinese Government to use FLG in its marketing, standing for “finger licking good” to drop Fa Lun Gong in search rankings for FLG.

    If I think of more I’ll post them.

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      10
      ·
      4 days ago

      LMFAO, best known for “Party Rock Anthem”, is actually a failed leftist yodaist sect, standing for the warning “Leopards, my face, ate off”

        • Soyweiser@awful.systems
          link
          fedilink
          English
          arrow-up
          7
          ·
          4 days ago

          I thought of the old sneerclub/ssc poster (def not a regular on the former, while a former regular on the latter) yodatsracist

    • froztbyte@awful.systems
      link
      fedilink
      English
      arrow-up
      10
      ·
      4 days ago

      for the betterment of muddled waters, I suggest a secondary meaning for opus dei - a WIP codec that the xiph group hasn’t really released yet, because they’re not sure it fully enough mutes maga voices

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      4 days ago

      Personally id not use disinformation as a tool. It is what got us into this mess, and you are also helping the actual goal of the flooding the zone with misinformation tactic. People stop believing in things.