Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • rook@awful.systems
    9 points · 5 days ago

    Haven’t read the source paper yet (apparently it came out two weeks ago, maybe it already got sneered?) but this might be fun: OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws.

    Full of little gems like

    Beyond proving hallucinations were inevitable, the OpenAI research revealed that industry evaluation methods actively encouraged the problem. Analysis of popular benchmarks, including GPQA, MMLU-Pro, and SWE-bench, found nine out of 10 major evaluations used binary grading that penalized “I don’t know” responses while rewarding incorrect but confident answers.

    I had assumed that the problem was solely technical, that the fundamental design of LLMs meant that they’d always generate bullshit, but it hadn’t occurred to me that the developers actively selected for bullshit generation.
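    A quick way to see the incentive, with made-up numbers rather than anything from the paper: under binary grading an abstention scores exactly like a wrong answer, so a model that always guesses beats one that says “I don’t know” when unsure.

```python
# Hypothetical benchmark tallies; binary grading gives 1 for a correct answer
# and 0 for everything else, including "I don't know".
def binary_score(correct: int, wrong: int, abstained: int) -> float:
    total = correct + wrong + abstained
    return correct / total  # abstentions are scored exactly like wrong answers

# Model A guesses on every question and is right 60% of the time.
# Model B answers only the 50% it is sure of and abstains on the rest.
print(binary_score(correct=60, wrong=40, abstained=0))   # 0.60 -- tops the leaderboard
print(binary_score(correct=50, wrong=0, abstained=50))   # 0.50 -- "penalised" for honesty
```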

    It seems kinda obvious in retrospect… slick bullshit extrusion is very much what is selling “AI” to upper management.

    • BlueMonday1984@awful.systemsOP
      6 points · 5 days ago

      Well, I’ll give them the text equivalent of a “you tried” sticker for finally admitting their automatic bullshit machines produce (gasp) bullshit, but the main sneerable thing I see is the ISO Standard OpenAI Anthropomo-

      the developers actively selected for bullshit generation

      every_tf2_class_laughing_at_once.wav

      (Maximising lies extruded per ocean boiled was definitely what they were going for in hindsight, but it genuinely cracks me up to see them come out and just say it)

  • corbin@awful.systems
    6 points · 5 days ago

    There’s an ACX guest post rehashing the history of Project Xanadu, an important example of historical vaporware that influenced computing primarily through opinions and memes. This particular take is focused on Great Men and isn’t really up to the task of humanizing the participants, but they do put a good spotlight on the cults that affected some of those Great Men. They link to a 1995 article in Wired that tells the same story in a better way, including the “six months” joke. The orange site points out a key weakness that neither narrative quite gets around to admitting: Xanadu’s micropayment-oriented transclusion-and-royalty system is impossible to correctly implement, due to a mismatch between information theory and copyright; given the ability to copy text, copyright is provably absurd. My choice sneer is to examine a comment from one of the ACX regulars:

    The details lie in the devil, for sure…you’d want the price [of making a change to a document] low enough (zero?) not to incur Trivial Inconvenience penalties for prosocial things like building wikis, yet high enough to make the David Gerards of the world think twice.

    Ah yes, low enough to allow our heroic wiki-builders, wiki-citers, and wiki-correctors; and high enough to forbid their brutish wiki-pedants, wiki-lawyers, and wiki-deleters.

    Disclaimer: I know Miller and Tribble from the capability-theory community. My language Monte is literally a Python-flavored version of Miller’s E (WP, esolangs), which is itself a Java-flavored version of Tribble’s Joule. I’m in the minority of a community split over the concept of agoric programming, where a program can expand to use additional resources on demand. To me, an agoric program is flexible about the resources allocated to it and designed to dynamically reconfigure itself; to Miller and others, an agoric program is run on a blockchain and uses micropayments to expand. Maybe more pointedly, to me a smart contract is what a vending machine proffers (see How to interpret a vending machine: smart contracts and contract law for more words); to them, a smart contract is how a social network or augmented/virtual reality allows its inhabitants to construct non-primitive objects.
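    A rough sketch of that “flexible about the resources allocated to it, designed to dynamically reconfigure itself” reading, in plain Python rather than Monte or E, with every name and number here invented purely for illustration:

```python
# Plain-Python sketch of an "agoric" program in the flexible-about-resources
# sense: it tears down and rebuilds its worker pool around whatever it is
# granted. Names and numbers are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

class AgoricPool:
    def __init__(self, max_workers: int = 1):
        self.pool = ThreadPoolExecutor(max_workers=max_workers)

    def reconfigure(self, granted_workers: int) -> None:
        # Dynamically reconfigure around the newly granted resources.
        self.pool.shutdown(wait=True)
        self.pool = ThreadPoolExecutor(max_workers=max(1, granted_workers))

    def run(self, tasks):
        return list(self.pool.map(lambda task: task(), tasks))

pool = AgoricPool(max_workers=2)
pool.reconfigure(granted_workers=8)  # the environment offers more resources
print(pool.run([lambda i=i: i * i for i in range(4)]))  # [0, 1, 4, 9]
```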

    • froztbyte@awful.systems
      1 point · 3 days ago

      Xanadu’s micropayment-oriented transclusion-and-royalty system is impossible to correctly implement, due to a mismatch between information theory and copyright; given the ability to copy text, copyright is provably absurd

      it kept being funny to me that even though xanadu had already shown the problems with content control, the entirety of the NFT craze just went on as if it was a full greenfields, novel problem

      The details lie in the devil, for sure…you’d want the price [of making a change to a document] low enough (zero?) not to incur Trivial Inconvenience penalties for prosocial things like building wikis, yet high enough to make the David Gerards of the world think twice.

      some of these people just really don’t know their history very well, do they

      on a total tangent:

      while xanadu’s commercial-aspiration history is intimately tied up in why it never got much further, I do occasionally daydream about what if it had, and whether we could’ve combined it with more-modern signing and sourcing: daydream in the respect of “CA and cert chains, but for transcluded content”, esp in the face of all the fucking content mills used to push disinfo etc. not sure this would work ootb either, mind you; it’s got its own set of vulnerabilities and problems that you’d need to work through (and ofc you can’t solve social problems purely in the technical domain)
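      A minimal sketch of what that “cert chains for transcluded content” idea could look like: each quoted span carries the hash of its source document plus a chain of attestations leading back to a trusted publisher. Every identity here is hypothetical, and real signatures are stubbed out with plain SHA-256.

```python
# Sketch of "cert chains, but for transcluded content": a quoted span carries
# the hash of its source document and a chain of attestations ending at a
# publisher we already trust. Real signatures are stubbed out with SHA-256;
# every identity here is hypothetical.
import hashlib
from dataclasses import dataclass

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

@dataclass
class Attestation:
    signer: str        # hypothetical publisher or mirror identity
    content_hash: str  # hash of the source document being vouched for

@dataclass
class Transclusion:
    quoted_text: bytes
    source_doc: bytes
    chain: list        # ordered list of Attestations, original publisher first

TRUSTED_ROOTS = {"original-publisher"}  # stand-in for a CA-style root set

def verify(t: Transclusion) -> bool:
    if t.quoted_text not in t.source_doc:          # span must exist in the source
        return False
    doc_hash = digest(t.source_doc)
    if any(a.content_hash != doc_hash for a in t.chain):  # every link vouches for it
        return False
    return bool(t.chain) and t.chain[0].signer in TRUSTED_ROOTS

doc = b"original reporting with a verifiable provenance trail"
chain = [Attestation("original-publisher", digest(doc))]
print(verify(Transclusion(b"verifiable provenance trail", doc, chain)))  # True
print(verify(Transclusion(b"fabricated claim", doc, chain)))             # False
```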

      has there been any meaningful advancement or neat new research in agoric computing? haven’t really looked into it in a while, and the various blockchain nonsense took so much air out of the room for so long I haven’t had the spoons to look

      (separately I know there’s also been some developments in remote trusted compute, but afaict that’s also still quite early days)

      • froztbyte@awful.systems
        2 points · 3 days ago

        much of the lore of the early/earlier internet being built is also full of some extremely, extremely unhinged stuff. I’ve had some first-hand in-the-trenches accounts from people I’ve known active from the early-mid 90s to middle 00s and holy shit there are some batshit things happening in places. often think of it when I see the kinds of shit thiel/musk/etc are all up to (a lot of it boils down to “they’re big mad that they have to even consider other people and can’t just do whatever they like”)

    • Soyweiser@awful.systems
      3 points · 5 days ago

      The 17 rules also seem to have abuse built in. Documents need to be stored redundantly (without any mention of how many copies that means), and it has a system where people are billed for the data they store. Combine these, and storing your data anywhere runs the risk of a malicious actor emptying your accounts, in a ‘it costs ten bucks to store a file here’ / ‘sorry, we had to securely store ten copies of your file, 100 bucks please’ sort of way. Weird sort of rules. Feels a lot like it never figured out what it wants to be: a centralized or a distributed system, a system where writers can make money or one they need to pay to use. And a lot of technical solutions for social problems.
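      A back-of-the-envelope sketch of that abuse, with made-up prices: the per-copy price is agreed up front, but the replication factor is whatever the host says it had to be.

```python
# Made-up prices: the per-copy rate is what the user agreed to, but the
# "store redundantly" rule lets the host pick the number of copies, so the
# host effectively controls the bill.
def storage_bill(per_copy_price: float, copies_chosen_by_host: int) -> float:
    return per_copy_price * copies_chosen_by_host

print(storage_bill(10.0, 1))   # what you thought you were paying: 10.0
print(storage_bill(10.0, 10))  # "we had to securely store ten copies": 100.0
```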

    • blakestacey@awful.systems
      11 points · 5 days ago

      If you use physical force to stop me however, I will make it a priority to ensure you regret doing this when you are on your deathbed. You have probably never met an enemy as intelligent, creative and willing to play the decade-long game as I am.

      “When you were partying, I studied the blade.”

    • sus@programming.dev
      7 points · 5 days ago

      Also the point is to get attention of broader public, not just those at the labs.

      The highest possible attainment: generating several popular memes about a crazy cult member who does something slightly odd to show his devotion, but isn’t brave enough to do it outside his own home.

      disclaimer: memes often contain mild inaccuracies

      • scruiser@awful.systems
        6 points · 5 days ago

        The way typical US educations (idk about other parts of the world) portray historical protests and activist movements has been disastrous to people’s ability to actually succeed in their activism. My cynical assumption is that this is exactly as intended.

          • scruiser@awful.systems
            8 points · 4 days ago

            So, to give the first example that comes to mind: in my education from Elementary School to High School, the (US) Civil Rights movement of the 1950s and 1960s was taught with a lot of emphasis on passive nonviolent resistance, downplaying just how disruptive they had to make their protests to make them effective and completely ignoring armed movements like the Black Panthers. Martin Luther King Jr.’s interest in and advocacy for socialism is ignored. The level of organization and careful planning by some of the organizations isn’t properly explained. (For instance, Rosa Parks didn’t just spontaneously decide not to give up her seat one day; the organizers planned it and picked her in order to advance a test case, but I don’t think any of my school classes explained that until High School.) Some of the level of force the federal government had to bring in against the Southern States (e.g. federal marshals escorting Ruby Bridges) is properly explained, but the full scale is hard to visualize. So the overall misleading impression someone could develop or subconsciously perceive is that rights were given to black people through democratic processes after they politely asked for them with just a touch of protests.

            Someone taking the way their education presents the Civil Rights protests at face value without further study will miss the role of armed resistance, miss the level of organization and planning going on behind pivotal acts, and miss just how disruptive protests had to get to be effective. If you are a capital owner benefiting from the current status quo (or well paid middle class that perceives themselves as more aligned with the capital owners than other people that work for a living), then you have a class interest in keeping protests orderly and quiet and harmless and non-disruptive. It vents off frustration in a way that ultimately doesn’t force any kind of change.

            This hunger strike and other rationalist attempts at protesting AI advancement seem to suffer from this kind of mentality. They aren’t organized on a large scale and they don’t have coherent demands they agree on (which is partly a symptom of the fact that the thing they are trying to stop is so speculative and uncertain). Key leaders like Eliezer have come out strongly against any form of (non-state) violence. (Which is a good thing, because their fears are unfounded, but if I actually thought we were doomed with p=.98 I would certainly be contemplating vigilante violence.) (Also, note from the “nuke the datacenters” comments that Eliezer is okay with state-level violence.) Additionally, the rationalists often have financial and social ties to the very AI companies they are protesting, further weakening their ability to engage in effective activism.

            • V0ldek@awful.systems
              2 points · 4 days ago

              That’s interesting, because in Poland 95% of all the history you are taught is “and then they grabbed guns because they were just so fed up with their* shit”, and the modern history is mostly anti-communist worker movements that were all about general strikes and loud, disruptive protests.

              *Russians’, Germans’, Austrians’, king’s, …

              • scruiser@awful.systems
                4 points · 4 days ago

                So us Americans do get some of the “grabbed guns and openly fought” in the history of our revolutionary war, but it’s taught in a way that doesn’t link it to any modern movements that armed themselves. And the people most willing to lean into guns and revolutionary-war imagery/iconography tend to be far right wing (and against movements for workers’ rights or minorities’ rights and such).

    • Soyweiser@awful.systems
      4 points · 5 days ago

      I almost wanna use some reverse psychology to try and make him stop.

      ‘hey im from sneerclub and we are loving this please dont stop this strike’

      (I mean he clearly mentally prepped against arguments and even force (and billionaires), but not against someone just making fun of him. Of course he prob doesn’t know about any of these places and hasn’t built us up to Boogeyman status, but imagine it worked)

    • swlabr@awful.systems
      4 points · 5 days ago

      Hmm, it’s still on the funny side of the graph for me. I think it could go on for at least another week.

  • David Gerard@awful.systemsM
    13 points · 6 days ago

    the talking point about disparaging terms for AI users by choice (“I came up with a racist-sounding term for AI users, so if you say ‘clanker’ you must be a racist”) is so fucking stupid it’s gotta be some sort of op

    (esp when the made-up racist-sounding term turns out to have originated with Warren fucking Ellis)

    i am extremely disappointed that awful systems users have fallen for it for a moment

    • scruiser@awful.systems
      7 points · 6 days ago

      Side note: The way I’ve seen clanker used has been for the AIs themselves, not their users. I’ve mostly seen the term in the context of star wars memers eager to put their anti-droid memes and jokes to IRL usage.

      • ShakingMyHead@awful.systems
        3 points · 5 days ago

        Same here, I’ve never actually seen the term “clanker” used in reference to a person using the AI, only to the AI itself. Which to me was analogous to going to an expensive bakery and accusing the bread of ripping you off instead of the baker (or whoever was setting prices, which wouldn’t be the bread).

        If there was any sort of op going on (which I don’t think there is), I’d guess it would be from the AI doomers who want people to think of these things as things with enough self-awareness that something like “clanker” would actually insult them (but, again, probably not, IMO).

    • Soyweiser@awful.systems
      5 points · 6 days ago

      Slightly related to the ‘it is an op’ thing, did you look at the history of the wikipedia page for clanker? There were 3 edits to the page before 1 June 2025.

    • FredFig@awful.systems
      5 points · 6 days ago

      The truth is that we feel shame to a much greater degree than the other side, which makes it pretty easy to divide us on these annoying trivialities.

      My personal hatred of tone policing is greater than my sense of shame, but I imagine that isn’t something to expect from most.

  • BlueMonday1984@awful.systemsOP
    9 up · 1 down · 6 days ago

    The billionaires’ dreams of defeating death with technology have been “realised” by Marvel, which is planning an AI-Powered™ hologram of Stan Lee at L.A. Comic Con.

    To the shock of nobody, this act of exploitation through digital necromancy is being met with unfiltered disgust.

    • Soyweiser@awful.systems
      6 points · 6 days ago

      Check the guy’s profile. He took some random remark from Gwern, indicating what looks to me like human interest, as some sort of commandment.

      My first instinct was to dismiss him as an oddball—until a friend told me I was dealing with a legend of rationality. I have to admit: I nearly shit myself. His comment got more likes than the post I’d spent years working on.

      Someone with, what, a 152 IQ wanted my accounts of surviving bureaucratic military hell? And I’m the same guy who applies scientific rigor to Pokémon analysis

      (The text is bolded etc in various places which I didn’t reproduce)

    • sinedpick@awful.systems
      7 points · 6 days ago

      So this looks and sounds 100% AI generated, but

      Possible performance idea: A “kata of doubt,” mixing martial arts stances with expressions of deep confusion.

      might be the first time AI slop made me laugh (not out loud, but still)

    • Soyweiser@awful.systems
      4 points · 7 days ago

      Wonder if, esp considering the DHH situation, this is sort of a nazi-bar-style takeover, where the people who don’t want to make a fuss let in the nice-but-iffy people, who then go mask off and let the rest in. (The thing the far right accuses the left of doing, in a bit of projection.) But I know nothing about the politics of anybody involved; it could also just be a regular hostile takeover.

      (Doesn’t feel like one just looking at the rubycentral bsky account for a second though. They do have an amazing spin on it: it was to protect against supply chain attacks (also a link to an email article of theirs, which just feels weird).)

        • Soyweiser@awful.systems
          8 points · 7 days ago

          Another sneer on the subject: https://bsky.app/profile/tef.bsky.social/post/3lz7fdou4uk2y

          "In case you’re not sure who dhh is, he’s a danish counterstrike player and race car owner who writes essays like “i am smarter than you” and “foreigners bad”

          rich enough not to worry about consequences but at the very same time, still desperate for status, a man two friends short of a podcast"

          Followup by somebody else:

          “I posted here about him driving at Le Mans in 2024, and several people told me that he’s disliked and mocked as much in the motorsport community as he is in the tech community.”

      • froztbyte@awful.systems
        7 points · 7 days ago

        ruby’s had this problem for ~2 decades now. like, the “rockstar dev” archetype literally became big directly because of ruby’s popularity and perception at the time

        I haven’t been active in/near the ruby space for a number of years now so I can’t speak to the modern details well at all, but I wouldn’t be too surprised to learn that the various branches of it haven’t really learned how to deal. I will say that I have seen some improvement over that period, but… yeah

        • nfultz@awful.systems
          8 points · 6 days ago

          ruby’s had this problem for ~2 decades now. like, the “rockstar dev” archetype literally became big directly because of ruby’s popularity and perception at the time

          I had to look it up: the ~Rails Conf~ Golden Gate Ruby Conf “code like a porn star” thing was 2009, I didn’t hallucinate it. DHH has deleted those tweets.

          I feel old now.

    • scruiser@awful.systems
      3 points · 6 days ago

      So if I understood NVIDIA’s “strategy” right, their usage of companies like Coreweave is drawing in money from other investors and private equity? Does this mean that, unlike many of the other companies in the current bubble, they aren’t going to lose money on net, because they are actually luring investment from other sources into companies like Coreweave (which is used to buy GPUs and thus goes to them), while leaving the debt/obligations in the hands of companies like Coreweave? If I’m following right, this is still a long-term losing strategy (assuming some form of AI bubble pop or deflation, which we are all at least reasonably sure of), but the expected result for NVIDIA is more of a massive drop in revenue as opposed to a total collapse of their company under a mountain of debt?
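      A toy model of that flow, using purely hypothetical figures: outside capital and debt fund the GPU cloud, the GPU cloud spends it on chips, the vendor books the revenue, and the obligations stay on the GPU cloud’s books.

```python
# Purely hypothetical figures, just to make the circular flow concrete.
outside_investment = 10.0  # billions the GPU cloud raises from investors/PE
debt_raised = 5.0          # billions borrowed against the GPUs themselves
gpu_spend = 12.0           # billions of that capital spent with the chip vendor

vendor_revenue = gpu_spend          # lands on the vendor's books
cloud_obligations = debt_raised     # stays on the GPU cloud's books
cloud_cash_left = outside_investment + debt_raised - gpu_spend

print(f"vendor revenue booked:    {vendor_revenue:.1f}B")     # 12.0B
print(f"GPU cloud debt retained:  {cloud_obligations:.1f}B")  # 5.0B
print(f"GPU cloud cash remaining: {cloud_cash_left:.1f}B")    # 3.0B
# If the bubble deflates, the vendor loses future revenue; the GPU cloud is
# the one left servicing the debt.
```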

  • YourNetworkIsHaunted@awful.systems
    16 points · 11 days ago

    Sneer inspired by a thread on the preferred Tumblr aggregator subreddit.

    Rationalists found out that human behavior didn’t match their ideological model, then rather than abandon their model or change their ideology decided to replace humanity with AIs designed to behave the way they think humans should, just as soon as they can figure out a way to do that without them destroying all life in the universe.

    • scruiser@awful.systems
      8 points · 10 days ago

      That thread gives me hope. A decade ago, a random internet discussion in which rationalists came up would probably mention “quirky Harry Potter fanfiction” with mixed reviews, whereas all the top comments on that thread are calling out the alt-right pipeline and the racism.

      • David Gerard@awful.systemsM
        6 points · 9 days ago

        I have no hope. The guy who introduced me to LessWrong included what I later realised was a race science pitch. Yudkowsky was pushing this shit in 2007. This sucker just realised a coupla decades late.

  • flere-imsaho@awful.systems
    15 points · 9 days ago

    david heinemeier hansson of ruby on rails fame decided to post a white supremacist screed with a side of transphobia, because now he doesn’t need to pretend anything anymore. it’s not surprising, he was heading this way for a while, but seeing the naked apologia for fascism is still shocking to me.

    any reasonable open source project he participates in should immediately cut ties with the fucker. (i’m not holding my breath waiting, though.)

  • blakestacey@awful.systems
    14 points · 11 days ago

    Regarding occasional sneer target Lawrence Krauss and his co-conspirators:

    Months of waiting but my review copy of The War on Science has arrived.

    I read Krauss’ introduction. What the fuck happened to this man? He comes off as incapable of basic research, argument, basic scholarship. […] Um… I think I found the bibliography: it’s a pdf on Krauss’ website? And all the essays use different citation formats?

    Most of the essays don’t include any citations in the text but some have accompanying bibliographies?

    I think I’m going insane here.

    What the fuck?

    https://bsky.app/profile/nateo.bsky.social/post/3lyuzaaj76s2o

    • nightsky@awful.systems
      18 points · 11 days ago

      Huh, I wonder who this Krauss guy is, haven’t heard of him.

      *open wikipedia*

      *entire subsection titled “Allegations of sexual misconduct”*

      *close wikipedia*

    • V0ldek@awful.systems
      14 points · 10 days ago

      All of those people, Krauss, Dawkins, Harris (okay that one might’ve been unsalvageable from the start, I’m really not sure), are such a great reminder that you can be however smart/educated you want, but the moment you believe you’re the smartest boi and stop learning and critically approaching your own output, you get sucked into the black hole of your asshole, never to return.

      Like if I had a nickel. It’s hubris every time. All of those people need just a single good friend that, from time to time, would tell them “man, what you said was really fucking stupid just now” and they’d be saved.

      Clout is a proxy of power and power just absolutely rots your fucking brain. Every time a Guy emerges, becomes popular, clearly thinks “haha, but I am different, power will not rot MY brain”, five years later boom, he’s drinking with Jordan Benzo Peterson. Even Joe Fucking Rogan used to be significantly more lucid before someone gave him ten bazillion dollars for a podcast and he suffered severe clout poisoning.

    • V0ldek@awful.systems
      1 point · 4 days ago

      Sorry but who the fuck is that? Not one of our common guests here, I need a primer on her

    • CinnasVerses@awful.systems
      10 points · 9 days ago

      The commentator who thinks that USD 120k / year is a poor income for someone with a PhD makes me sad. That is what you earn if you become a professor of physics at a research university or get a good postdoc, but she aged out of all of those jobs and was stuck on poorly paid short-term contracts. There are lots of well-paid things that someone with a PhD in physics can do if she is willing to network and work for it, but she chose “rogue intellectual.”

      A German term to look up is WissZeitVG but many academic jobs in many countries are only offered to people no more than x years after receiving their PhD (yep, this discriminates against women and the disabled and those with sick spouses or parents).

    • froztbyte@awful.systems
      5 points · 7 days ago

      thanks for linking this, was fun to watch

      hadn’t seen that saltman clip (been real busy running around pretty afk the last few weeks), but it’s a work of art. despite grokking the dynamics, it continues to be astounding just how vast the gulf between fact and market vibes is

      and as usual, Collier does a fantastic job ripping the whole idea a new one in a most comprehensive manner

  • BlueMonday1984@awful.systemsOP
    13 points · 12 days ago

    Starting things off with a newsletter by Jared White that caught my attention: Why “Normies” Hate Programmers and the End of the Playful Hacker Trope, which directly discusses how the public perception of programmers has changed for the worse, and how best to rehabilitate it.

    Adding my own two cents, the rise of gen-AI has definitely played a role here - I’m gonna quote Baldur Bjarnason directly here, since he said it better than I could:

    • istewart@awful.systems
      14 points · 12 days ago

      This is an interesting crystallization that parallels a lot of thoughts I’ve been having, and it’s particularly hopeful that it seeks to discard the “hacker” moniker and instead specifically describe the subjects as programmers. Looking back, I was only becoming terminally online circa 1997, and back then it seemed like there was an across-the-spectrum effort to reclaim the term “hacker” into a positive connotation after the federal prosecutions of the early 90s. People from aspirant-executive types like Paul Graham to dirty hippies like RMS were insistent that being a “hacker” was a good thing, maybe the best possible thing. This was, of course, a dead letter as soon as Facebook set up at “One Hacker Way” in Menlo Park, but I’d say it’s definitely for the best to finally put a solid tombstone on top of that cultural impulse.

      As well it should be, because my understanding of the defining activity of the positive-good “hacker” is that it’s all too close to Zuckerberg’s “move fast and break things,” and I think Jared White would probably agree with me. Paul Graham was willing to embrace the term because he was used to the interactive development style of Lisp environments, but the mainstream tools have only fitfully evolved in that direction at best. When “hacking,” the “hacker” makes a series of short, small iterations with a mostly nebulous goal in mind, and the bulk of the effort may actually be what’s invested in the minimum viable product. The self-conception inherits from geek culture a slumped posture of almost permanent insufficiency, perhaps hiding a Straussian victimhood complex to justify maintaining one’s own otherness.

      In mentioning Jobs, the piece gestures towards the important cultural distinction that I still think is underexamined. If we’re going to reclaim and rehabilitate even homeopathic amounts of Jobs’ reputation, the thesis we’re trying to get at is that his conception of computers as human tools is directly at odds with the AI promoters’ (and, more broadly, most cloud vendors’) conception of computers as separate entities. The development of generative AI is only loosely connected with the sanitized smiley-face conception of “hacking.” The sheer amount of resources and time spent on training foreclose the possibility of a rapid development loop, and you’re still not guaranteed viable output at the end. Your “hacks” can devolve into a complete mess, and at eye-watering expense.

      I went and skimmed Graham’s Hackers and Painters again to see if I could find any choice quotes along these lines, since he spends that entire essay overdosing on the virtuosity of the “hacker.” And hoo boy:

      Measuring what hackers are actually trying to do, designing beautiful software, would be much more difficult. You need a good sense of design to judge good design. And there is no correlation, except possibly a negative one, between people’s ability to recognize good design and their confidence that they can.

      You think Graham will ever realize that we’re culminating a generation of his precious “hackers” who ultimately failed at all this?

      • mirrorwitch@awful.systems
        9 points · 12 days ago

        re: last line: no, he never will admit or concede to a single damn thing, and that’s why every time I remember this article exists I have to reread dabblers & blowhards one more time purely for defensive catharsis

        • YourNetworkIsHaunted@awful.systems
          7 points · 11 days ago

          I don’t even know the degree to which that’s the fault of the old hackers, though. I think we need to acknowledge the degree to which a CS degree became a good default like an MBA before it, only instead of “business” it was pitched as a ticket to a well-paying job in “computer”. I would argue that a large number of those graduates were never going to be particularly interested in the craft of programming beyond what was absolutely necessary to pull a paycheck.

      • Don Piano@feddit.org
        5 points · 10 days ago

        Interesting, I’d go rhetorically more in this direction: a hack is not a solution, it’s the temporary fix (or… break?) until you get around to doing it properly. On the axis where hacks are on one end and solutions on the other, genAI shit is beyond the hack. It’s not even a temporary fix, it’s less, functionally and culturally.

        • Soyweiser@awful.systems
          3 points · 10 days ago

          A hack can also just be a clever way to use a system in a way it wasn’t designed for.

          Say you put a Ring doorbell on a drone as a perimeter defense thing? A hack. See also the woman who makes bad robots.

          It also can be a certain playfulness with tech. Which is why hacker is dead. It cannot survive contact with capitalist forces.

    • CinnasVerses@awful.systems
      12 points · 11 days ago

      AFAIK the USA is the only country where programmers make very high wages compared to other college-educated people in a profession anyone can enter. It’s a myth that so-called STEM majors earn much more than others, although people with a professional degree often launch their careers quicker than people without (but if you really want to launch your career quickly, learn a trade or work in an extractive industry somewhere remote). So I think for a long time programmers in the USA made peace with FAANG because they got a share of the booty.

      • k4rlos@awful.systems
        2 points · 7 days ago

        Not the only one. The former USSR and Eastern Europe as well, and it’s way worse there: typically, an SWE would earn several TIMES more than your average college-educated person. This leads to programmers being obnoxious libertarian nazi fucktards.

    • Soyweiser@awful.systems
      9 points · 11 days ago

      Hackers is dead. (Apologies to punk)

      I’d say that for one reason alone: when Musk claimed grok was from the Guide, nobody really turned on him.

      Unrelated to programmers or hackers: Elon’s father (CW: racism) went fully mask off and claims Elon agrees with him. Which, considering his promotion of the UK racists, does not feel off the mark. (And he is spreading the dumb ‘[Africans] have an [average] IQ of 63’ shit, and claims it is all genetic. Sure man, the average African needs help understanding the business end of a hammer. As I said before, guess I met the smartest Africans in the world then, as my university had a few smart exchange students from an African country. If you look at his statements it is even dumber than normal, as he says population, so that means either non-Black Africans are not included, showing just how much he thinks of himself as the other, or they are included and the Black African average is even lower.)

    • V0ldek@awful.systems
      11 points · 10 days ago

      TIL Hank Green, the milquetoast BlueSky poster, also has some YouTube channel. How quaint.

      I think every time I learn That Guy From BlueSky also has some other gig different from posting silly memes I lose some respect for them.

      E.g. I thought Mark Cuban was just a dumb libertarian shitposter, but then it turned out he has a cuntillion dollars and also participated in a show unironically called “Shark Tank” that I still don’t 100% believe was a real thing because by god

      • bitofhope@awful.systems
        6 points · 10 days ago

        I figured he’d be a lot better known for his YouTube career than for his bsky posting. I see his stuff all the time in my recommendations, though his style isn’t my cup of tea so I seldom watch any of them.

        • V0ldek@awful.systems
          3 points · 10 days ago

          I haven’t seen the YouTube recommendation page in so long I wouldn’t know. Invidious my beloved <3

    • V0ldek@awful.systems
      9 points · 10 days ago

      What’s up with all the websites that tell me “you’ve reached the limit of free articles for the month” even though I’ve literally never entered that site before in my life. Stop gaslighting me you cunts.

      Anyway, here’s the archive