

I call it: “scientific” racism. To me that gets the point across the best.
I wondered if this should be called a shitpost or an effortpost, then I wondered what something that is both would be called, and I came up with “constipationpost”.
So, great constipationpost?
Was jumpscared on my YouTube recommendations page by a video from AI safety peddler Rob Miles and decided to take a look.
It talked about how it’s almost impossible to detect whether a model was deliberately trained to output some “bad” output (like vulnerable code) for some specific set of inputs.
Pretty mild as cult stuff goes, mostly anthropomorphizing and referring to such an LLM as a “sleeper agent”. But maybe some of y’all will find it interesting.
It’s two guys in London and one guy in San Francisco. There’s presumably no OpenAI office in London; in SF you can’t be in two places at once, and Anthropic has more true believers/does more critihype.
Unrelated, but a few minutes before writing this a bona fide cultist replied to the programming dev post. Cultist with the handle “BussyGyatt @feddit.org”. Truly the dumbest timeline.
Yeah, didn’t even cross their mind that it could be wrong, because it looked ok.
Shamelessly posting a link to my skeet thread (skeet trail?) on my experience with a (mandatory) AI chatbot workshop. Nothing that will surprise regulars here too much, but if you want to share the pain…
https://bsky.app/profile/jfranek.bsky.social/post/3lxtdvr4xyc2q
That’s how you get a codebase that kinda sorta works in a way but is more evolved than designed, full of security holes, slow as heck, and disorganized to the point where it’s impossible to fix bugs, add features, or understand what’s going on.
Well, one of the ways *glancing at the code I’m responsible for, sweating profusely*
resulting in one person getting bacon added to their ice cream in error
At first, I couldn’t believe that the staff didn’t catch that. But thinking about it, no, I totally can.
Here’s hoping that the more time he spends gooning, the more he’ll leave the rest of us alone. *crosses fingers*
When I read “Excel Copilot”, I thought “they finally added a chatbot that lets you generate a spreadsheet/range/table processing a data source. Like ‘copilot, create a table that aggregates prices by category from table xyz’.”
To which I was like “Ok, maybe that could be useful to some of the many non-technical excel users.” I wasn’t prepared for whatever this is.
I mean with vibe-coding/excelling? you eventually get something that can run deterministically.
Are we… are we gonna start seeing terminally AI-pilled bozos implementing gacha mechanics in data pipelines?
Yay! *pats myself on the back*
I have no idea what good web design is. I’ll just note that the waving red, white and blue flag in the background makes the white heading text pretty hard to read.
Second quote is classic “you must be prompting it wrong”. No, it can’t be that people who find a tool less useful will use it less often.
Why would you do waluigi and wario dirty like that?
I think Leathery Pete might have read too much Left Behind.
I’ve been recommended more Veo 3 fails by The Algorithm. Apparently even some promptfans think it sucks.
You WILL believe what happened when they tried to replicate Google’s demos using the exact same prompts.
All enthusiasm for polyamory replaced with enthusiasm for tag team matches
both would be funnier
the model was supposed to be trained solely on his own art
Much simpler models are practically impossible to train without an existing model to build upon. With GenAI, it’s safe to assume that training that base model involved large-scale scraping without consent.
his other references seem to be more garden variety neo-nazi type
Also apparently pro LGBT neo-nazis, which I refuse to believe are not a parody. See this cursed screenshot:
Slack CEO responded there that it was all a “billing mistake” and that they’ll do better in the future and people are having none of it.
A rare orange site W, surprisingly heartwarming.