

downrounds in AI venture funding
dunno, I think it’s a great contribution. not to his case, mind you, but to the collective human experience. as a warning to others, and entertainment.
OpenAI buys services from a pile of Altman’s other portfolio companies
which reminds me of one of my actual favourite parts of the bayfuckers playacting at building companies: how absolutely self-cycling a lot of the funding ends up being. shartups burning fucking piles of money on other, also-VC-funded, shartups. totally normal and healthy way for money to flow.
looks like they felt that chatgpt pro wasn’t losing money fast enough, you can now get sora on the pro sub
digging around in the issue linked to that, it seems like the person who closed/approved this is someone from a different, external agency who lists moz as a client (her hachy profile also lists that as her employer)
this pr was closed “because we have new copy”
there’s probably some questions to be asked around how this decision/instruction got made, but one would have to wade into moz’s corp and discussion systems to do so (and apparently they also have a (people mostly communicating on) Slack problem - nfi if that’s open to community joining)
none of them look good tho tbh
I think it’s a nonsense nothingburger “clarification”, esp. given the defaults firefox sets a priori on a fresh profile. even with the “no, don’t turn $x on” choices for things that it does offer those for, there’s still some egregious defaults being turned on
the cynic in me says it’s intentionally vague because they’re trying to, in advance, lay the legal groundwork for whatever the fuck they push on by default. that thought comes from having seen the exact same playbook used by other services in the past, and it tracks with the way they’ve been pushing other features lately
did some digging and apparently the moz poster is this person. check the patents.
mega groan
wow if only any of this happened before and we could learn from it
naturally, at some point (when discord is no longer willing to carry them for free) the communities will move, and find some other way to comms. but what sucks is just how much will be locked up inside those discord instances. fucking walled gardens.
where you literally told it to do that
bit of a theme with these people
please excuse me while I make incoherent screaming noises (archive)
“hi, please excuse me for the interruption. I want to know if you’ve heard of our lord and saviour, Quiet Electronics?”
how can you mention kernel module blocks and not include pcspkr in your list
v sus
So basically, this term is just pure hype, designed to play up the “intelligence” part of it, to suggest that “AI can be great”.
people knotting themselves into a pretzel to avoid recognising that they’ve been deeply and thoroughly conned for years
The article just boils down to “use AI for the things that we think it’s good at, and don’t use it for the things we think it’s bad at!”
I love how thoroughly inconcrete that suggestion is. supes a great answer for this thing we’re supposed to be putting all of society on
it’s also a hell of a trip to frame it as “believers” vs “skeptics”. I get it’s vox and it’s basically a captured mouthpiece and that it’s probably wildly insane to expect even scientism (much less an acknowledgement of science/evidence), but fucking hell
any of y’all running short on your supply of really tortured sentences? no worries, I’ve got a supply drop
What will count, he says, is industrial revolution-style irreversible growth.
While AI is improving fast, it remains wildly flawed
Moreover, a recent Eye on the Market [PDF] report by Michael Cembalest, chairman of Market and Investment Strategy for JP Morgan Asset Management, questions whether the immense investments in AI and the infrastructure required to support it, already made or committed by the tech giants, will ever pay off
that paragraph doesn’t punch very hard, but the (2024) pdf that it links to starts out with this as a bolded title line:
A severe case of COVIDIA: prognosis for an AI-driven US equity market
which, well, 1) immensely tortured sentence, 2) “aww poor baby, etc etc”
entertained by the rapid fire “hmm, shit, is all this worth it?” that’s Ever So Suddenly boiling up everywhere. bet it’s entirely unrelated to people working on quarterly portfolio reviews, tho
I don’t remember seeing Lesswrongers play with these sorts of ideas
the all-defector in rajaniemi’s books probably sits pretty close?
(I still can’t tell if rajaniemi’s actually really into all this shit, or just found it all to be a highly convenient backstory for some entertainingly detailed scifi)
not all crayon - some are spaghetti and sauce
there’s been some (what appears to me to be) remarkable progress in the field, in that I know it’s possible to create intentional structures. it’s very much not my field so I can’t speak to it in detail; the best way I can describe my understanding of it is that it’s like people building with lego, if that makes sense?
but yeah it’s still a damn long way off from what we’d call “gene programming” as we have “computer programming”
they really should shove sbf and holmes in the same yard. they can shack up and con each other, and we wouldn’t have to keep hearing all these insane interviews
market potential: all trustfund babies
irl flinched