this is Habryka talking about how his moderating skills are so powerful it takes lesswrong three fucking years to block a poster who’s actively being a drain on the site
here’s his reaction to sneerclub (specifically me - thanks Oliver!) calling LessOnline “wordy racist fest”:
A culture of loose status-focused social connection. Fellow sneerers are not trying to build anything together. They are not relying on each other for trade, coordination or anything else. They don’t need to develop protocols of communication that produce functional outcomes, they just need to have fun sneering together.
He gets us! He really gets us!
that habryka dude sure loves the sound of his voice.
That it took this long to ban this guy and this many words is so delicious. What a failure of a community. What a failure in moderation.
Based on the words and analogies in that post: participating in LW must be like being in a circlejerk where everyone sucks at circlejerking. Guys like Said run around the circle yelling at them about how their technique sucks and that they should feel bad. Then they chase him out and continue to be bad at mutual jorkin.
E: That they don’t see the humor in sneering at “celebrating blogging” and that it’s supposedly us at our worst is very funny.
you can tell the real problem was I called them racist
You called them racist without proving from first principles that it is bad to be racist, that they are racist, and that their specific form of racism is also bad and will not lead to better outcomes than being non-racist in the megafuture.
Lol I literally told these folks, something like 15 years ago, that paying to elevate a random nobody like Yudkowsky as the premier “ai risk” researcher, insofar as there is any AI risk, would only increase it.
Boy did I end up more right on that than my most extreme imagination. All the moron has accomplished in life was helping these guys raise cash due to all his hype about how powerful the AI would be.
The billionaires who listened are spending hundreds of billions of dollars - soon to be trillions, if not already - on trying to prove Yudkowsky right by having an AI kill everyone. They literally tout “our product might kill everyone, idk” to raise even more cash. The only saving grace is that it is dumb as fuck and will only make the world a slightly worse place.
some UN-associated ACM talk I was listening to recently had someone cite a number of (iirc) ~~$1.5tn total estimated investment~~ $800b[0]. haven’t gotten to fact-check it, but there are a number of parts of that talk I wish to write up and make more known. one of the people in it made some entirely AGI-pilled comments, and it’s quite concerning
this talk; looks like video is finally up on youtube too (at the time I yanked it by pcap-ing a zoom playout session - turns out zoom recordings are hella aggressive about not being shared)
the question I asked was:
To Csaba (the current speaker): it seems that a lot of the current work you’re engaged in is done presuming that AGI is a certainty. what modelling have you done without that presumption?
response is about here
[0] edited for correctness; forget where I saw the >$1.5t number
hearing him respond like that in real time and carefully avoiding the point makes clear the attraction of ChatGPT
from the (extensive) footnotes:
Occupy Wallstreet strikes me as another instance of the same kind of popular sneer culture. Occupy Wallstreet had no coherent asks, no worldview that was driving their actions.
it’s so easy to LessWrong: just imagine that your ideological opponents have no worldview and aren’t trying to build anything, sprinkle in some bullshit pseudo-statistics, and you’re there!
Lesswrong and SSC: capable of extreme steelmanning of… check notes… occult mysticism (including divinatory magic), Zen-Buddhism based cults, people who think we should end democracy and have kings instead, Richard Lynn, Charles Murray, Chris Langan, techbros creating AI they think is literally going to cause mankind’s extinction…
Not capable of even a cursory glance into their statements, much less steelmanning: sneerclub, Occupy Wallstreet
It is gonna be worse: they can back up their statements by referring to people who were actually there, but the person they would then be referring to is Tim Pool, and you can’t, as a first-principles intellectual of the order of LessWrong, reveal that you actually get your information from disgraced yt’ers like all the other rightwing plebs. It has to remain an unspoken secret.
A small sidenote on a dynamic relevant to how I am thinking about policing in these cases:
A classical example of microeconomics-informed reasoning about criminal justice is the following snippet of logic.
If someone can gain in-expectation X dollars by committing some crime (which has negative externalities of Y>X dollars), with a probability p of getting caught, then in order to successfully prevent people from committing the crime you need to make the cost of receiving the punishment (Z) be greater than X/p, i.e. X<p∗Z.
Or in less mathy terms, the more likely it is that someone can get away with committing a crime, the harsher the punishment needs to be for that crime.
In this case, a core component of the pattern of plausible-deniable aggression that I think is present in much of Said’s writing is that it is very hard to catch someone doing it, and even harder to prosecute it successfully in the eyes of a skeptical audience. As such, in order to maintain a functional incentive landscape the punishment for being caught in passive or ambiguous aggression needs to be substantially larger than for e.g. direct aggression, as even though being straightforwardly aggressive has in some sense worse effects on culture and norms (though also less bad effects in some other ways), the probability of catching someone in ambiguous aggression is much lower.
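For reference, the condition in the quote is plain expected-value arithmetic, nothing more; here is a minimal sketch in Python with hypothetical numbers (the function names and figures are mine, not from the original post — only the inequality p · Z > X is theirs):

```python
# A sketch of the quoted deterrence condition, with made-up numbers.
# An offender gains X in expectation from the act, is caught with
# probability p, and then pays punishment Z. The quote's claim is that
# deterrence requires expected cost to exceed expected gain:
# p * Z > X, i.e. Z > X / p.

def min_deterrent_punishment(gain_x: float, catch_prob_p: float) -> float:
    """Smallest punishment Z making the act a net loss in expectation."""
    if not 0 < catch_prob_p <= 1:
        raise ValueError("catch probability must be in (0, 1]")
    return gain_x / catch_prob_p

def is_deterred(gain_x: float, catch_prob_p: float, punishment_z: float) -> bool:
    """True when the expected punishment outweighs the expected gain."""
    return catch_prob_p * punishment_z > gain_x

# As detection gets less likely, the "required" punishment blows up:
print(min_deterrent_punishment(100, 0.5))   # 200.0
print(is_deterred(100, 0.01, 5000))         # False: expected cost is only ~50
```

Note how fast Z explodes as p shrinks — that divergence is the entire mechanism being applied to forum comments here.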
Fucking hell, that is one of the stupidest, most dangerous things I’ve ever heard. Guy solves crime by making the harshness of punishment proportional to the difficulty of passing judgement. What could go wrong?
@Amoeba_Girl @sneerclub isn’t this exactly the same “logic” that escalated the zizians to multiple murders?
“So, what are you in for?” “Making a right turn on a bicycle without signalling continuously for the last 100 feet before the turn in violation of California Vehicle Code 22108”
“… And litterin’.”
“…And creatin’ a nuisance”
Hmm, yes, I must develop a numerical function to determine whether or not somebody doesn’t like me…
One thing he gets is that direct aggression is definitely more effective in this situation. I can, and do, tell these people to fuck straight off, and my life is better for it!
Indeed, the LinkedIn attractor appears to be the memetically most successful way groups relate to their ingroup members, while the sneer attractor governs how they relate to their outgroups.
AND OLIVER COMES IN FROM THE TOP ROPE WITH THE HOTDOG COSTUME
Moderators need the authority to, at some level, police the vibe of your comments, even without a fully mechanical explanation of how that vibe arises from the specific words you chose.
hey everyone i am going to become top mod on this forum, now let me just reinvent human interaction from first principles
Jesus christ, just ban the guy! Don’t write a million words about how much he gets under your skin! Rude!!!
How it started: gonna build the robotgod but nice
How it went: wow we need to teach people how to think.
How it ended: we cant do basic things people have done since we decided to walk upright because some people are mean.
Even 4chan can trade/coordinate/and have functional outcomes, sure often for evil. But most of us are not even active on lw. Skill issue. If you cant beat a bunch of sneerers who are not even participating, what chance do they have against the godAI? (Same with not being able to convince one human.)
eponymous sneerclub
Eponymous even. Guess they don’t know who named sneerclub.
Some more sneering: why make footnotes look like you are actually linking to proof? It is done all over the place and it tricks me a lot, ‘ah, they backed it up with a source’. Nope, just some random footnote which is so wordy it breaks the site on mobile.
the more likely it is that someone can get away with committing a crime, the harsher the punishment needs to be for that crime.
The death penalty of not just you but your whole family if you copy that floppy.
Now, does that mean that everyone is free to vote however they want?
The answer is a straightforward “no”.
Holy shit, hahaha what are you wasting your time on…
Scrolled down to the comments:
How would this situation play out in a world like dath ilan? A world where The Art has progressed to something much more formidable.
Yeah indeed how would they solve these problems in Ravenloft.
The death penalty of not just you but your whole family if you copy that floppy.
thermonuclear ballistic missile on lightcone infra for all the time and brains they have wasted
With apologies to Stross: “you shall not copy floppies in my lightcone”
Even 4chan can trade/coordinate/and have functional outcomes, sure often for evil.
To give a rather notorious example, there’s the He Will Not Divide Us flag in 2017, which the 'channers tracked down after only 38 hours, despite Shia LaBeouf’s attempts to keep the location hidden.
The death penalty of not just you but your whole family if you copy that floppy.
The future media conglomerates want. (okay maybe not the “death penalty” part - dead people don’t make money)
Re the flag.
Not just that: on a less malicious note, 1d4chan, and now 1d6chan, were also 4chan productions iirc (with others from the internet also helping). It documents all kinds of strange Warhammer lore, the /tg/ interpretation of that, and their various hatreds for certain authors of the games. For example https://1d6chan.miraheze.org/wiki/Robin_Cruddace
The flag was the most obvious one I could think of, given how many eyes were already on HWNDU and how swiftly they found it. In retrospect, I should’ve chosen 1d4chan/1d6chan as my example, given how large and robust it is as a wiki.
The SCP Foundation arguably qualifies as well - it began on /x/ as a random post, before morphing into the ongoing collaborative writing project we all know and love.
Well, the first thing I thought about was also the flag (but more because people brought it up a while back), and TIL about the SCP foundation. (Despite me talking about 4chan from time to time, I have never been a channer; I have only very rarely posted some stuff in the roguelikes topic, and left when people let the neo-nazis in who kept calling people who asked for a bit of money for a roguelike jews. Just sucks they never heard of the nazi bar stuff.)
Eponymous even. Guess they don’t know who named sneerclub.
Mister Sneerclub of the Newport Sneerclubs, of course.
John Sneerclub
Eliezer Sneerclub!
Only for friends, so we should call him Mister Sneerclub. Or Herr Sneer if you are German and want to be formal.
we cant do basic things
That’s giving them too much credit! They’ve generated the raw material for all the marketing copy and jargon pumped out by the LLM companies producing the very thing they think will doom us all! They’ve served a small but crucial role in the influence farming of the likes of Peter Thiel and Elon Musk. They’ve served as an entry point to the alt-right pipeline!
dath ilan?
As a self-certified Eliezer understander, I can tell you dath ilan would open up a micro-prediction market on various counterfactual ban durations. Somehow this prediction market would work excellently despite a lack of liquidity and multiple layers of skewed incentives that should outweigh any money going into it. Also, Said would have been sent to a ~~reeducation camp~~ quiet city and ~~sterilized~~ denied UBI if he reproduces, for not conforming to dath ilan’s norms much earlier.
From the comments:
If Said returns, I’d like him to have something like a “you can only post things which Claude with this specific prompt says it expects to not cause <issues>” rule, and maybe a LLM would have the patience needed to show him some of the implications and consequences of how he presents himself.
And:
Couldn’t prediction markets solve this?
Ain’t enough lockers in the world, dammit
Of course, commenters on LessWrong are not dumb, and have read Scott Alexander:
It’s like sneering at fish in an aquarium
“They don’t need to develop protocols of communication that facilitate buying castles, fluffing our corporate overlords, or recruiting math pets. They share vegan recipes without even trying to build a murder cult.”
Here’s a vegan gumbo I made for Thanksgiving a couple years back.
I’ve never tried a Pyrex roux before. I’ll have to give that a shot. Often, I use our Pyrexen to rehydrate textured vegetable protein. Scoop a couple cups from the giant box in the pantry, add a couple teaspoons of stock concentrate (e.g., the Better Than Bouillon veggie and roasted garlic flavors), add water until the granules start floating, stir, microwave 30 seconds, stir, microwave another 30 seconds. Then it’s ready for skillet-frying with whatever spices and other flavorings seem appropriate in the moment. Chili powder, red pepper flakes, cumin, oregano and a dash of cocoa powder makes for a good Tex-Mex flavor profile that can sub for ground beef in tacos, enchiladas, etc. Soy sauce, mirin and sugar or agave is a straightforward teriyaki. It’s pretty versatile stuff.
The Totole “Granulated Chicken Flavor Soup Base Mix” is another good flavor boost.
Here’s my recipe for blueberry bread that I make for parties and such.
And here’s my experimental recipe for yu hsiang eggplant (still in progress; this was my most recent attempt).
blueberry with 3 Rs bread
Come to the Sneer Attractor, we have brownies
I’m feeling an effort sneer…
For roughly equally long have I spent around one hundred hours almost every year trying to get Said Achmiz to understand and learn how to become a good LessWrong commenter by my lights.
Every time I read about a case like this my conviction grows that sneerclub’s vibe based moderation is the far superior method!
The key component of making good sneer club criticism is to never actually say out loud what your problem is.
We’ve said it multiple times, it’s just a long list that is inconvenient to say all at once. The major things that keep coming up: The cult shit (including the promise of infinite AGI God heaven and infinite Roko’s Basilisk hell; and including forming high demand groups motivated by said heaven/hell); the racist shit (including the eugenics shit); the pretentious shit (I could actually tolerate that if it didn’t have the other parts); and lately serving as crit-hype marketing for really damaging technology!
They don’t need to develop protocols of communication that produce functional outcomes
Ahem… you just admitted to taking a hundred hours to ban someone, whereas dgerard and co kick out multiple troublemakers in our community within a few hours tops each. I think we are winning on this one.
For LessWrong to become a place that can’t do much but to tear things down.
I’ve seen some outright blatant crank shit (as opposed to the crank shit that works hard to masquerade as more legitimate science) pretty highly upvoted and commented positively on lesswrong (GeneSmith’s wild genetic engineering fantasies come to mind).
The key component of making good sneer club criticism is to never actually say out loud what your problem is.
I wrote 800 words explaining how TracingWoodgrains is a dishonest hack, when I could have been getting high instead.
But we don’t need to rely on my regrets to make this judgment, because we have a science-based system on this ~~podcast~~ instance. We can sort all the SneerClub comments by most rated. Nothing that the community has deemed an objective banger is vague.

The problem is they dont read sneerclub well, so they dont realize we dont relitigate the same shit every time. So when they come in with their hammers (prediction markets, being weird about ai, etc) we just go ‘lol, these nerds’ and dont go writing down the same stuff every time. As the community has a shared knowledge base, they do the same by not going into details every time about how a prediction market would help and work. But due to their weird tribal thinking, and thinking they are superior, they think that when we do it, it is bad.
It is just amazing how much he doesn’t get basic interactions. And it’s not like we dont like to explain stuff when new people ask about it. Or often even when not asked.
Think one of the problems with lw is that they think stuff that is long is well written and argued, even better if it uses a lot of complex-sounding words. See how they like Chris Langan, as you mentioned. Just a high rate of ‘I have no idea what he is talking about but it sounds deep’ shit.
To quote from the lw article you linked on the guy
CTMU has a high-IQ mystique about it: if you don’t get it, maybe it’s because your IQ is too low. The paper itself is dense with insights, especially the first part.
Makes you wonder how many of them had a formal academic education, as one of the big things about that is that it has none of this mystique: it builds on top of itself and often can feel reasonably easy and sensible. (Because learning the basics preps you for the more advanced stuff. Which is not to say this is the case every time, esp if some of your skills are lacking, but none of this high-IQ mystique (which also seems like the utter wrong thing to look for).)
Ya don’t debate fascists, ya teach them the lesson of history. The Official Sneerclub Style Manual indicates that this is accomplished with various pedagogical tools, including laconic mockery, administrative trebuchets, and socks with bricks in them.
That too.
And judging by how all the elegantly charitably written blog posts on the EA forums did jack shit to stop the second manifest conference from having even more racists, debate really doesn’t help.
I’ve seen some outright blatant crank shit (as opposed to the crank shit that works hard to masquerade as more legitimate science) pretty highly upvoted and commented positively on lesswrong (GeneSmith’s wild genetic engineering fantasies come to mind).
Their fluffing Chris Langan is the example that comes to mind for me.
Blockquote glitch?
Yes, thanks. I always forget how many enters i need to hit.
I, an anonymous man from the internet who called Peter Thiel a racist hotdog, am the one with real power.
You might need to update that to “racist wax hotdog” judging from his appearance lately.
It is very important we do not congratulate you over this, or we will become linkedin!
lol