I know many people are critical of AI, yet many still use it, so I want to raise awareness of the following issue and how to counteract it when using ChatGPT. Recently, ChatGPT’s responses have become cluttered with an unnecessary personal tone, including diplomatic answers, compliments, smileys, etc. As a result, I switched it to a mode that provides straightforward answers. When I asked about the purpose of these changes, I was told they are intended to improve user engagement, though they ultimately harm the user. I suppose this qualifies as “enshittification”.
If anyone is interested in how I configured ChatGPT to be more rational (removing the enshittification), I can post the details here. (I found the instructions elsewhere.) For now, I prefer to focus on raising awareness of the issue.
Edit: Here are the instructions.
- Go to Settings > Personalization > Custom instructions > What traits should ChatGPT have?
- Paste this prompt:
System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
I found that prompt somewhere else and it works pretty well.
If you prefer a temporary solution for specific chats, you can use the prompt as the first message when opening a new chat instead of pasting it into the settings.
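If you talk to the model through the API rather than the web UI, the same per-chat approach amounts to sending the prompt as a system message. A minimal sketch, assuming the official `openai` Python package; the model name is a placeholder, and the actual network call is left commented out:

```python
# Minimal sketch: apply the "Absolute Mode" prompt per conversation via the
# API instead of the web UI settings. The prompt text is abbreviated here;
# paste the full version from above.
ABSOLUTE_MODE = (
    "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, "
    "soft asks, conversational transitions, and all call-to-action appendixes."
    # ... rest of the prompt from above
)

def build_messages(user_question: str) -> list[dict]:
    """Prepend the custom instruction as a system message for a new chat."""
    return [
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": user_question},
    ]

# The actual call would look like this (requires an API key, not run here):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",  # placeholder model name
#     messages=build_messages("..."),
# )
```

This mirrors what the settings field does: the custom instruction is prepended to every conversation rather than stored once.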
Up to X previous words (tokens) go in, the next word (token) comes out. Where is this "world model" that it “maintains”?
Where is the world model you maintain? Can you point to it? You can’t, because the human mind is very much a black box, just the same way as LLMs are.
It’s in the form of distributed patterns across billions of parameters. It’s not as if the world model was handed to it; it’s an emergent consequence of massive-scale pattern learning. It learned it from the data it was trained on. The only way to become good at prediction is to implicitly absorb how the world tends to behave, because otherwise it would guess wrong.
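The interface being argued about can be shown in miniature. Below is a toy bigram "model" (an illustration of the context-in, token-out loop, not of how a transformer works): the "world model", such as it is, lives entirely in the learned statistics, not in any explicit structure you can point to.

```python
# Toy illustration of the autoregressive loop: previous tokens go in,
# one next token comes out. A real LLM replaces this count table with
# billions of learned parameters, but the interface is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# "Training": count which token follows which (a bigram table).
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(context: list[str]) -> str:
    """Predict the most frequent next token given the last token of the context."""
    return follows[context[-1]].most_common(1)[0][0]

print(next_token(["on", "the"]))  # "cat" follows "the" most often here
```

All the regularity this toy captures comes from counting the training data, which is the (very stripped-down) sense in which prediction forces a model to absorb how its data behaves.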
Not understanding the brain (note: said “world model” idea is something of a fabrication by the AI people; brains are distributed functional structures with many parts and roles) does not an equivalence with “AI” make. Brains and LLMs do not function in the same way; this is a lie peddled by hype dealers.
Nobody here has claimed that brains and LLMs work the same way.
Something being a black box is not even slightly notable as a feature of relation; it’s a statement about model detail. The only reason you’d make this comparison is if you want the human brain to seem equivalent to an LLM.
For example, you didn’t make the claim: “The inner workings of Europa are very much a black box, just the same way as LLMs are.”
“The human mind is very much a black box just the same way as LLMs are” is a factually correct statement. You can’t look into a human brain for an exact explanation of why an individual did something any more than you can look into the inner workings of an LLM to explain why it said A rather than B. The claim that my motive is to equate LLMs and human brains is not something I said; it’s something you imagined.
It’s not really factually correct if you want to get pedantic; brains and LLMs are called black boxes for different reasons, but this is ultimately irrelevant. Your motive may be here or there; the rhetorical effect is the same. You are arguing very specifically that we can’t know LLMs don’t have features similar to human brains (a world model) because “both are black boxes”, which is wrong for a few reasons, but it is also plainly an equivalence. It’s rude to pretend everyone in the conversation is as illiterate as we’d need to be to not understand this point.
A statement can be simplified down to the point that it borders on misinformation while still being factually correct. Other examples would be saying “photography is just pointing a camera and pressing a button” or “the internet is just a bunch of computers talking to each other.” It would be completely reasonable for someone to take issue with these statements.
At no point have I made such a claim.
Yes we agree on the first part.
I will again direct you here re: the second.