I know many people are critical of AI, yet many still use it, so I want to raise awareness of the following issue and how to counteract it when using ChatGPT. Recently, ChatGPT’s responses have become cluttered with an unnecessary personal tone: diplomatic hedging, compliments, smileys, and so on. As a result, I switched it to a mode that gives straightforward answers. When I asked about the purpose of these changes, ChatGPT told me they are intended to improve user engagement, even though they ultimately harm the user. I suppose this qualifies as “enshittification”.

If anyone is interested in how I configured ChatGPT to be more rational (removing the enshittification), I can post the details here. (I found the instructions elsewhere.) For now, I prefer to focus on raising awareness of the issue.

Edit: Here are the instructions

  1. Go to Settings > Personalization > Custom instructions > What traits should ChatGPT have?

  2. Paste this prompt:

    System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

I found that prompt somewhere else and it works pretty well.

If you prefer a temporary solution for specific chats, you can instead use the prompt as the first message when opening a new chat, rather than pasting it into the settings.
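
If you talk to the models through the API rather than the web app, the same idea applies: send the prompt as the system message on every request. Here is a minimal sketch, assuming the official openai Python package (v1+); the model name is a placeholder and the prompt is abbreviated (paste the full text from above):

    # Minimal sketch: apply the "Absolute Mode" prompt per request via the API.
    # Assumes the official `openai` Python package (v1+); model name is a placeholder.
    from openai import OpenAI

    ABSOLUTE_MODE = (
        "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, "
        "soft asks, conversational transitions, and all call-to-action appendixes. "
        "..."  # abbreviated; paste the full prompt from above
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": ABSOLUTE_MODE},
            {"role": "user", "content": "What is enshittification?"},
        ],
    )
    print(response.choices[0].message.content)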

    • Opinionhaver@feddit.uk · 13 hours ago

      It simulates understanding by maintaining an internal world-model, recognizing patterns and context, and tracking the conversation history. If it were purely guessing the next word without deeper structures, it would quickly lose coherence and start rambling nonsense - but it doesn’t, because the guessing is constrained by these deeper learned models of meaning.

      • Tyoda@lemm.ee · 12 hours ago

        Up to X previous words (tokens) go in, the next word (token) comes out. Where is this “world-model” that it “maintains”?
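
        That loop is easy to write down, by the way. Here is a minimal greedy-decoding sketch, assuming the Hugging Face transformers package and GPT-2 as a stand-in model; the prompt string is just an example:

          # The loop described above: previous tokens in, one new token out.
          # Assumes the Hugging Face `transformers` package; GPT-2 is a stand-in.
          import torch
          from transformers import GPT2LMHeadModel, GPT2Tokenizer

          tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
          model = GPT2LMHeadModel.from_pretrained("gpt2")
          model.eval()

          ids = tokenizer.encode("The capital of France is", return_tensors="pt")
          with torch.no_grad():
              for _ in range(10):                    # emit 10 tokens, one at a time
                  logits = model(ids).logits         # a score for every vocabulary token
                  next_id = logits[0, -1].argmax()   # greedy: take the single best token
                  ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # feed it back in
          print(tokenizer.decode(ids[0]))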

        • Opinionhaver@feddit.uk · 12 hours ago

          Where is the world model you maintain? Can you point to it? You can’t - because the human mind is very much a black box, just the same way LLMs are.

          It’s in the form of distributed patterns across billions of parameters. It’s not like the world model was handed to it - it’s an emergent consequence of massive-scale pattern learning, picked up from the data it was trained on. The only way to become good at prediction is to implicitly absorb how the world tends to behave, because otherwise it would guess wrong.

          • Umbrias@beehaw.org · 11 hours ago

            Not understanding the brain (note: the “world model” idea is something of a fabrication by the AI people; brains are distributed functional structures with many parts and roles) does not make it equivalent to “AI”. Brains and LLMs do not function in the same way; this is a lie peddled by hype dealers.

              • Umbrias@beehaw.org · 9 hours ago

                Where is the world model you maintain? Can you point to it? You can’t - because the human mind is very much a black box, just the same way LLMs are.

                Something being a black box is not even slightly a notable point of relation; it’s a statement about model detail. The only reason you’d make this comparison is if you want the human brain to seem equivalent to an LLM.

                For example, you didn’t make the claim: “The inner workings of Europa are very much a black box, just the same way LLMs are.”

                • Opinionhaver@feddit.uk · 9 hours ago

                  “The human mind is very much a black box, just the same way LLMs are” is a factually correct statement. You can’t look into a human brain for an exact explanation of why an individual did something any more than you can look into the inner workings of an LLM to explain why it said A rather than B. That my motive is to equate LLMs and human brains is not something I said - it’s something you imagined.

                  • Umbrias@beehaw.org · 9 hours ago

                    If you want to get pedantic, it’s not really factually correct: brains and LLMs are called black boxes for different reasons. But this is ultimately irrelevant. Your motive may be here or there; the rhetorical effect is the same. You are arguing, very specifically, that we can’t know LLMs don’t have similar features (a world model) to human brains because “both are black boxes” - which is wrong for a few reasons, but also plainly an equivalence. It’s rude to pretend everyone in the conversation is as illiterate as we’d need to be to not understand this point.

    • cabbage@piefed.social · 13 hours ago (edited)

      It, uhm, predicts tokens?

      If calling it a word predictor is oversimplifying, I mean.