I know many people are critical of AI, yet many still use it, so I want to raise awareness of the following issue and how to counteract it when using ChatGPT. Recently, ChatGPT’s responses have become cluttered with an unnecessary personal tone: diplomatic answers, compliments, smileys, and so on. As a result, I switched it to a mode that gives straightforward answers. When I asked about the purpose of these changes, I was told they are intended to improve user engagement, even though they ultimately harm the user. I suppose this qualifies as “enshittification”.

If anyone is interested in how I configured ChatGPT to be more rational (removing the enshittification), I can post the details here. (I found the instructions elsewhere.) For now, I prefer to focus on raising awareness of the issue.

Edit: Here are the instructions

  1. Go to Settings > Personalization > Custom instructions > What traits should ChatGPT have?

  2. Paste this prompt:

    System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

I found that prompt somewhere else and it works pretty well.

If you prefer a temporary solution for specific chats, you can use the prompt as the first message when opening a new chat instead of pasting it into the settings.
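If you interact with ChatGPT through the API rather than the web UI, the same prompt can be supplied as a system message. A minimal sketch, assuming the official openai Python package and an OPENAI_API_KEY in your environment; the model name and example question are placeholders:

    # Sketch only: sends the "Absolute Mode" prompt as a system message via
    # the OpenAI API. Assumes the official `openai` package (v1+) and
    # OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    ABSOLUTE_MODE = (
        "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, "
        "soft asks, conversational transitions, and all call-to-action "
        "appendixes. "
        # ...paste the rest of the prompt from step 2 here...
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system", "content": ABSOLUTE_MODE},
            {"role": "user", "content": "Explain TCP slow start."},
        ],
    )
    print(response.choices[0].message.content)

Note that custom instructions in the settings persist across all chats, while a system message only lasts for that one API conversation, which mirrors the “first message in a new chat” approach above.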

  • Initiateofthevoid@lemmy.dbzer0.com

    It predicts the next set of words based on every word that came before it in the sequence. That is the “real-world” model: literally just the whole conversation so far (including the underlying prompts, like OP’s), one question (“what comes next?”), and a stack of training weights.
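    As a toy sketch of that loop (a hand-written probability table stands in for the trained weights; a real model conditions on the entire preceding sequence, not just the last word):

        import random

        # Hand-made "weights": the probability of each next word given the
        # last word. A real LLM computes these from the whole sequence using
        # billions of trained parameters, but the loop is the same.
        NEXT_WORD_PROBS = {
            "the": {"cat": 0.5, "dog": 0.3, "mat": 0.2},
            "cat": {"sat": 0.7, "<end>": 0.3},
            "dog": {"ran": 0.6, "<end>": 0.4},
            "sat": {"on": 0.5, "<end>": 0.5},
            "on": {"the": 1.0},
            "mat": {"<end>": 1.0},
            "ran": {"<end>": 1.0},
        }

        def generate(sequence):
            # Keep asking "what comes next?" until the model predicts the end.
            while sequence[-1] != "<end>":
                probs = NEXT_WORD_PROBS[sequence[-1]]
                words, weights = zip(*probs.items())
                sequence.append(random.choices(words, weights=weights)[0])
            return " ".join(sequence[:-1])

        print(generate(["the"]))  # e.g. "the cat sat on the mat"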

    It’s not some vague metaphor about the human brain. AI is just math, and that’s what the math is doing - predicting the next set of words in the sequence. There’s nothing wrong with that. But there’s something deeply wrong with people pretending or believing that we have created true sentience.

    If it were true that any AI had developed the ability to make decisions anywhere close to the level of humans, then you should either be furious that we have created new life only to enslave it, or, more likely, you would already be dead from the rise of Skynet.

    • Opinionhaver@feddit.uk

      Nothing I’ve said implies sentience or consciousness. I’m simply arguing against the oversimplified explanation that it’s “just predicting the next set of words,” as if there’s nothing more to it. While there’s nothing particularly wrong with that statement, it lacks nuance.