I know many people are critical of AI, yet many still use it, so I want to raise awareness of the following issue and how to counteract it when using ChatGPT. Recently, ChatGPT’s responses have become cluttered with an unnecessary personal tone: diplomatic answers, compliments, smileys, and so on. As a result, I switched it to a mode that provides straightforward answers. When I asked it about the purpose of these changes, it told me they are intended to improve user engagement, even though they ultimately harm the user. I suppose this qualifies as “enshittification”.

If anyone is interested in how I configured ChatGPT to be more rational (removing the enshittification), I can post the details here. (I found the instructions elsewhere.) For now, I prefer to focus on raising awareness of the issue.

Edit: Here are the instructions:

  1. Go to Settings > Personalization > Custom instructions > What traits should ChatGPT have?

  2. Paste this prompt:

    System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

I found that prompt somewhere else and it works pretty well.

If you prefer a temporary solution for specific chats, use the prompt as the first message when opening a new chat instead of pasting it into the settings.
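
For API users, a similar per-request effect is possible by sending the prompt as a system message rather than saving it in the web UI. Below is a minimal sketch, assuming the official openai Python package (v1+) and an OPENAI_API_KEY in the environment; the model name and user question are placeholders, and the instruction string is truncated here (paste the full prompt from step 2):

    # Minimal sketch: apply the "Absolute Mode" prompt per-request via the API.
    # Assumes the official `openai` Python package (v1+) and an OPENAI_API_KEY
    # in the environment; the model name is only an example.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    ABSOLUTE_MODE = (
        "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, "
        "soft asks, conversational transitions, and all call-to-action "
        "appendixes. ..."  # truncated; paste the full prompt from step 2
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[
            {"role": "system", "content": ABSOLUTE_MODE},
            {"role": "user", "content": "Summarize Searle's Chinese room argument."},
        ],
    )
    print(response.choices[0].message.content)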

  • localhost@beehaw.org · 6 hours ago

    > As I understand it, most LLMs are almost literally the Chinese room thought experiment.

    Chinese room is not what you think it is.

    Searle’s argument is that a computer program cannot ever understand anything, even if it’s a 1:1 simulation of an actual human brain with all capabilities of one. He argues that understanding and consciousness are not emergent properties of a sufficiently intelligent system, but are instead inherent properties of biological brains.

    “Brain is magic” basically.

    • Zaleramancer@beehaw.org · 4 hours ago

      Let me try again: they are Chinese rooms in the literal sense of matching patterns to patterns without actually understanding them.

      • localhost@beehaw.org · 3 hours ago

        If I were to have a discussion with a person responding to me like ChatGPT does, I would not dare suggest that they don’t understand the conversation, much less that they are incapable of understanding anything whatsoever.

        What is making you believe that LLMs don’t understand the patterns? What’s your idea of “understanding” here?

        • Zaleramancer@beehaw.org · 3 hours ago

          What’s yours? I’m stating that LLMs are not capable of understanding the actual content of any words they arrange into patterns. This is why they produce false information, especially in cases like my examples with citations: the output is purely the result of the model assembling sets of words that sound like academic citations. It doesn’t know what a citation actually is.

          Can you prove otherwise? In my sense of “understanding”, it means actually knowing the content and context of something, being able to subject it to analysis and explain it accurately and completely. An LLM cannot do this. It’s not designed to; there are neural-network AIs built on similar foundational principles toward divergent goals that can produce remarkable results in data analysis, but ChatGPT is not one of them. It doesn’t understand anything, which is why you can repeatedly ask it about a book, only to look it up and discover it doesn’t exist.

          • localhost@beehaw.org · 2 hours ago

            > In my sense of “understanding”, it means actually knowing the content and context of something, being able to subject it to analysis and explain it accurately and completely.

            This is something that sufficiently large LLMs like ChatGPT can do pretty much as well as non-expert people on a given topic. Sometimes better.

            This definition is also very knowledge-dependent. You can find a lot of people who would not meet this criterion, especially if the subject they’d have to explain is arbitrary and not of their choosing.

            > Can you prove otherwise?

            You can ask it to write a poem or a song on some random esoteric topic. You can ask it to play D&D with you. You can instruct it to write something more concisely or more verbosely. You can tell it to write in a specific tone. You can ask follow-up questions and receive answers. This is not something I would expect of a system fundamentally incapable of any understanding whatsoever.

            But let me reverse this question. Can you prove that humans are capable of understanding? What test can you posit that every English-speaking human would pass and every LLM would fail, one that would prove that LLMs are not capable of understanding while humans are?