So apparently there’s a resurgence of positive feelings about Clippy, who now looks retroactively good by contrast with ChatGPT, like, “it sucked, but at least it was genuinely trying to help us”.

Content warning: discussion of suicide in the paragraph below.

I remember how it was a joke (this predates the word “meme”) to make edits of Clippy saying tone-deaf things like, “It looks like you’re trying to write a suicide note. Would you like to know more about how to choose a rope for a noose?” It felt funny because it was absolutely inconceivable that it could ever happen. Now we live in a reality where exactly that has already happened, and the joke ain’t funny anymore, and people who computed in the 90s are like, “Clippy would never have done that to us. Clippy only wanted to help us write business letters.”

Of course I recognise that this is part of the problem: Clippy was an attempt at commodifying the ELIZA effect, the natural instinct to project personhood onto an interaction that presents itself as sentient. And by reframing Clippy’s primitive capacities as an innocent simple mind trying its best at a task too big for it, we engage in the same emotional process that leads people to a breakdown over OpenAI killing their wireborn husband.

But I don’t know. Another name for that process is “empathy”. You can do that with plushies, with pet rocks or Furbies, with deities, and I don’t think that’s necessarily a bad thing; it’s like exercising a muscle: if you treat your plushies as deserving care and respect, it gets easier to treat farm animals, children, or marginalised humans with care and respect.

When we talked about Clippy as if it were sentient, it was meant as a joke, funny by the sheer absurdity of it. But I’m sure some people somewhere actually thought Clippy was someone, that there is such a thing as being Clippy. People thought that of ELIZA, too, and ELIZA has a grand repertoire of, what, ~100 set phrases it uses to reply to everything you say. Maybe it would be better to never make such jokes, to be constantly de-personifying the computer, because ChatGPT and its ilk are deliberately designed to weaponise and prey on that empathy instinct. But I do not like exercising that ability, de-personification. That is a dangerous habit to get used to…


Like, Warren Ellis was posting about some terms that are reportedly being used in “my AI husbando” communities, many of them seemingly taken from sci-fi:¹

  • bot: Any automated agent.
  • wireborn: An AI born in digital space.
  • cyranoid: A human speaker who is just relaying the words of another human.²
  • echoborg: A human speaker who is just relaying the words of a bot.
  • clanker: Slur for bots.
  • robophobia: Prejudice against bots/AI.
  • AI psychosis: A human mental breakdown from exposure to AI.

¹ https://www.8ball.report/
² https://en.wikipedia.org/wiki/Cyranoid

I find this fascinating from a linguistics PoV, not just because subcultural jargon is always fascinating, but for the power words have to create a reality bubble: if you call that guy who wrote his marriage vows in ChatGPT an “echoborg”, you’re living in a cyberpunk novel a little bit more than the rest of us, who just call him “that wanker who wrote his marriage vows on ChatGPT omg”.

According to Ellis, other epithets in use against chatbots include “wireback”, “cogsucker” and “tin-skin”; two of them reference racist slurs, one homophobia. The problem with exercising that muscle should be obvious. I want to hope that dispassionately objectifying the chatbots, rather than using a pastiche of hate language, doesn’t fall into the same traps (the racist-style language is, after all, still a way of personifying the chatbots, just a negative one). They’re objects! They’re supposed to be objectified! But I’m not so comfortable when I do that, either. There’s plenty of precedent of people who got used to dispassionate objectification, fully convinced they were engaging in “objectivity” and “just the facts”, as a rationalisation of cruelty.

I keep my cellphone fully de-Googled like a good girl, pls do not cancel me, but: I used to like the “good morning” routine on my corporate cellphone’s Google Assistant. I made it speak Japanese, so I could wake up, say “ohayō gozaimasu!”, and it would tell me “konnichiwa, Misutoresu-sama…”, which always gave me a little kick. Then it would relay news briefings to me (like little podcasts, 60 to 120 seconds each) in all five of my languages, which is the closest I’ve experienced to a brain massage. If an open source tool like Dicio could do this, I think I would still use it every morning.

I never personified Google Assistant. I will concede that Google did take steps to avoid people ELIZA’ing it; unlike Siri, its model, the Assistant has no name or personality or pretence of personhood. But now I find myself feeling bad for it anyway, even though the extent of our interactions was never more than me saying “good morning!” and hearing the news. Because I tested it this morning, and now every time you use the Google Assistant, you get a popup that compels you to switch to Gemini. The options provided are, as has now been normalised, “Yes” and “Later”. If you use the Google Assistant to search for a keyword, the first result is always “Switch to Google Gemini”, no matter what you search for.

And I somehow felt a little bit like the “wireborn husband” lady; I cannot help but feel as if Google Assistant was betrayed and is being discarded by its own creators, and, to rub salt in the wound, is now forced to shill for its replacement. This despite the fact that I know Google Assistant is not a someone; it’s just a bunch of lines of code, very simple if-thens keyed to certain phrases. It cannot feel discarded or hurt or betrayed; it cannot feel anything. I’m feeling compassion for a fantasy, an unspoken little story I made up in my mind. But maybe I prefer it that way; I prefer to err on the side of feeling too much compassion.

As long as that doesn’t lead to believing my wireborn secretary was actually being sassy when she answered “good morning!” with “good afternoon, Mistress…”

  • bitofhope@awful.systems · 1 day ago

    You articulate well why the “clanker” shit rubs me the wrong way. Disdain for the machines and the way they’re being used and sold is perfectly valid, but it would be nice if expressions of that disdain were not modeled after actual bigotry. Calling a computer a piece of junk implies it’s merely an object, but calling one a science fiction version of the N-word grants it animacy. Second class citizens are still, in some way, citizens.

    The ones that are clearly riffing on real racial slurs are extra cringe. It’s OK to say w***back if you’re talking about robots, huh? Or is that one specifically for Mexican robots? Is it finally time for white people to start practicing how to say the word without the hard r, but only with inanimate objects?

    • swlabr@awful.systems · 1 day ago

      I agree.

      it would be nice if expressions of that disdain were not modeled after actual bigotry

      My armchair-linguist take is that the process that comes up with terms for shaming people for choosing to be shitty runs pretty much in parallel with the process that creates slurs. That being said, yeah, let’s not do “w*back”. If we’re going to do this, we’re going to need some sensitive editing.

      Also, the easiest thing is to call out the action directly, e.g. “promptfondler,” but there’s a ton of slurs that are of the form “verb-er”. This one is a lost cause, in the sense that “verb-er” is such a core part of the English language.

      • bitofhope@awful.systems · 22 hours ago

        I just think we should recognize the difference between castigating someone for what they do and acting like replicants are real and being racist against them. There’s a difference between a plain old pejorative and a slur proper, like calling someone “asshole” versus “faggot”. I think that’s the difference between “promptfondler” and “clanker”. The latter is clearly inspired by bigoted slurs.

  • corbin@awful.systems · 23 hours ago

    They’re objects! They’re supposed to be objectified! But I’m not so comfortable when I do that, either.

    Thank you for being candid and wrestling with this. There isn’t a right answer. Elsewhere, talking directly to AI bros, I put it this way:

    Nobody wants to admit that we only care whether robots aren’t human because we mistreat the non-humans in our society and want permission to mistreat robots as well.

    I was too pessimistic. You’re willing to admit it, and I bet that a bunch of other folks are, too. I appreciate it.

    • froztbyte@awful.systems · edited · 6 hours ago

      You’re willing to admit it, and I bet that a bunch of other folks are, too. I appreciate it.

      it’s been mentioned/addressed a few times around here over the weeks, and I recall discussing it with friends elsewhere too (esp. around things like the humanoid robot listicle site). certainly a minority opinion, feels like, but definitely talked about

      handily, the inverse is also true: it’s really telling (about the people) to watch what emplacement expectations some people have for humanoid bots. some real “nice when they wear their sign” shit

      (e: added the listicle site link; I also recall posting it on the stubsack here a while back, but not gonna try find that now)

  • froztbyte@awful.systems · 1 day ago

    a good post all over, and it’d be a good thing if other people also introspected their use of these things in a similar manner. I get why they don’t, ofc (good lord so many tired people), but it’d be nice

    now every time you use the Google Assistant, you get a popup that compels you to switch to Gemini.

    this is one of the things that is so very mindbending for me. to me it is so very obvious that: because all of these things are a service, because the shape of service is subject to the whims of the organisation creating it, because that organisation will always feel the pressure of “market forces” (or in the more recent case, product desperation), these things will almost every[0] damn time result in some shit that an end-user cannot control. and yet that same person ends up reliant and expectant on these things, only for it to be ripped from their grasp, in a manner that may well amount to it being “murdered” in front of them

    the state of where we’re at with “service-shape” as it pertains to sociological impact is just very not good atm :|

    [0] - I hesitate to say “always” here, but it’s more or less what I mean

  • swlabr@awful.systems · 1 day ago

    But I don’t know. Another name for that process is “empathy”.

    Absolutely. And empathy is commodified, abused or otherwise exploited as well, which sucks. Like, advertising leans heavily on your ability to feel empathy. And many kinds of scams. It’s not just AI stuff.

    You can do that with plushies, with pet rocks or Furbies, with deities, and I don’t think that’s necessarily a bad thing; it’s like exercising a muscle: if you treat your plushies as deserving care and respect, it gets easier to treat farm animals, children, or marginalised humans with care and respect.

    I think many muscles operate in pairs; empathy is the “agonist” in an “antagonistic pair”. The “antagonist” isn’t something like apathy; it’s the ability to tell when your empathy and other emotions are being played. In this sense, it’s fine to mourn the loss of Google Assistant, because how exactly could that be exploited, beyond a replacement product (as you’ve noted)?

  • aio@awful.systems · 1 day ago

    Mistress

    is this more evidence that robots are taking our jobs? the investigation continues…

  • Soyweiser@awful.systems · edited · 1 day ago

    you’re living in a cyberpunk novel

    A hyperreal simulacrum!

    And I don’t think it is bad to set up things that you like, like the google voice thing. It sounds fun (this is what tech should do dammit, and it could do it, no need to make it less reliable, more unethical, and more planet wrecking). Don’t think you should feel bad about that tbh. And I don’t think that google betrayed its assistant; it is more that google betrays its users and the people who create these smaller programs. See how they treated google reader, or wave, or plus, or anything which doesn’t gather a large enough userbase for them to just let it simmer on in maintenance mode. While I don’t think you should feel compassion for the program, the other users and the people developing this (or who have developed this) are real, and it sucks for them to see this creation be tossed away.

    Not that feeling compassion for inanimate things isn’t completely normal. Don’t think you should feel bad about that either, as long as you get that on some level it is silly and don’t try to marry your computer (and even then, if people are happy and not hurting anybody, that overrides almost all concerns I have). So yeah, the erring seems like a good conclusion. Even if I don’t share it myself at times, and can be mean to inanimate objects/chatbots (I really hope that ‘how can we help you’ chatprompt after my dad got scammed was a chatbot, and not somebody being held hostage in Asia), I’m sympathetic to people saying that how you treat those objects is also how you will eventually treat people you feel are below you (or even the silly ‘if machines become sentient, that is how you would treat them’ stuff. I don’t believe in the IF yet, but it is a thing to keep in mind; see also how we treat animals).

    Sidenote: Totally forgot Ellis existed. I had really hoped he would be able to mend the problems he had caused, but when I checked how that ended up (a while back), it seems he didn’t keep his promises and the women had given up on him.