• @HardlightCereal@lemmy.world
    9 points · 9 months ago

    AI is often only trained on neurotypical cishet white men. What happens when a community of colour is full of people who don’t have the same conversational norms as white people, and the bot thinks they’re harassing each other? What happens when a neurodivergent community talk to each other in a neurodivergent way? Autistic people often get called “robotic”, will the AI feel the same way and ban them as bots? What happens when an AI is used to moderate a trans community, and flags everything as NSFW because its training data says “transgender” is a porn category?

    • @Cephirux
      6 points · 9 months ago

      I think it’s a bold assumption that AI is often only trained by neurotypical cishet white men, though it is a possibility. I do not fully understand how AI works or how companies train their AI, so I cannot comment any further. I admit AI has its downsides, but it also has its upsides, same as humans. Reddit is free to utilize AI to moderate subreddits, and users are free to complain or leave Reddit if they deem the AI more harmful than helpful.

        • @Cephirux
          10 points · 9 months ago

          Nope, just my personality. I think I have grammar mistakes too.

          • @HardlightCereal@lemmy.world
            -12 points · 9 months ago (edited)

            If your personality makes you sound like a bot, then you’re exactly the kind of person I’m talking about when I say that AI is going to ban real people for being spambots. I think you sound like a bot, and so will an AI. I am capable of critical thinking and of looking past first impressions; an AI is not.

            I think maybe you’re under the impression that computers run on perfect logic. Machine learning systems actually run on pure instinct. You are more capable of logical reasoning than an ML program is. You’re less capable of it than a traditional algorithmic program, but you’re more capable than an AI.

            • @Cephirux
              -1 points · 9 months ago

              I admit I might be biased towards AI, because I believe AI isn’t biased: it doesn’t have any desires, to sleep, breathe, eat, etc. Everyone is capable of critical thinking; the question is whether it’s any good. And since AI is trained by humans and humans have critical thinking, I don’t see why AI cannot develop it, although it may not be as good as some people’s.

              • @9point6@lemmy.world
                11 points · 9 months ago (edited)

                All AI has to be biased: that bias is the training data, and (inherently biased) humans select the training set. Funnily enough, one of the learned parameters on each node of a neural net is even literally called a bias!

                If any AI wasn’t biased it would simply produce unintelligible garbage.
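
                (Rough sketch of what I mean, in toy Python with made-up numbers, not any real library: each node computes a weighted sum of its inputs plus a learned offset that is literally named the bias, and every one of those numbers comes from whatever training set the humans picked.)

                ```python
                # One artificial neuron: output = step(weights · inputs + bias).
                # "bias" here is a learned offset rather than the statistical kind,
                # but both are shaped entirely by the (human-chosen) training data.

                def neuron(inputs, weights, bias):
                    z = sum(w * x for w, x in zip(weights, inputs)) + bias
                    return 1 if z > 0 else 0  # fire / don't fire

                # illustrative parameters a training run might have produced
                weights = [0.8, -0.3, 0.5]
                bias = -0.2

                print(neuron([1.0, 0.0, 1.0], weights, bias))  # -> 1
                ```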

              • @HardlightCereal@lemmy.world
                6 points · 9 months ago

                I’m a computer scientist, and I will tell you right now that AI is biased. Here’s how you train a neural network AI: you arrange a whole lot of neurons, you reinforce the connections between the neurons when it succeeds, and you weaken the connections when it fails. That’s the same way your brain works. When you eat food or have sex or do something else beneficial to survival, your neural connections are strengthened. An ANN AI is driven by its training directive just like you’re driven to eat or have sex. It develops the same biases.
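
                (A toy illustration of that training loop, in hypothetical Python rather than anything Reddit actually runs: connections that were active when the network succeeded get strengthened, and the ones active when it failed get weakened. Real systems use gradient descent, but the strengthen-on-success idea is the same.)

                ```python
                import random

                # Sketch of reinforcement-style training: nudge "connection strengths"
                # up after a success and down after a failure. The training directive
                # (what counts as success) is exactly where the bias comes in.

                def update(weights, inputs, succeeded, lr=0.1):
                    for i, x in enumerate(inputs):
                        if x:  # only connections that actually fired get adjusted
                            weights[i] += lr if succeeded else -lr
                    return weights

                weights = [random.uniform(-1, 1) for _ in range(3)]
                weights = update(weights, [1, 0, 1], succeeded=True)   # strengthened
                weights = update(weights, [0, 1, 1], succeeded=False)  # weakened
                print(weights)
                ```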

                And since AI is trained by humans and humans have critical thinking, I don’t see why AI cannot develop one

                This is nonsense. Humans invented the horse-drawn wagon. Is a wagon ever going to develop critical thinking? No. AI isn’t a child with boundless potential; it’s a tool, just like a wagon. If humans want AI to have critical thinking, they’re going to have to build it. And no human has ever succeeded at that yet. The AI that Reddit is using does not have it. And since the AI is a profitable tool in its current state, it will probably not be improved to the level of a human.

                • @btaf45@lemmy.world (OP)
                  4 points · 9 months ago

                  I’m a computer scientist, and I will tell you right now that AI is biased.

                  AI is also constantly wrong.

                  ChatGPT lies about science.

                  ChatGPT lies about history

                  ChatGPT lies about politics

                  ChatGPT lies about nonexistent programming libraries

                  ChatGPT lies about nonexistent legal cases

                  ChatGPT lies about nonexistent criminal backgrounds

                  The only time I would trust ChatGPT is when there are no right and wrong answers.

              • jungle
                6 points · 9 months ago

                That’s not how AI works. It’s exactly as biased as the humans who produced the content on which it is trained.

                That said, I also don’t believe these models have been trained exclusively on white straight men’s conversations, that would take some effort to achieve.

                More likely, it’s been trained on internet forums, so it’s similar to what it’s being asked to moderate. And as long as there’s a human at the other end of an appeal, it should be fine.

              • @AA5B@lemmy.world
                2 points · 9 months ago (edited)

                All AI does is look for patterns to complete. You train it on some set of data, such as Reddit, which can be biased; you set some sort of feedback for whether it makes the right choice, which can be biased; and it finds whatever patterns it thinks it sees, which may also be biased, and applies them to new situations.
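
                (For instance, a made-up toy example rather than how Reddit’s tool actually works: a pattern-matcher trained on comments that biased humans already labelled will happily reproduce whatever bias is baked into those labels.)

                ```python
                from collections import Counter

                # Toy "moderator" that learns which words predicted removal in the
                # training set. If the labellers over-flagged one community's slang,
                # the learned pattern inherits that bias and applies it to new comments.

                training = [
                    ("this is rubbish mate", "removed"),   # hypothetical labels chosen
                    ("great point thanks", "kept"),        # by (inherently biased) humans
                    ("absolute rubbish take", "removed"),
                    ("thanks for sharing", "kept"),
                ]

                counts = {"removed": Counter(), "kept": Counter()}
                for text, label in training:
                    counts[label].update(text.split())

                def moderate(comment):
                    # pick whichever label's training words overlap the comment the most
                    scores = {label: sum(c[w] for w in comment.split()) for label, c in counts.items()}
                    return max(scores, key=scores.get)

                print(moderate("rubbish weather today"))  # -> "removed"
                ```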

        • Chaos
          -1 points · 9 months ago (edited)

          Just checked this with an AI detector and it said human. Bot 1, human 0. This sentence kinda undermined your point for keeping humans only.

    • Lvxferre
      0 points · 8 months ago

      AI is often only trained on neurotypical cishet white men.

      Can you back up this claim? Unless you’re just being an assumer, or you expect people to be suckers/gullible/“chrust” you.

      What happens when a community of colour is full of people who don’t have the same conversational norms as white people

      In this statement alone, there are not one but two instances of a racist discourse:

      1. Conflating culture (conversational norms) with race.
      2. Singling out “white people”, but lumping together the others under the same label (“people of colour”).

      You are being racist. What you’re saying there boils down to “those brown people act in weird ways because they’re brown”. Don’t.

      What happens when a neurodivergent community talk to each other in a neurodivergent way? Autistic people often get called “robotic”, will the AI feel the same way and ban them as bots?

      The reason why autists are often called “robotic” has to do with voice prosody. It does not apply to text.

      And the very claim that you’re making - that autists would write in a way that an “AI” would mistake them for bots - sounds, frankly, dehumanising and insulting towards them, and it reinforces the stereotype that they’re robotic.

      [From another comment] Did you write your comment with chatgpt?

      Passive aggressively attacking the other poster won’t help.


      Odds are that you’re full of good intentions writing the above, but frankly? Go pave hell back in Reddit; you’re being racist and dehumanising.