• @Cephirux
    link
    6
    1 year ago

    I think it’s a bold assumption to say that AI is often only trained by neurotypical cishet white men, though it is a possibility. I do not fully understand how AI works or how companies train their AI, so I cannot comment further. I admit AI has its downsides, but it also has its upsides, same as humans. Reddit is free to use AI to moderate subreddits, and users are free to complain or leave Reddit if they deem its AI more harmful than helpful.

      • @Cephirux
        link
        10
        1 year ago

        Nope, just my personality. I think I make grammar mistakes too.

        • @HardlightCereal@lemmy.world
          link
          fedilink
          English
          -12
          edit-2
          1 year ago

          If your personality makes you sound like a bot, then you’re exactly the kind of person I’m talking about when I say that AI is going to ban real people for being spambots. I think you sound like a bot, and so will AI. I am capable of critical thinking and looking past first impressions, an AI is not.

          I think maybe you’re under the impression that computers run on perfect logic. Machine learning systems actually run on pure instinct. You are more capable of logical reasoning than an ML program is. You’re less capable than a traditional algorithmic program, but you’re more capable than an AI.

          • @Cephirux
            link
            -1
            1 year ago

            I admit I might be biased towards AI, because I believe AI isn’t biased: it doesn’t have any desires to sleep, breathe, eat, etc. Everyone is capable of critical thinking; the question is whether it’s any good. And since AI is trained by humans, and humans have critical thinking, I don’t see why AI cannot develop it, although it may not be as good as some people’s.

            • @9point6@lemmy.world
              link
              fedilink
              11
              edit-2
              1 year ago

              All AI has to be biased: that bias is the training data, and (inherently biased) humans select the training set. Funnily enough, some of the parameters on each node of a neural net are even called biases!
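
              The “bias” parameter the comment mentions is a literal number inside each neuron. A minimal sketch of a single neuron (illustrative only, not any specific framework):

              ```python
              # A single artificial neuron: output = activation(weights . inputs + bias).
              # The "bias" here is a learned parameter that shifts the firing
              # threshold -- a different thing from (though shaped by) bias in
              # the training data.

              def neuron(inputs, weights, bias):
                  # Weighted sum of the inputs, plus the bias term
                  z = sum(w * x for w, x in zip(weights, inputs)) + bias
                  # Simple step activation: fire only if the sum clears zero
                  return 1 if z > 0 else 0

              # Same inputs and weights; only the bias changes the decision
              print(neuron([1, 1], [0.5, 0.5], 0.0))   # -> 1 (fires)
              print(neuron([1, 1], [0.5, 0.5], -2.0))  # -> 0 (suppressed)
              ```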

              If any AI wasn’t biased it would simply produce unintelligible garbage.

            • @HardlightCereal@lemmy.world
              link
              fedilink
              English
              6
              1 year ago

              I’m a computer scientist, and I will tell you right now that AI is biased. Here’s how you train a neural network AI: you arrange a whole lot of neurons, you reinforce the connections between the neurons when it succeeds, and you weaken the connections when it fails. That’s the same way your brain works. When you eat food or have sex or do something else beneficial to survival, your neural connections are strengthened. An ANN AI is driven by its training directive just like you’re driven to eat or have sex. It develops the same biases.
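
              The “strengthen on success, weaken on failure” loop described above can be sketched in a few lines. This is a toy reward-based update (real systems use gradient descent over millions of parameters; all names here are illustrative):

              ```python
              # Toy version of "reinforce the connections when it succeeds,
              # weaken them when it fails". Illustrative only.

              def predict(weights, inputs):
                  # Fire if the weighted sum of inputs is positive
                  return 1 if sum(w * x for w, x in zip(weights, inputs)) > 0 else 0

              def update(weights, inputs, succeeded, lr=0.1):
                  # Strengthen the connections that were active on success,
                  # weaken them on failure
                  sign = 1.0 if succeeded else -1.0
                  return [w + sign * lr * x for w, x in zip(weights, inputs)]

              weights = [0.2, 0.2]
              inputs = [1, 1]
              out = predict(weights, inputs)  # the network fires on this input
              # Pretend that firing was the wrong call: weaken the active links
              weights = update(weights, inputs, succeeded=False)
              print(weights)  # both weights drop toward 0.1
              ```

              The network never “understands” why an answer was right or wrong; it only drifts toward whatever its training signal rewarded, which is the point being made here.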

              And since AI is trained by humans and humans have critical thinking, I don’t see why AI cannot develop one

              This is nonsense. Humans invented the horse drawn wagon. Is a wagon ever going to develop critical thinking? No. AI isn’t a child with boundless potential, it’s a tool, just like a wagon. If humans want AI to have critical thinking, they’re going to have to build it. And no human has ever succeeded at that yet. The AI that Reddit is using does not have it. And since the AI is a profitable tool in its current state, it will probably not be improved to the level of a human.

              • @btaf45@lemmy.worldOP
                link
                fedilink
                4
                1 year ago

                I’m a computer scientist, and I will tell you right now that AI is biased.

                AI is also constantly wrong.

                ChatGPT lies about science.

                ChatGPT lies about history.

                ChatGPT lies about politics.

                ChatGPT lies about nonexistent programming libraries.

                ChatGPT lies about nonexistent legal cases.

                ChatGPT lies about nonexistent criminal backgrounds.

                The only time I would trust ChatGPT is when there are no right and wrong answers.

            • jungle
              link
              fedilink
              6
              1 year ago

              That’s not how AI works. It’s exactly as biased as the humans who produced the content it was trained on.

              That said, I also don’t believe these models have been trained exclusively on white straight men’s conversations, that would take some effort to achieve.

              More likely, it’s been trained on internet forums, so on content similar to what it’s being asked to moderate. And as long as there’s a human at the other end of an appeal, it should be fine.

            • @AA5B@lemmy.world
              link
              fedilink
              2
              edit-2
              1 year ago

              All AI does is look for patterns to complete. You train it on some set of data, such as Reddit comments, which can be biased; you set up some feedback for whether it makes the right choice, which can be biased; and it applies whatever patterns it thinks it sees, which may be biased, to new situations.
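
              A toy illustration of how bias in the labels flows straight into the model: a “moderator” that flags a comment if its words mostly appeared in previously removed comments. Everything here is hypothetical example data.

              ```python
              from collections import Counter

              def train(examples):
                  # examples: list of (comment_text, was_removed) pairs,
                  # labelled by (possibly biased) human moderators
                  removed, kept = Counter(), Counter()
                  for text, was_removed in examples:
                      (removed if was_removed else kept).update(text.lower().split())
                  return removed, kept

              def flag(model, text):
                  removed, kept = model
                  # Flag if the words lean toward the "removed" pile
                  score = sum(removed[w] - kept[w] for w in text.lower().split())
                  return score > 0

              # Whatever judgements the human labels encode, the model
              # reproduces wholesale -- bias included
              model = train([("buy cheap pills", True),
                             ("nice photo thanks", False),
                             ("cheap flights here", True)])
              print(flag(model, "cheap pills"))  # -> True
              print(flag(model, "thanks"))       # -> False
              ```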

      • Chaos
        link
        fedilink
        -1
        edit-2
        1 year ago

        Just checked this with an AI detector and it said human. Bot 1, human 0. This sentence kinda undermines your point for keeping humans only.