• ShaunaTheDead
    12
    1 year ago

    I read about an early AI study where researchers trained a model to predict whether a pictured animal was a dog or a wolf. It got really good at detecting wolves, but when they analyzed how it was making that call, they found it wasn’t looking at the animal at all; it was checking whether there was a lot of snow on the ground. If there was, it said wolf; if there wasn’t, it said dog.

    The problem was with the data set used to train the AI. It was doing exactly what it was told. That’s the big problem with AI: it does exactly what we tell it to do, but people are hilariously bad at describing exactly the result they want down to the absolute finest level of detail.

    • Cethin
      English
      2
      1 year ago

      I would describe it more as giving the results we’re asking for rather than doing what we tell it to, but that’s a little bit of too much semantics probably. We mostly don’t tell it what to do. We just give it data with some labels and it tries to generate reasons for those labels basically. It’s essentially the issue humans have of “correlation does not equal causation” except with no awareness of this and significantly worse.