Seven years later, Kyle Chayka's argument is that AirSpace has turned into what he now calls Filterworld, his term for how algorithmic recommendations have become one of the dominant forces in culture and, as a result, have pushed society's tastes to converge on a kind of soulless sameness.

  • Robert Rothenberg
    9 months ago

    @souperk @pelespirit

    > For example, nuclear energy is a neutral thing on its own, when used to generate power it’s (arguably) a net positive…

    It’s more complicated than that.

    Mining uranium has side effects, usually for poorer communities.

    The fuel has to be handled safely, as does the waste, which has to be safely stored for thousands of years.

    Nuclear plants have to be designed and built well.

    The most benign democracies have made a mess of those issues.

    1/n

    • Robert Rothenberg
      9 months ago

      @souperk @pelespirit

      > The same goes for algorithms, when they are used to save lives at hospitals it’s a net positive

      Again, more complicated.

      Are the algorithms mathematically sound, or just AI/machine learning magic fairy dust?

      Do the algorithms have implicit biases against poor people, or those with darker skin or who live in certain postcodes?

      2/n

      • @souperk@reddthat.com
        9 months ago

        > Again, more complicated.

        It doesn’t have to be.

        > Are the algorithms mathematically sound, or just AI/machine learning magic fairy dust?

        MAB (multi-armed bandit) algorithms lie somewhere in the middle. They are a mathematically sound way to explore the unknown and make reasonable decisions given whatever context is available.

        There have been a few hospital trials with success, but progress is slow and funding is low. There are a few really interesting papers if you are interested to read more.
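        For readers unfamiliar with MABs, here is a minimal epsilon-greedy sketch in Python. All names, payout rates, and constants are illustrative; this is not the design of any hospital trial mentioned above, just the basic explore/exploit idea.

```python
import random

def epsilon_greedy_bandit(pull, n_arms, steps=1000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: with probability eps explore a random arm,
    otherwise exploit the arm with the best running mean reward."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    values = [0.0] * n_arms  # running mean reward per arm
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = pull(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]    # update mean
    return values, counts

# Toy environment: arm 2 pays off most often, but the algorithm
# does not know that in advance.
true_rates = [0.2, 0.5, 0.8]
env_rng = random.Random(42)
values, counts = epsilon_greedy_bandit(
    lambda a: 1.0 if env_rng.random() < true_rates[a] else 0.0, n_arms=3)
```

        After enough steps the pull counts concentrate on the best arm, which is exactly the "make reasonable decisions while still exploring" trade-off the comment describes.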

        > Do the algorithms have implicit biases against poor people, or those with darker skin or who live in certain postcodes?

        In a sense, it's no different from laws that discriminate against people of color or other marginalized communities. The fact that a bunch of super-privileged lawmakers create laws that disproportionately harm us does not mean that the concept of law is flawed.

        You've got to ask yourself why the algorithm was given that information in the first place, and, more importantly, who gave it.

        What we call an algorithm is actually two things: a set of instructions (the actual algorithm) and a set of parameters. The instructions explain how to use those parameters to make a decision. The parameters may or may not be biased; it all depends on the process used to generate them.
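        As a toy illustration of that split (every name and number here is hypothetical): the same instructions can make fair or biased decisions depending purely on the parameters they are handed.

```python
def score_applicant(features, weights):
    """The 'instructions': a fixed rule that weights each feature.
    The bias, if any, lives entirely in the weights (the parameters)."""
    return sum(f * w for f, w in zip(features, weights))

# Hypothetical features: (income_score, credit_score, lives_in_postcode_X)
applicant = [0.8, 0.7, 1.0]

# Same instructions, different parameters -> very different decisions.
fair_weights = [0.5, 0.5, 0.0]     # ignores postcode entirely
biased_weights = [0.3, 0.2, -0.9]  # heavily penalizes the postcode

score_fair = score_applicant(applicant, fair_weights)
score_biased = score_applicant(applicant, biased_weights)
```

        The function never changes; only the parameters do, which is why asking where the parameters came from matters more than inspecting the instructions.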

        AI in particular uses a process called training, in which people make decisions and another algorithm adjusts the parameters so those decisions can be generalized and repeated by the AI. When biased people make biased decisions, they train an AI to make biased decisions.
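        A minimal sketch of that idea (purely illustrative, not any real training procedure): a single threshold parameter is nudged until it reproduces the human decisions it is shown, so biased labels produce a biased parameter.

```python
def train_threshold(examples, lr=0.1, epochs=50):
    """Toy 'training': adjust one parameter (an approval threshold)
    until it reproduces the human decisions in `examples`,
    given as (score, human_said_yes) pairs."""
    threshold = 0.5
    for _ in range(epochs):
        for score, said_yes in examples:
            predicted_yes = score >= threshold
            if predicted_yes and not said_yes:
                threshold += lr   # too lenient: raise the bar
            elif not predicted_yes and said_yes:
                threshold -= lr   # too strict: lower the bar
    return threshold

# Unbiased labels: approve anyone scoring above 0.5.
fair_labels = [(0.4, False), (0.6, True), (0.7, True)]
# Biased labels: same scores, but the human rejected the 0.6 applicant.
biased_labels = [(0.4, False), (0.6, False), (0.7, True)]

t_fair = train_threshold(fair_labels)
t_biased = train_threshold(biased_labels)
```

        The instructions are identical in both runs; the biased human decisions alone push the learned parameter higher, and the trained model then repeats that bias automatically.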

        Unfortunately, that's our reality: biased people make biased decisions, and as a result we have biased laws and biased algorithms.

        By the way, this is what the author calls algorithm cleanse, and it's bureaucracy supercharged. Why hire someone to reject applicants of color when you can build an algorithm to do it? Making a legal case against that is much harder, and the legal system isn't ready to understand the nuances of such a case.

        However, in contrast to laws, we marginalized people can create our own "algorithms" that are, to the best of our efforts, not biased. The fediverse is living proof of this. Why fight the system when we can make our own?