• @treefrog@lemm.ee

    I just heard a news report about OpenAI developing technology that makes deepfakes easier to create. They realized this could cause harm, so they're only releasing it to a few educational institutions.

    This is harm reduction. And I realize corporate ethics is something of an oxymoron, but something along these lines is what the original commenter meant by a harm reduction approach from Microsoft. If they're aware their technology is going to harm democracy, they have an ethical duty to reduce that harm. Unfortunately, corporations often put their ethical duty to increase shareholder value first. That doesn't mean they don't have other ethical responsibilities too.

    • @elshandra@lemmy.world

      I suppose it could be harm reduction. Like peeling a bandaid off slowly instead of ripping it off.

      They’re here. They might not be everywhere yet, but they’re here to stay, as much as photoshopped images or trick photography are. Just more lies to hide the truth.

      All we can do now is get better at dealing with them.

      • @treefrog@lemm.ee

        I hear you about it just being an evolution of the propaganda machine. And I think it’s going to reveal cracks in the system, ripping the bandaid off faster than climate change, which is the slow peel we’re all dealing with already.

        Harm reduction would be investing in and lobbying for government regulation. Usually that’s seen as a disaster for business, but in this case it would throttle competitors too, and it could save a lot of lives, because this sort of automated propaganda is going to create a lot of fascist regimes all over the planet, propped up by the illusion of democracy.

        More so than it already is.