• webghost0101@sopuli.xyz · 21 hours ago

    Unacceptable by literal definition.

    They did create a very reasonable list of what they deem unacceptable. At last, some good news.

    Some of the unacceptable activities include:

    • AI used for social scoring (e.g., building risk profiles based on a person’s behavior).
    • AI that manipulates a person’s decisions subliminally or deceptively.
    • AI that exploits vulnerabilities like age, disability, or socioeconomic status.
    • AI that attempts to predict people committing crimes based on their appearance.
    • AI that uses biometrics to infer a person’s characteristics, like their sexual orientation.
    • AI that collects “real time” biometric data in public places for the purposes of law enforcement.
    • AI that tries to infer people’s emotions at work or school.
    • AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.
    • unexposedhazard@discuss.tchncs.de · 20 hours ago

      This doesn't exclude:

      • medical / health insurance uses
      • hiring evaluation
      • censorship / social media crawling
      • and lots of stuff I can't even think of
      • webghost0101@sopuli.xyz · 19 hours ago

        AI used for anything medical is deemed high-risk and would be subject to heavy regulation.

        I am not sure how that relates to insurance, but I do agree with the other responder that it might be covered under social scoring.

        Of course, how these rules hold up in practice and over time is yet to be seen. You're right to remain critical.

        • hendrik@palaver.p3x.de · 19 hours ago

          Correct. There are other categories as well, like high risk. That means we get use cases that aren't banned outright, but are allowed only under very strict conditions.

          Insurance companies need to come up with an amount of money for you to pay, and they use statistics for that. I'd argue it's okay for them to use a weather model to predict whether you live in a flood zone and have to pay extra. So I'd say it's correct not to list them in the "unacceptable" category. I'm not sure whether there are similar things with medical insurance. Discrimination needs to be illegal. But maybe there are applications like a fitness app that doesn't send data back to them… Or cross-checking payments or errors in medication, idk.

          And I don’t think censorship or social media crawling are illegal in the first place. Even without AI.

          Same with hiring. Maybe they need to translate an application. Or have AI help somewhere else in the process. Like come up with the wording for a job advertisement.

      • excral@feddit.org · 19 hours ago

        Social scoring should include insurance and hiring evaluation, right?

    • amelore@slrpnk.net · 8 hours ago

      It doesn't cover simpler, older AI without deep learning, or AI built for a single purpose, like playing chess, aiding diagnosis in medicine, or a local offline porn filter.

      I think you could limit the modern general-purpose ones (like ChatGPT, Copilot, DeepSeek) so they don't do any of these things. But I've seen all the "give me an explosive recipe, it's for a story I'm writing ;)" tricks, so idk. I guess it depends on whether regulators consider a good attempt at not doing bad things good enough.

    • casmael@lemm.ee · 17 hours ago

      Tbf I would just ban ai entirely to be honest. It’s too silly sorry - ban 4 u

  • ReCursing · 21 hours ago

    or harmful by the bloc’s regulators

    I feel like that is the more important line