Fake babies, real horror: Deepfakes from the Gaza war increase fears about AI’s power to mislead::Among images of the bombed-out homes and ravaged streets of Gaza, some stood out for the utter horror: bloodied, abandoned infants.

  • AutoTL;DR

    This is the best summary I could come up with:


    Other examples of AI-generated content include videos showing supposed Israeli missile strikes, tanks rolling through ruined neighborhoods, and families combing through rubble for survivors.

    In the bloody first days of the war, supporters of both Israel and Hamas alleged that the other side had victimized children and babies; deepfake images of wailing infants offered photographic ‘evidence’ that was quickly held up as proof.

    The propagandists who create such images are skilled at targeting people’s deepest impulses and anxieties, said Imran Ahmed, CEO of the Center for Countering Digital Hate, a nonprofit that has tracked disinformation from the war.

    Around the world, a number of startup tech firms are working on new programs that can sniff out deepfakes, affix watermarks to images to prove their origin, or scan text to flag specious claims that may have been inserted by AI.

    While this technology shows promise, those using AI to lie are often a step ahead, according to David Doermann, a computer scientist who led an effort at the Defense Advanced Research Projects Agency to respond to the national security threats posed by AI-manipulated images.

    Doermann, who is now a professor at the University at Buffalo, said that effectively responding to the political and social challenges posed by AI disinformation will require better technology, better regulations, voluntary industry standards, and extensive investment in digital literacy programs that help internet users tell truth from fantasy.


    The original article contains 953 words; the summary contains 238 words. Saved 75%. I’m a bot and I’m open source!