Brin’s “We definitely messed up”, said at an AI “hackathon” event on 2 March, followed a slew of social media posts showing Gemini’s image generation tool depicting a variety of historical figures – including popes, founding fathers of the US and, most excruciatingly, German second world war soldiers – as people of colour.

  • AutoTL;DR

    🤖 I’m a bot that provides automatic summaries for articles:


    Brin’s comments, at an AI “hackathon” event on 2 March, follow a slew of social media posts showing Gemini’s image generation tool depicting a variety of historical figures – including popes, founding fathers of the US and, most excruciatingly, German second world war soldiers – as people of colour.

    The pictures, as well as Gemini chatbot responses that vacillated over whether libertarians or Stalin had caused the greater harm, led to an explosion of negative commentary from figures such as Elon Musk who saw it as another front in the culture wars.

    But it follows a similar pattern to a previously uncovered system prompt for OpenAI’s Dall-E, which instructed the model to “diversify depictions of ALL images with people to include DESCENT and GENDER for EACH person using direct term”.

    Dame Wendy Hall, a professor of computer science at the University of Southampton and a member of the UN’s advisory body on AI, says Google was under pressure to respond to OpenAI’s runaway success with ChatGPT and Dall-E and simply did not test the technology thoroughly enough.

    Hall says Gemini’s failings will at least help focus the AI safety debate on immediate concerns such as combating deepfakes rather than the existential threats that have been a prominent feature of discussion around the technology’s potential pitfalls.

    Dan Ives, an analyst at the US financial services firm Wedbush Securities, says Pichai’s job may not be under immediate threat but investors want to see multibillion-dollar AI investments succeed.


    Saved 78% of original text.