- cross-posted to:
- technology@lemmy.world
This is the best summary I could come up with:
Those are basically a coin flip — and the outcomes are off in the distance, as those cases wind their way through the legal system.
A bigger problem right now is that AI systems are really good at making just-believable-enough fake images and audio — and with tools like OpenAI’s new Sora, maybe video soon, too.
So today, Verge policy editor Adi Robertson joins the show to discuss how AI might supercharge misinformation and lies in an election that’s already as contentious as any in our lifetimes — and what might be done about it.
Those bursts of attention have receded, with little or nothing to show for it — and in the case of Twitter, a wholesale retreat from any moderation at all as Elon Musk turned the platform into what’s now X.
And X is where fake pornographic images of Taylor Swift have been most widely distributed — a preview of the problems facing every major platform.
But these problems aren’t going away, and it’s important to take stock of how AI companies, social media platforms, and policymakers are trying to deal with them — and what we as individuals should keep in mind as the election cycle kicks into high gear.
The original article contains 474 words, the summary contains 205 words. Saved 57%. I’m a bot and I’m open source!