doesn’t it follow that AI-generated CSAM can only be generated if the AI has been trained on CSAM?
This article even explicitly says as much.
My question is: why aren’t OpenAI, Google, Microsoft, Anthropic… sued for possession of CSAM? It’s clearly in their training datasets.
How do they know that? Did the pedos text them to let them know? Sounds very made up.
The article says “remixed” images of old victims have cropped up.
And again, what’s the source? The great thing about articles on CSAM is that you don’t need sources: everyone just assumes you have them but obviously can’t share them.
Did at least one pedo try that? Most likely yes. Is it the best way to get good quality fake CSAM? Not at all.
I don’t know, man. But I assume organizations concerned with child abuse are all over that shit and checking it out. I’m no specialist in CSAM, but I assume an article that says old victims show up in previously unseen images isn’t lying, because why would it? It’s not like Wired is a pedo outlet…
Also, it was just a question. I’m not trying to convince you of anything 🙂
I think that article lacks nuance. It’s a bit baity and rehashes the usual talking points without contextualizing the numbers, what’s actually happening out there, the consequences, or the harm. That makes me believe the author just wants to push a point across.
But I’ve yet to read a good article on this; most are like this one. And yeah, are a few thousand images really a lot in the context of the crime happening online? Where do these numbers come from, and what’s with the claim that there are more actual pictures out there? I seriously doubt that at this point, given how easy it is to generate images. And what are the consequences of all this? Does it mean an increase or a decrease in actual abuse? And lots of services have implemented filters… Are the platforms doing their due diligence? Is this a general societal issue, or just criminals doing crime?