Doesn’t it follow that AI-generated CSAM can only be generated if the AI has been trained on CSAM?
This article even explicitly says as much.
My question is: why aren’t OpenAI, Google, Microsoft, Anthropic… sued for possession of CSAM? It’s clearly in their training datasets.
The article is bullshit written to stir shit up for clicks.
You don’t need a single CSAM image to train an AI to make fake CSAM. Generative models can combine concepts they learned separately, so a model that has seen images of children and separately seen adult explicit material can produce such imagery without ever having seen the combination. In fact, if you trained on the images from the database of known CSAM, you’d get shit results, because most of them are very old and the quality most likely sucks.
Additionally, in another comment you mention that it’s users training their models locally, which answers your second question of why the companies aren’t sued: they don’t have CSAM in their training datasets.