Doesn’t it follow that AI-generated CSAM can only be produced if the AI has been trained on CSAM?

This article even explicitly says as much.

My question is: why aren’t OpenAI, Google, Microsoft, Anthropic… sued for possession of CSAM? It’s clearly in their training datasets.

  • hendrik@palaver.p3x.de · 22 hours ago

    It’s certainly technically possible. I suspect these AI models just aren’t good at it. So the pedophiles need to train them on actual images.

    I can imagine, for example, that the AI doesn’t know what puberty looks like, since it hasn’t in fact seen many naked children. It would try to infer from all the internet porn it has seen and draw any female with big breasts, regardless of age. And that’s not how children actually look.

    I haven’t tried, since it’s illegal where I live. But that’s my suspicion as to why pedophiles bother training their own models.


    (Edit: If that’s the case, it would mean the tech companies are more or less innocent. At least of this.

    And note that a lot of the CSAM talk is FUD (fear, uncertainty and doubt); I usually see it in the context of someone pushing for total surveillance of the population. The problem is far less pronounced in my experience than some people make it out to be. I’ve been around on the internet, and I haven’t seen any real pictures yet. I’m glad I haven’t, but that makes me believe you have to actively look for that kind of stuff, or be targeted somehow.

    And I think a bit more nuance would help. This article also lumps together fictional drawings and real pictures. I think that’s counterproductive, since one is a heinous crime with real victims. And sure, drawing nude anime children or de-aging celebrities isn’t acceptable either (depending on legislation), but I think we need to differentiate here. Real pictures are on an entirely different level and should have far more severe consequences. If we mix everything together, we kind of take away from that.)