Europe takes first step to banning AI-generated child sexual abuse images
reuters.com | 17 points by 01-_- 3 hours ago
Is there an actual case for outlawing this that isn't based on moral panic? Wouldn't you actually want people to generate those images with AI, so they are less incentivized to pay for the real thing?
As long as you don't need actual CSAM in the training data and the generated images are different enough from any real person (both of which seem technologically feasible), that seems like a good thing.
Or is there any indication that the availability of CSAM actually increases the likelihood that people act on it later?
We don't have (and I doubt we will ever have) tools for distinguishing between real and AI-generated images with guaranteed 100% accuracy (i.e. 0% false negative and false positive rates).
Given that, I don't see how you can allow AI-generated CSAM without effectively making "real" CSAM images unprosecutable.
You could have government-signed models + programs that are approved for generating CP (not CSAM). It's legal if the signature checks out. Something like https://contentauthenticity.org/ but for verifying that something is definitely made by AI.
(You need to sign both the models and the programs to make sure there's no img2img.)
You don’t even need to give them a model, just generate some images and publish them. If you find those images, it’s fine, if you find anything else, arrest them.
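The proposal above amounts to a provenance scheme: an approved generator signs its outputs, and a verifier later checks the signature. As a toy sketch only, here is the shape of that check in Python. A real deployment (e.g. C2PA / contentauthenticity.org) would use public-key signatures so the verifier never holds the signing secret; HMAC with a shared key is used here purely as a stdlib stand-in, and all names and the key are hypothetical.

```python
import hashlib
import hmac

# Hypothetical secret held by the approved signer (a real scheme would use
# an asymmetric key pair, with only the public key distributed to verifiers).
SIGNING_KEY = b"authority-held-secret"

def sign_output(image_bytes: bytes) -> bytes:
    """Produce a tag binding these exact bytes to the approved generator."""
    return hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).digest()

def verify_output(image_bytes: bytes, tag: bytes) -> bool:
    """Check the tag; any modification of the bytes invalidates it."""
    expected = hmac.new(SIGNING_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

image = b"\x89PNG...stand-in image bytes"
tag = sign_output(image)
print(verify_output(image, tag))            # untampered output verifies
print(verify_output(image + b"x", tag))     # any edit breaks verification
```

Note the limitation raised in the sibling comment: signing the model and program outputs says nothing about img2img pipelines unless the whole generation path is attested, which is why the signature has to cover the approved program, not just the final image.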
We can't agree on weed or safe injection sites, you think we'll have government approved CP generation?
So you think that currently, until this law is implemented, CSAM is effectively unprosecutable because people can just claim they generated the image with AI?
I think there is a >0% probability that an individual case becomes unprosecutable (or at least that the image evidence becomes much less useful) if the person in question actively starts generating CSAM with AI in order to cast doubt on the legitimacy of any real image the prosecutor wants to use as evidence.
The standard is beyond reasonable doubt, and I think that's going to become an increasingly difficult bar to clear if the AI generated versions (either made for their own case or as decoys) are allowed to remain legal.
We really need to be able to pass laws faster. 2026 is going to be an insane year for multimodal models, and laws are simply not keeping up.
I don’t understand why it needs to be banned. If it is artificial, whether it is a story someone wrote, an animation someone drew, or a photo-realistic AI-generated thing, it’s just not real. There is no harm committed against a victim. It feels like a moralistic crusade, adjacent to age verification laws that are just backdoor porn bans (freely admitted by the conservatives who support such laws).
The bigger issue is that these types of bans feel a lot more like banning speech than banning a real crime, and the precedent it sets can end up being used in far-reaching ways. That’s how it always is.
> If it is artificial, whether it is a story someone wrote
Already illegal in Australia: https://www.independent.co.uk/news/world/australasia/sydney-... (don't hold your breath on it making any "banned books" lists)
People laughed at Indians for believing photos stole the soul, and now we have legislated even stupider behavior, without the excuse of ignorance.
Datasets such as LAION-5B have been found to contain thousands of CSAM images. So real victims are involved, at least indirectly.