Child sexual abuse material, known as CSAM, refers to visual depictions of sexual violence against children. But what if the child involved is not real? Would that make it all right?
With the rise of artificial intelligence, discussions surrounding AI-generated CSAM are spreading in the media and amongst decision-makers globally. New laws are being introduced, trying to keep up with the lightning-fast development of technology and the constantly evolving nature of offending against children.
From a child rights perspective, it is clear that no kind of child abuse should be normalized, accepted, or legalized. This, however, is not how things currently stand. AI-generated CSAM is not systematically criminalized within the European Union or in the United States.
In June, the European Parliament voted in favor of updated draft legislation that would strengthen children’s rights in light of developing technologies. The proposed amendments to the existing Directive 2011/93/EU would specifically criminalize AI-generated CSAM in all EU member states.
Earlier this year, a candidate in the Finnish municipal elections argued for decriminalizing CSAM that does not depict a real child, claiming that there is no real basis for criminalizing the material and that it might even reduce offending against children. In short, he suggested that animated and otherwise “not real” depictions of sexual acts against a child should be legalized.
Even if we accept this distinction, as the technology currently stands, AI-generated material is based on real images of abuse. Therefore, whilst computer-generated material would not necessarily depict an actual event or a real child, it would be built on previously captured abuse material. There is no way to ensure that an image created with this technology has no basis in real events.
Research suggests that viewing CSAM may lead to in-person violence against children. Because the technology can produce photorealistic images, it is often impossible to determine whether an image is real or computer-generated. Even if viewing AI-generated CSAM did not directly harm a real child, it may very well lead to hands-on offenses against children.
We must also consider the profound moral and ethical consequences of normalizing sexual violence against children through debates about legalizing CSAM that does not depict a real child. Merely entertaining the idea that AI-generated or otherwise “not real” depictions of child sexual abuse could be justified or decriminalized undermines children’s right to be protected from all forms of sexual violence.
The normalization of computer-generated CSAM, even if it is not based on a real victim, reinforces dangerous narratives around sexual violence against children, desensitizes society to its harms, and emboldens those who seek to exploit children.
Legitimizing AI-generated CSAM would inevitably strengthen the market for such content, sustaining and encouraging demand for material that, at its core, accepts the exploitation and sexualization of children.
When a society begins to accept the artificial depiction of child sexual abuse as permissible, it creates a slippery slope where the boundaries of moral decency are eroded.
We need stronger, uniform legislation criminalizing all forms of CSAM.
We must focus on systematically preventing sexual violence against children, including by treating offenders and people who fear they might offend against a child. Sexual violence against children is a public health epidemic, one which will not be solved by offering perpetrators the very thing they are looking for: child sexual abuse material, regardless of its format.