Are tech giants doing enough – if anything – to stop AI-generated child abuse?



This article references child sexual abuse.

I know we all feel the same rage this week. It’s simmering into a boiling fury in me as I watch another tech giant embroiled in a scandal, exploiting and humiliating women on an international scale. If you haven’t heard already, Elon Musk’s AI creation, Grok, has been under fire for generating sexualised AI images of women on X – at the behest of other users. But there’s another layer to this that must be addressed: Grok’s deepfake imagery is emblematic of the future of childhood sexual abuse.

Today (15 Jan), X announced measures to prevent the Grok account from generating images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers. This comes after the image-generation feature was initially restricted only to paid users.

X’s delay in making these vital changes has shown a lack of concern for women’s safety and consent. And we can’t ignore the threat this technology poses to children. Numerous cases of images of underage girls being edited to show them in bikinis have already been reported; however, watchdogs like eSafety in Australia say that such images do not meet the threshold for child sexual abuse material (CSAM). (Sidenote: We do not use the term “child pornography”, and neither should anyone else, because it implies that the child has consented to appear in such material, which no child is capable of doing. Correct terminology is essential.)

In England and Wales, CSAM and online abuse increased by a staggering 26% last year, with 51,672 crimes recorded. Over 100 such crimes are reported to police every day, yet figures like Musk appear to have no issue pressing ahead with technology that endangers children around the world. And it’s not just images that are the problem. Research by the Internet Watch Foundation (IWF) revealed that reports of AI-generated CSAM have risen by 400%, with 210 webpages found to contain such content in the first six months of 2025. It’s likely that AI-generated full-length videos are not far behind. The number of new images and videos rises every day, fuelled by the normalisation of apps like Grok violating the consent of adults and children alike.

I know, even in this, that there will still be cries of dissent: those who allege that surely digital versions of these crimes are less damaging than perpetrators physically harming children. If only eradicating childhood sexual abuse were that easy. The same excuses were touted when a Japanese sex robot company was revealed to be making and delivering child sex dolls. They alleged that these dolls could prevent potential abusers from harming real children; however, research shows that possessing such dolls has no provable benefit in reducing urges. In fact, acting out such urges on a doll could make people more likely to carry them out on real children, because the doll becomes less effective over time. Return your eyes to out-of-control AI imagery, and you can see where the pattern will lead us: to more abuse of children.

And that’s already a threat we’re seeing play out. As more CSAM has become available, more crimes against children have been committed in person. Of the 122,678 child sexual abuse and exploitation offences recorded by the police in England and Wales in 2024, 58% were in-person offences. When irresponsible tech bros allow their technology to participate in the digital abuse of women and girls, it’s inevitable that this abuse will translate to physical harm in the real world. As much as these creeps want to live in an augmented reality where women and girls have no autonomy, we still live in a real world with living, breathing people who can be, and are being, harmed by these vile “technological advancements”.

Childhood sexual abuse is already a worldwide epidemic that governments are ill-equipped to confront. The normalisation of AI apps that empower lonely, let’s be honest, mostly men, to violate people from the comfort of their home will inevitably lead to the expansion of CSAM, an abhorrent thought that should terrify tech bros back into their caves for good. It won’t, because their only goal is profit at the expense of society. But it’s up to all of us to challenge such apps, prevent their use, and educate people about the dangers they pose to us all. While it’s gratifying to see Ofcom and the government take action, it won’t work unless people collectively reject this technology.

We’re all facing down the future of sexual violence in app form. Technology invented to streamline human existence, while robots take care of the boring stuff, has been redirected to become the new frontier of exploitation. Just look at the downhill trajectory of the founder of OpenAI, the company behind ChatGPT. The man who claims that his technology will one day help cure cancer is now excitedly announcing that ChatGPT will soon have ‘erotica’ functions for adults. How far we’ve come, yay us.
