
Protecting Our Future: The Critical Fight Against Child Exploitation in AI
Table of Contents
- Introduction
- Defining Child Exploitation in AI
- Recent Cases and Studies
- Ethical Standards vs. Technological Advancement
- Potential Futures
- Agents of Change
- Conclusion
- References
Content Warning: This article discusses child exploitation in AI and other sensitive subjects related to it. It includes real cases that have already occurred and considers possible future developments, with the aim of informing, guiding, and explaining where AI imagery is headed. The discussion is stark; if the subject matter is upsetting, this paragraph offers a fair chance to decide whether to continue reading.

Introduction
Human ingenuity has always outpaced human discretion. Gun safety was invented only after the gun, and that pattern has held throughout history. Some inventions come with no safety guide at all, such as the nuclear bomb; the only safety guide for that creation is not to use it.
The internet itself is also a danger. Only decades after its creation did safety practices emerge, such as requirements that websites announce when they use cookies and state explicitly when they collect a user's data.
The modern Medusa is the AI image generation system and the potential to use it for the exploitation and victimization of children. The danger is not limited to children, of course, but the severity of the situation is exacerbated in this context.
Defining Child Exploitation in AI
In AI, child exploitation takes various forms, and that is part of the problem. Consider a situation where an individual who is of legal age and capacity is depicted through AI, with their consent, engaging in sexual activities. Now consider that this individual has strikingly childlike features.
At what point does it become child exploitation? Legally speaking, an adult with juvenile characteristics producing sexual content is not crossing any lines. Ethically, however, the issues are countless, and AI complicates them further, since no individual was actually harmed in the making of the content. In such a situation, it is remarkably hard to define exactly what constitutes child exploitation in AI.
Established research into the consumption of pornographic content points to one conclusion: the depiction itself is the decisive factor. For the purposes of child exploitation in AI, then, if it looks like a child, it is treated as a child, without exception.

Recent Cases and Studies
There have been several cases in recent years where the rapid arrival of AI in mainstream media outpaced any safety guidelines. Individuals have begun creating AI-generated adult content of their own peers, some as young as middle schoolers. Using AI-based image splicing, they were able to realistically place the faces of their victims onto the bodies of adult film performers with specifically malicious intent.
Kaylin Hayman, a Disney actress who made her debut at the age of 10 and stayed on until the age of 14, was one such victim.
“I felt violated and disgusted to think about the fact that grown men had seen me in such a horrendous manner,” Hayman said. “While speaking about this topic is daunting, I know deep down I need to share my voice. I need to bring awareness and justice to those in my position.”
This is a high-profile actress, and even she struggled for justice. Those without that kind of influence cannot stand up for themselves in the same way. In March, five eighth-grade Beverly Hills students were expelled for generating and distributing AI-generated nudes of a classmate. To be clear, the students were merely expelled, at the discretion of school authorities; the lack of guidelines and precedent hindered any proper legal action from being taken.
As AI becomes easier to use and navigate, such issues will become far more prominent. It is everyone's personal duty to get ahead of the problem.
Ethical Standards vs. Technological Advancement
The evolution of AI necessitates the evolution of ethical AI standards. Certain problems are complex enough that they must be solved before they become prominent, and AI ethics is one of them. The easiest answer is to ban AI development outright and harshly limit the growth of the technology, but that is neither helpful nor progressive.
AI has become such an integral part of daily life that banning it outright is impossible, and even regulation alone would not work well. Throwing bureaucratic red tape around development without addressing technology ethics will not work either. All it will do is push companies to cut corners and cheat their way ahead, and the technology will simply be hidden from the public eye to avoid scrutiny.
There is no easy or straightforward answer here either, no broad or sweeping advice applicable to several key issues at once. At least, not immediately. California has already taken some steps, choosing to treat AI-generated child pornography the same way it handles non-AI-generated images.
Distribution and possession are crimes under the bill, which passed the Assembly Public Safety Committee and is now headed to the Privacy and Consumer Protection Committee for approval or denial. Other states are expected to follow.
Potential Futures
It is difficult to predict how AI will change things, but social justice requires us to have these discussions. In cases where actual children are targeted and exploited, the matter is cut and dried. In instances where the depiction merely appears to be that of a child, it becomes far murkier.
Conversations need to happen about whether this supposedly victimless crime, depicting a fictional child in a horrible situation, should be acceptable. Many believe that a victimless crime should not be a crime at all: if no one was harmed in the process, what exactly is the punishment for? In recent history, everything from feminism to intersectionality to gay rights has advanced on that same sentiment. There is no reason to prevent two men from marrying each other, since no one is harmed in the process, but it is deeply uncomfortable to see that logic applied to the situation described above.
Our best course of action requires us to stay well informed on the topic.
Agents of Change
One thing we can do is educate and ask questions. Parents can teach their children, teachers can guide their students, and the public can put pressure on companies. Even in the case of "victimless" AI-generated content, individuals who pose a threat to society may have their unacceptable desires normalized or facilitated. The more normalized such content becomes, the harder it gets for victims to appeal for justice.
The question of pedophilia is a necessary part of these conversations, but in the context of "victimless" AI art it is extremely uncomfortable. If we want to bring about change, we cannot avoid that discomfort. It will not go away if we refuse to look at it; it needs to be addressed, like everything else. Individuals have the personal authority and the power to be the agents of change required to make AI use better.
Conclusion
The issues surrounding AI-generated art depicting minors in exploitative ways are undoubtedly complex, but the ethical implications are clear-cut. In the absence of comprehensive state laws prohibiting such exploitation through AI, cases like Hayman's have been swiftly escalated to federal authorities, but that is not how most cases can be expected to go. Catalyzing meaningful change begins with raising awareness and fostering dialogue within our communities. Through collective effort, it is possible to advance these technologies while safeguarding people's wellbeing and dignity.
References
Singh, S., & Nambiar, V. (2024). Role of Artificial Intelligence in the Prevention of Online Child Sexual Abuse: A Systematic Review of Literature. Journal of Applied Security Research, 1–42. https://doi.org/10.1080/19361610.2024.2331885
Singer, N. (2024, April 8). Teen Girls Confront an Epidemic of Deepfake Nudes in Schools. The New York Times. https://www.nytimes.com/2024/04/08/technology/deepfake-ai-nudes-westfield-high-school.html
Minevich, M. (2023, December 26). Revolutionizing Child Protection: The UN And UAE's Groundbreaking AI For Safer Children Collaboration. Forbes. https://www.forbes.com/sites/markminevich/2023/12/26/revolutionizing-child-protection-the-un-and-uaes-groundbreaking-ai-for-safer-children-collaboration/?sh=798782a11cdf
