“Deepfake Abuse is Abuse”: UNICEF Demands Global Crackdown on AI Exploitation

Tech News, United Nations, 6 February 2026: In a landmark declaration issued this week, UNICEF has officially classified AI-generated sexualized content of minors as Child Sexual Abuse Material (CSAM), warning that “there is nothing fake about the harm it causes.” Citing a large-scale study across 11 countries, the agency revealed that over 1.2 million children reported their likenesses being manipulated into sexually explicit deepfakes in the past year alone.
The Urgent Recommendations
UNICEF’s “Call to Action” outlines a three-pronged strategy targeting lawmakers, tech giants, and developers to close the gap between rapid AI advancement and child safety.
UNICEF is calling on all governments to immediately update criminal codes to ensure AI tools cannot be used as a shield for exploitation. Specifically, governments should:
- Expand the legal definition of CSAM to explicitly include AI-generated or AI-manipulated content, regardless of whether a “real” child was involved.
- Criminalize the creation, procurement, and possession of such material, not just its distribution.
- Impose strict criminal and civil penalties on individuals and “legal persons” (companies) that fail to protect children from these harms.
The agency criticized the current “reactive” model of content moderation, where images are often removed only after a victim reports them. Digital companies must invest in high-fidelity detection technologies to prevent the upload of AI-generated CSAM before it can circulate. In cases where material bypasses filters, platforms must ensure removal within minutes, not days, to prevent viral spread. Extra scrutiny is demanded for generative AI tools (like “nudification” or face-swap bots) embedded directly into social media platforms.
UNICEF is demanding that safety protocols be baked into the foundational code of AI models before they reach the public. Developers must implement robust guardrails that prevent AI models from generating explicit content of minors, even when prompted with creative or “jailbroken” text. For open-source models, developers are urged to conduct rigorous “Red Team” testing to identify and patch vulnerabilities that could be exploited by malicious actors.
The Human Toll: New Data Highlights
The report, titled Deepfake Abuse is Abuse, underscores a growing atmosphere of fear among the youth:
- The Classroom Reality: In some surveyed regions, 1 in 25 children (roughly one child per classroom) has already been a victim of deepfake manipulation.
- Up to two-thirds of children (66%) in study countries reported actively worrying that AI could be used to create fake sexual images of them.
- UNICEF warns that purely synthetic CSAM fuels the demand for abusive content and creates “significant challenges” for law enforcement trying to find real victims in a sea of AI-generated decoys.
“The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up.” — UNICEF Statement, February 2026
