A new study from the United Kingdom found that artificial intelligence is enabling a surge in child pornography.
Internet Watch Foundation released a study on Jan. 16 warning that 2025 "was the worst year on record for online child sexual abuse material."
That finding was driven by a 26,362 percent increase the group tracked in photo-realistic AI videos depicting children being sexually abused.
Those videos often involved “real and recognisable child victims.”
In 2024, the group found only 13 such videos. In 2025, it found 3,440.
An increasing number of AI tools allow users to create custom pictures and videos. As those tools proliferate, so does their abuse.
“This material can now be made at scale by criminals with minimal technical knowledge, and can have harmful effects on children whose likenesses are coopted into the imagery, as well as further normalising sexual violence against children and undermining efforts to create an internet free of child sexual abuse and exploitation,” Internet Watch Foundation said.
“Analysts believe offenders are using the technology in greater numbers as the sophistication of AI video tools improves.”
The organization broke the abusive material down into Category A and Category B, the two most severe classifications of such imagery under British law.
Some 65 percent of the videos were Category A, which includes "penetration, sexual torture, and even bestiality."
Another 30 percent were classified as Category B.
“Governments around the world must ensure AI companies embed safety by design principles from the very beginning,” Internet Watch Foundation Chief Executive Kerry Smith said.
“It is unacceptable that technology is released which allows criminals to create this content.”
According to a report from CBS News, Internet Watch Foundation responded to more than 300,000 reports of child sexual abuse material in 2025.
Such material is banned under federal law in the United States.
Various AI platforms made by American companies — including Grok, the AI tool used by Elon Musk’s social media platform X — allow the creation of materials depicting the sexualization of women and minors.
X recently said it had updated Grok to stop "allowing the editing of images of real people in revealing clothing such as bikinis."
“Additionally, we will geoblock in jurisdictions where such content is illegal, the ability of all users in those locations to generate images of real people in bikinis, underwear, and similar attire in Grok on X, and xAI is implementing similar geoblocking measures for the Grok app,” the company said.