UK and US authorities lag in taking down AI-created child sexual abuse material: report
In a six-month analysis published on 15 April 2026, Bloomberg found that explosive growth in AI-generated child sexual abuse material (CSAM) is crippling law enforcement worldwide, diverting resources from genuine child victims.
Reporters Alexandra S Levine, Riley Griffin, and Kurt Wagner detail a crisis in which synthetic imagery, now photorealistic and extreme, is overwhelming detection systems.
The analysis cites notable statistics and reports:
- In 2025, the US National Center for Missing and Exploited Children (NCMEC) processed more than 1.5 million CyberTipline reports tied to generative AI, including more than 7,000 instances of users holding AI-made CSAM and more than 30,000 generation attempts.
- Britain’s Internet Watch Foundation (IWF) has verified 8,029 AI images and 3,443 videos as realistic abuse depictions, a 26,385% increase from 13 videos in 2024, with 65% falling into Category A, the gravest severity level. This shift includes full-motion videos with fake audio, as well as AI chatbots on public sites that host abuse imagery and urge users to enact virtual assaults.
- Investigative strain is acute. A UK inspectorate report published on 15 April (the same day as the Bloomberg analysis) slammed police in England and Wales for inadequate responses to online CSAM: National Crime Agency referrals soared 66% to 20,704 in 2024, amid a 48% decade-long rise in registered offenders. Officers handle up to 54 cases simultaneously, with digital forensics backlogs stretching to two years. As Chief Inspector Michelle Skeer noted, forces are “falling short”, delaying child safeguards.
- US cases highlight the strain: NBC has identified 36 prosecutions across 22 states involving AI material, all ending in convictions — yet investigators lament the time lost investigating fabricated subjects.
- Legislative fixes have lagged behind the technology. Thirty-seven states have enacted AI-CSAM bans, more than half of them in 2024; an Ohio man’s April 2026 guilty plea, for deploying 100+ online models, marked the first federal win under the Take It Down Act.
- The UK’s Crime and Policing Bill introduces penalties for possessing creation tools. Victims’ Commissioner Claire Waxman cautions that delays cost lives and is pushing for unified action.
The Bloomberg probe warns that AI has emboldened sexual predators while eroding the efficacy of investigations. The report urges watermarking, AI filters, and international pacts to reclaim lost ground.