Norwegian police, led by Helge Haugland at Kripos, have received numerous tips via NCMEC concerning AI-generated child abuse imagery. As AI advances, distinguishing genuine content from fabricated material becomes progressively harder, raising concerns that investigative capacity will be strained.
Haugland notes that while identifying AI-produced content was once straightforward, its quality now makes it increasingly difficult to distinguish authentic photos and videos from those generated with AI. This is part of a broader trend highlighted by researchers at Thorn and the Stanford Internet Observatory, whose work shows a substantial increase in computer-generated abuse material since August 2022, even though it still accounts for only a small percentage of total online content.
Kripos emphasizes that creating, sharing, and storing AI-generated images that sexually exploit children violates Norwegian law. Consequently, Haugland worries that the growing prevalence of AI-manipulated content could lead investigators to pursue fictitious victims and perpetrators, stretching already limited resources.
There is also concern that genuine abuse material involving real individuals could be altered to resemble AI-generated content.
Negotiations on a proposed EU bill intended to safeguard children stalled last year over privacy concerns, among other issues, and existing regulations were extended until April 2026.