www.cbsnews.com
The U.K.-based Internet Watch Foundation detected a 26,362% spike in AI-generated child sexual abuse material (CSAM) last year, identifying 3,440 videos, up from just 13 cases the previous year. The surge is largely driven by advances in "deepfake" technology, which allows bad actors to fabricate explicit content from real images of children. Law enforcement and tech companies are struggling to keep pace, as the sheer volume of AI-generated content makes it harder to track offenders and remove illegal material. The explosion in synthetic CSAM is prompting calls for stricter legal frameworks and improved detection tools.
