How AI is being abused to create child sexual abuse imagery

A sobering new report from the Internet Watch Foundation (IWF) details its 2023 investigation into the first reports of child sexual abuse material (CSAM) generated by artificial intelligence (AI).

Initial investigations uncovered a world of text-to-image technology. In short, you type a description of what you want to see into an online generator, and the software produces a matching image.

The technology is fast and accurate: images usually match the text description very well. Many images can be generated at once, limited only by the speed of your computer. Users can then pick out their favourites, edit them, and direct the technology to output exactly what they want.

These AI images can be so convincing that they are indistinguishable from real photographs.

In total, 20,254 AI-generated images were found to have been posted to one dark web CSAM forum in a one-month period. Of these, 11,108 images, those judged most likely to be criminal, were selected for assessment by IWF analysts. From that set, 2,562 images were assessed as criminal pseudo-photographs and 416 as criminal prohibited images.

The full report is available on the IWF website.