Flutterby™! : CSAM in Stable Diffusion


CSAM in Stable Diffusion

2023-12-20 18:21:08.063349+01 by Dan Lyke 0 comments

404 Media: Largest Dataset Powering AI Images Removed After Discovery of Child Sexual Abuse Material

The model is a massive part of the AI ecosystem, used by Google and Stable Diffusion. The removal follows discoveries made by Stanford researchers, who found thousands of instances of suspected child sexual abuse material in the dataset.

Washington Post: Exploitive, illegal photos of children found in the data that trains some AI

In a report released by Stanford University’s Internet Observatory, researchers said they found at least 1,008 images of child exploitation in a popular open source database of images, called LAION-5B, that AI image-generating models such as Stable Diffusion rely on to create hyper-realistic photos.

Stanford Internet Observatory Cyber Policy Center: Identifying and Eliminating CSAM in Generative ML Training Data and Models (PDF)

[ related topics: Free Software Children and growing up Photography Erotic Sexual Culture Astronomy Journalism and Media Net Culture Machinery Trains Skating Education Artificial Intelligence Race Databases ]
