Child Sex Abuse Material Was Found In a Major AI Dataset

Cold Ethyl
Super Moderator

AI image training dataset found to include child sexual abuse imagery

Stanford researchers discovered LAION-5B, used by Stable Diffusion, included thousands of links to CSAM.


By Emilia David, a reporter who covers AI. Prior to joining The Verge, she covered the intersection between technology, finance, and the economy.
Dec 20, 2023, 10:57 AM EST

[Image: Logo of LAION, which created the LAION datasets. Photo Illustration by Rafael Henrique / SOPA Images / LightRocket via Getty Images]
A popular training dataset for AI image generation contained links to child abuse imagery, Stanford’s Internet Observatory found, potentially allowing AI models to create harmful content.

LAION-5B, a dataset used by Stable Diffusion creator Stability AI, included at least 1,679 illegal images scraped from social media posts and popular adult websites.

The researchers began combing through the LAION dataset in September 2023 to investigate how much, if any, child sexual abuse material (CSAM) was present. They looked through the images’ hashes, unique identifiers derived from each image, which were sent to CSAM detection platforms like PhotoDNA and verified by the Canadian Centre for Child Protection.
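For readers unfamiliar with hash matching, this is roughly the shape of that workflow. PhotoDNA itself is a proprietary perceptual-hash service, so the sketch below substitutes plain SHA-256 (which only catches byte-identical files) and a hypothetical known-hash set; it illustrates the general technique, not the researchers’ actual pipeline.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for a hash list supplied by a detection platform.
# Real systems like PhotoDNA use perceptual hashes that survive resizing
# and re-encoding; SHA-256 here only matches byte-identical files.
KNOWN_MATCH_HASHES: set[str] = set()

def file_sha256(path: Path) -> str:
    """Hash a file in chunks so large images never sit fully in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def flag_candidates(image_dir: Path) -> list[Path]:
    """Return files whose hashes appear in the known-match set.

    In practice only the hashes, never the images themselves, would be
    sent out for verification by a body such as the Canadian Centre for
    Child Protection.
    """
    return [
        p for p in sorted(image_dir.iterdir())
        if p.is_file() and file_sha256(p) in KNOWN_MATCH_HASHES
    ]
```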

The dataset does not keep repositories of the images themselves, according to the LAION website; it indexes the internet and contains links to images along with the alt text scraped alongside them. Google’s initial version of its Imagen text-to-image AI tool, released only for research, trained on a different LAION dataset, LAION-400M, an older and smaller predecessor of 5B. The company said subsequent iterations did not use LAION datasets. The Stanford report noted Imagen’s developers found 400M included “a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes.”
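To make the “links, not images” point concrete, here is a minimal sketch of what a LAION-style metadata record carries. The field names are illustrative assumptions, not LAION’s exact schema; the point is that the dataset distributes URLs and scraped captions, and training pipelines fetch the images separately.

```python
from dataclasses import dataclass

@dataclass
class LaionStyleRecord:
    # Illustrative fields, not LAION's exact schema: the dataset ships
    # metadata like this rather than hosting the images themselves.
    url: str       # link to an image hosted elsewhere on the web
    alt_text: str  # caption scraped alongside the image

def urls_to_fetch(records: list[LaionStyleRecord]) -> list[str]:
    """A training pipeline downloads images from these external URLs,
    which is why removing a link from the dataset does not remove the
    image from the web, or from models already trained on it."""
    return [r.url for r in records]
```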

LAION, the nonprofit that manages the dataset, told Bloomberg it has a “zero-tolerance” policy for harmful content and would temporarily remove the datasets from the internet. Stability AI told the publication that it has guidelines against misuse of its platforms. The company said that while it trained its models with LAION-5B, it focused on a portion of the dataset and fine-tuned it for safety.

Stanford’s researchers said the presence of CSAM does not necessarily influence the output of models trained on the dataset. Still, there’s always the possibility the model learned something from the images.

“The presence of repeated identical instances of CSAM is also problematic, particularly due to its reinforcement of images of specific victims,” the report said.

The researchers acknowledged it would be difficult to fully remove the problematic content, especially from the AI models trained on it. They recommended that models trained on LAION-5B, such as Stable Diffusion 1.5, “should be deprecated and distribution ceased where feasible.” Google has released a new version of Imagen but has not said publicly which dataset it trained on, other than that it did not use LAION.

US attorneys general have called on Congress to set up a committee to investigate the impact of AI on child exploitation and prohibit the creation of AI-generated CSAM.
Correction, December 20, 2:42 PM ET: Updated to clarify that Google’s first version of Imagen trained on LAION-400M and not LAION-5B, and to include more information on LAION-400M from the Stanford report.

 