News
Late last year, researchers at Stanford University found that Stable Diffusion 1.5 was trained on a cache of illegal child sexual abuse material, as Forbes previously reported. “Unfortunately ...
Artificial intelligence researchers said Friday they have deleted more than 2,000 web links to suspected child sexual abuse ... AI image-makers such as Stable Diffusion and Midjourney.
Stanford researchers discovered that LAION-5B, used by Stable Diffusion, included thousands ... for AI image generation contained links to child abuse imagery, Stanford’s Internet Observatory found ...
US authorities have arrested a 42-year-old Wisconsin man for allegedly using AI image generator Stable Diffusion to create photorealistic child sexual abuse images. The Justice Department says ...
A man was arrested in the U.S. after he allegedly used the AI text-to-image model Stable Diffusion to create media ... or distribute AI-generated child sexual abuse material,” said Principal ...
and images of child abuse. Discussing the changes in Stable Diffusion Version 2 on the software’s official Discord, Mostaque notes this latter use case is the reason for filtering out NSFW content.
Key members of the artificial intelligence research team that developed Stable Diffusion, a text-to-image generation model that helped catalyze the AI boom, have resigned from British AI unicorn ...