News
Late last year, researchers at Stanford University found that Stable Diffusion 1.5 was trained on a cache of illegal child sexual abuse material, as Forbes previously reported. “Unfortunately ...
Artificial intelligence researchers said Friday they have deleted more than 2,000 web links to suspected child sexual abuse ... AI image-makers such as Stable Diffusion and Midjourney.
Stanford researchers discovered that LAION-5B, used by Stable Diffusion, included thousands ... for AI image generation contained links to child abuse imagery, Stanford’s Internet Observatory found ...
US authorities have arrested a 42-year-old Wisconsin man for allegedly using AI image generator Stable Diffusion to create photorealistic child sexual abuse images. The Justice Department says ...
But child-safety experts said many appeared to have relied on open-source tools, such as Stable Diffusion, which can be run in an unrestricted and unpoliced way. Stability AI, which runs Stable ...
and images of child abuse. Discussing the changes in Stable Diffusion Version 2 in the software’s official Discord, Mostaque notes that this latter use case is the reason for filtering out NSFW content.
Stability is currently “deploying four million tablets to every child.” (Malawi’s government did not return requests for comment.) Less than two months after Stable Diffusion's public launch ...