Can AI image generators be policed to prevent explicit deepfakes of children?


by The Guardian

The Guardian—As one of the largest "training" datasets has been found to contain child sexual abuse material, are bans on creating such imagery feasible? Child abusers are creating AI-generated "deepfakes" of their targets in order to blackmail them into filming their own abuse, beginning a cycle of sextortion that can last for years. Creating simulated child abuse imagery is illegal in the UK, and Labour and the Conservatives have aligned on the desire to ban all explicit AI-generated images of real...

Tech Times—This AI Camera Can Create Deepfake Images of People By Removing Their Clothing: How Scary is NUCA? With deepfake photos already alarming people, the usefulness of AI is overshadowed by its dangers, as in the case of the NUCA camera, which removes the subject's clothing in an instant.

Boing Boing—Facebook pushing AI-generated images of starving, drowning, bruised and mutilated children into users' feeds. At 404 Media, Jason Koebler reports that AI images of suffering children, some depicting them mutilated or dying, are appearing in Facebook users' feeds. It's the latest and most alarming example of AI-generated garbage filling the platform, complete with inane replies from bots, from "boomers", and from unambiguous humans helping spread the posts by angrily calling them out, a classic Facebook engagement success model.

9to5Mac—Apple pulls AI image apps from the App Store after learning they could generate nude images. Apple is cracking down on a category of AI image generation apps that "advertised the ability to create nonconsensual nude images." According to a new report from 404 Media, Apple has removed multiple such AI apps from the App Store.