Top AI companies commit to child safety principles as industry grapples with deepfake scandals


by NBC News

NBC News— After a series of highly publicized scandals related to deepfakes and child sexual abuse material (CSAM) plagued the artificial intelligence industry, top AI companies have come together and pledged to combat the spread of AI-generated CSAM. Thorn, a nonprofit that creates technology to fight child sexual abuse, announced Tuesday that Meta, Google, Microsoft, CivitAI, Stability AI, Amazon, OpenAI and several other companies have signed onto new standards created by the group in an attempt...

Tech Times—AI Companies Join Forces to Ensure Child Safety Principles in Technologies. AI companies have pledged to take action against AI-generated child sexual abuse material and deepfake scandals.

Tech Times—Top AI Companies Unite to Combat Child Sexual Abuse Content Online. The effort is led by renowned child safety organization Thorn and the nonprofit All Tech Is Human. This collaborative endeavor emphasizes the crucial role of technology in fostering a secure online landscape for young users.

Neowin—Microsoft and Google announce new child safety commitments for generative AI services. Microsoft and Google, in collaboration with Thorn and All Tech Is Human, have announced new child safety commitments for the generative AI features in their software products.