With deepfake photos alarming the public, the usefulness of AI is overshadowed by its dangers, as in the case of the NUCA camera, which appears to remove the subject's clothing in an instant.
This op-ed comes from Sarah Williams, CEO of IMPEL, the international collective licensing agency representing digital music publishing rights.
Deepfakes threaten to upend elections. No one can stop them.
As one of the largest ‘training’ datasets has been found to contain child sexual abuse material, can bans on creating such imagery be feasible? Child abusers are creating AI-generated “deepfakes” of their targets in order to blackmail them into filming their own abuse, beginning a cycle of sextortion that can last for years. Creating simulated child abuse imagery is illegal in the UK, and Labour and the Conservatives have aligned on the desire to ban all explicit AI-generated images of real...
Researchers from National Taiwan University and the University of Maryland have published a new Journal of Marketing article that examines how marketers can use GenAI to provide empathetic customer care.
Google recently made headlines globally after its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly’s image-creation tool exhibited similar issues. This led some commentators to complain that AI had gone “woke,” while others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience. The discussions over AI’s political leanings and efforts to fight bias are important....
Pedophiles and bad actors are now using AI deepfake nudes of children to extort more explicit media from them.
The Visual Affective Skills Animator, or VASA, is a machine-learning framework that analyzes a facial photo and then animates it to a voice, syncing the lips and mouth movements to the audio. It also simulates facial expressions, head movements, and even unseen body movements.
A new AI model from Microsoft Research Asia, VASA-1, can create incredibly realistic deepfakes from only a single photograph and one voice sample. The model reportedly outperforms anything available today.
Sexually explicit deepfake pictures have proliferated online and pop up on mainstream sites uncensored. Victims say the fake images impact their education, jobs, and relationships. There is currently no federal legislation to protect deepfake victims.
Though it was trained on real people's images and audio clips, VASA-1 is allegedly meant for creating 'virtual characters.'
Microsoft recently announced two new features for Copilot. The first update is for Microsoft Forms, Microsoft’s web-based application for creating surveys and quizzes. Users can now use Copilot to create quizzes directly from existing documents, textbooks, or notes, which makes things much easier for teachers. The update is tracked as Feature ID 389149, […]