• Agents of manipulation (the real AI risk)

    Artificial agents will make our lives better. At the same time, these superpowers could easily be deployed as agents of manipulation.

  • EU demands clarity from Microsoft on AI risks in Bing

    The European Commission could fine Microsoft if it doesn't provide adequate information on risks stemming from generative AI features in search engine Bing by May 27. The Commission said on Friday that it is worried about the dissemination of deepfakes and the automated manipulation of services that can mislead voters. It said it was stepping up enforcement actions on the matter, as it had not received a reply to a request for information sent on March 14. If the deadline is not met, the Commission...

  • EU Demands AI Risk Data From Microsoft, Threatens Fine

    This is the second request made by the commission to Microsoft, following an initial notice sent on March 14.

  • UK opens office in San Francisco to tackle AI risk

    Ahead of the AI safety summit kicking off in Seoul, South Korea later this week, its co-host the United Kingdom is expanding its own efforts in the field. The AI Safety Institute – a U.K. body set up in November 2023 with the ambitious goal of assessing and addressing risks in AI platforms – said […]

  • What happened to OpenAI’s long-term AI risk team?

    Former team members have either resigned or been absorbed into other research groups.

  • No expert consensus on AI risks, trajectory ‘remarkably uncertain’: report (CP24)

    A major international report on the safety of artificial intelligence says experts can’t agree on the risk the technology poses — and it’s unclear whether AI will help or harm us. The report, chaired by Canada’s Yoshua Bengio, concludes the "future trajectory of general-purpose AI is remarkably uncertain." It says a "wide range of trajectories" are possible "even in the near future, including both very positive and very negative outcomes." The report was commissioned at last year’s AI...

  • US, China meeting this week to talk AI safety, risks

    The Tuesday discussions in Geneva will cover “areas of concern” and “views on the technical risks,” per an administration official.

  • Explainer: What risks do advanced AI models pose in the wrong hands?

    The Biden administration is poised to open up a new front in its effort to safeguard U.S. AI from China and Russia with preliminary plans to place guardrails around the most advanced AI models, Reuters reported on Wednesday. Government and private sector researchers worry U.S. adversaries could use the models, which mine vast amounts of text and images to summarize information and generate content, to wage aggressive cyber attacks or even create potent biological weapons. Here are some threats...

  • Study: AI chatbots that simulate the dead risk haunting the bereaved

    Artificial intelligence (AI) chatbots that simulate the language and personalities of dead people risk distressing loved ones left behind through “unwanted digital hauntings,” a researcher has warned. A Cambridge University study suggested that the AI chatbots – known as deadbots – need design safety protocols to prevent causing psychological harm. Some companies are already offering services that allow a chatbot to simulate language patterns and personality traits of a dead person using the...

  • US, China will meet in Geneva this week to discuss ‘AI Risk’

    Mid-level officials from the NSC and State Department will lead the talks, which follow the Xi-Biden summit last November. No public joint statement is expected, let alone a formal agreement.

  • Forget AI: Physical threats are biggest risk facing the 2024 election

    November’s vote has been called the “AI election,” but officials are most worried about the physical safety of election workers and infrastructure.

  • US Legislators Introduce AI Export Control Bill to Prevent Risks to Critical Infrastructure

    The bill proposes amendments to the Export Control Reform Act of 2018, aiming to prevent exploitation of US AI models by foreign adversaries.