Leading American artificial intelligence companies, including Microsoft, Google, and OpenAI, are set to voluntarily commit to specific safety principles for their technology this Friday. The move follows a push from the White House, and the commitment is expected to hold until Congress passes legislation regulating AI, Bloomberg reports.
The Biden administration has emphasized the need for AI companies to develop their technology responsibly. Government officials want tech firms to innovate with generative AI in ways that benefit society without compromising public safety, rights, or democratic values.
Vice President Kamala Harris, in a May meeting with the CEOs of OpenAI, Microsoft, Alphabet, and Anthropic, stressed these companies' obligation to ensure the safety and security of their AI products. President Joe Biden also met with industry leaders last month to discuss AI.
According to a draft document reviewed by Bloomberg, the tech companies are set to agree to eight recommended safety, security, and social responsibility measures:
- Allowing independent experts to test models for adverse behavior
- Investing in cybersecurity
- Encouraging third parties to discover security vulnerabilities
- Flagging societal risks such as bias and inappropriate uses
- Prioritizing research into the societal risks posed by AI
- Sharing trust and safety information with other companies and the government
- Watermarking audio and visual content to identify when it is AI-generated
- Using the most advanced AI systems, known as frontier models, to tackle society's most pressing problems
The voluntary nature of the agreement underscores how difficult it has been for lawmakers to keep pace with AI's rapid advancement. Several bills proposing AI regulation have been introduced in Congress: one would prevent companies from using Section 230 protections to avoid liability for harmful AI-generated content, while another would require political ads to include disclosures when generative AI is used. Administrators in the House of Representatives have also reportedly restricted the use of generative AI within congressional offices.
Frequently Asked Questions (FAQs) about AI safety measures
What AI companies are committing to safety measures?
Microsoft, Google, and OpenAI are among the major AI companies that are reportedly committing to specific safety measures for their technology.
What prompted these companies to commit to safety measures?
The commitment comes in response to a push from the White House, as part of the Biden administration’s focus on ensuring that AI technology is developed responsibly.
What are some of the safety measures the companies are expected to implement?
The companies are set to agree to eight measures: allowing independent experts to test models for adverse behavior, investing in cybersecurity, encouraging third parties to discover security vulnerabilities, flagging societal risks, prioritizing research into the societal risks posed by AI, sharing trust and safety information with the government and other companies, watermarking AI-generated content, and using advanced AI systems to tackle societal problems.
What happens to the commitment when Congress legislates AI regulation?
The commitment is voluntary and will expire when Congress passes legislation to regulate AI.
What legislative actions are being considered regarding AI?
Several bills aimed at regulating AI have been introduced in Congress. These include proposals to prevent companies from using Section 230 protections to avoid liability for harmful AI-generated content and to require political ads to include disclosures when generative AI is used.