Major AI Developers Commit to Shared Safety and Transparency Goals in Non-Binding Agreement with Biden Administration
In a bid to promote responsible and transparent development of artificial intelligence (AI), the Biden administration has secured "voluntary commitments" from seven leading AI developers. The companies participating in this non-binding agreement are OpenAI, Anthropic, Google, Inflection, Microsoft, Meta, and Amazon. The voluntary commitments are aimed at addressing safety concerns and ensuring transparency within the AI industry. Although no legal enforcement is involved, the companies' adherence to these principles is expected to be a matter of public record.
The White House convened a meeting with representatives from the seven companies to discuss their commitment to the following shared goals:
Pre-Release Security Testing. AI systems will undergo rigorous internal and external security tests, including "red teaming" by external experts, to identify vulnerabilities before deployment.
Information Sharing. The companies will collaborate with government agencies, academia, and civil society to share information on AI risks and mitigation techniques, such as preventing unauthorized access to AI systems.
Investment in Cybersecurity. Measures will be taken to protect proprietary model assets, such as unreleased model weights, intellectual property, and other sensitive information, from insider threats and cyberattacks.
Third-Party Discovery and Reporting. The companies will facilitate third-party discovery and reporting of vulnerabilities through bug bounty programs or domain expert analysis.
Robust Watermarking. The development of robust watermarking or similar techniques for identifying AI-generated content will be explored, helping users recognize synthetic media and curbing the spread of misinformation.
Transparency of AI Systems. The companies commit to providing information on the capabilities, limitations, and appropriate and inappropriate uses of their AI systems.
Research on Societal Risks. Research on mitigating societal risks, such as systemic bias and privacy violations, will be prioritized.
AI for Social Impact. Development and deployment of AI to address significant societal challenges, such as cancer prevention and climate change, will be encouraged.
While the commitments are voluntary, the White House hinted at the possibility of introducing an Executive Order to encourage compliance. The order could potentially direct the Federal Trade Commission (FTC) to scrutinize AI products claiming robust security and hold companies accountable for failing to allow external security testing.
The White House's proactive approach to AI reflects its determination not to be caught off-guard, as it was by social media's disruptive effects in the past. President Biden and Vice President Harris have actively engaged with industry leaders to devise a national AI strategy, and the administration has allocated significant funding to AI research centers and programs, building on work already under way across the national science and research community.
While substantive AI legislation might still be years away, this voluntary commitment from major AI developers represents a step towards fostering responsible AI development and ensuring transparency in an industry that is evolving at breakneck speed.