Microsoft, Google, and xAI said they will give the U.S. government access to new artificial intelligence models for national security testing.
The Center for AI Standards and Innovation at the Commerce Department announced the agreement Tuesday, days after the Pentagon said it had reached a separate deal with seven tech companies to use AI in classified systems.
Under the agreement, the government will evaluate the companies’ models before they are deployed and conduct research on their capabilities and security risks. The arrangement fulfills a July pledge by President Donald Trump’s administration to work with technology companies to vet AI models for “national security risks.”
Microsoft said it would work with government scientists to test AI systems “in ways that probe unexpected behaviors.” The company said the two sides will develop shared data sets and workflows for testing its models. Microsoft also said it had signed a similar agreement with the United Kingdom’s AI Security Institute.
“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” Center Director Chris Fall said in a statement.
The move builds on 2024 agreements with OpenAI and Anthropic reached under President Joe Biden, when the center was known as the U.S. Artificial Intelligence Safety Institute and focused on AI testing, definitions, and voluntary safety standards. It was led at the time by Biden tech adviser Elizabeth Kelly, who has since joined Anthropic, according to her LinkedIn profile.
The Pentagon’s separate agreement covered Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection, and SpaceX.