Google, Microsoft and xAI agree to share AI models with U.S. government before public release

May 9, 2026

Alphabet Inc., Microsoft and xAI have reached an agreement with the Trump administration to provide early versions of their artificial intelligence models to the U.S. government for testing before public release. The arrangement allows federal officials to evaluate advanced AI systems with some safeguards reduced or removed in order to assess potential national security and cybersecurity risks.

The evaluations will be led by the Commerce Department’s Center for AI Standards and Innovation, or CAISI, which has already conducted more than 40 assessments on AI systems, including models that had not yet been publicly launched. OpenAI and Anthropic entered into similar agreements with the Commerce Department in 2024.

“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” said Chris Fall in a statement announcing the expanded partnerships. “These expanded industry collaborations help us scale our work in the public interest at a critical moment.”

The agreements represent another step toward formalized government oversight of increasingly powerful AI systems as concerns grow over cyberattacks, misuse and autonomous capabilities.

According to details reported by The Wall Street Journal, the participating companies will share versions of their models with CAISI that have fewer restrictions or disabled safeguards so government evaluators can better understand how the systems behave under stress or malicious use scenarios.

That distinction matters because many public-facing AI systems already contain layers of moderation and safety controls designed to prevent harmful outputs. Testing models without some of those protections gives officials a clearer picture of the underlying capabilities and potential vulnerabilities of frontier AI systems.

The agreements arrive as the Trump administration considers a broader executive order focused on AI cybersecurity risks.

According to the Journal, White House officials are weighing the creation of a more formal government review structure that would establish standards for advanced AI systems before release. The proposed framework is intended to reduce the risk of cyberattacks, infrastructure disruptions and other harms linked to prematurely deployed AI models.

The move reflects a growing shift in how governments are approaching AI regulation. Earlier policy discussions largely focused on consumer harms, misinformation and transparency. Increasingly, however, national security agencies are treating advanced AI systems as strategic infrastructure with direct implications for cyberwarfare, intelligence and critical systems protection.

The involvement of companies like xAI also signals how rapidly newer AI firms are being integrated into federal oversight conversations alongside more established players such as Google and Microsoft.

For the companies involved, cooperating with government evaluations may also help shape future regulation before stricter mandatory rules emerge. Voluntary testing arrangements allow firms to participate directly in defining technical standards and risk thresholds that could later become formal policy requirements.

At the same time, the agreements raise questions about how much visibility governments should have into proprietary AI systems before public release.

The evaluations involve unreleased frontier models that companies typically guard closely because of competitive pressure and concerns around intellectual property. Sharing systems with federal agencies — especially with safety layers weakened or removed — introduces a level of cooperation between government and AI developers that would have seemed unusual only a few years ago.

The initiative also highlights how quickly AI governance is evolving into a cybersecurity issue rather than purely a technology policy debate.

Large language models are increasingly capable of generating code, automating vulnerability discovery and assisting with cyber operations. Governments worldwide are becoming more concerned about how those capabilities could be weaponized if released without proper safeguards.

CAISI’s role appears designed to create something closer to a testing and standards body for frontier AI systems. While the center does not currently have direct regulatory authority, its evaluations could become increasingly influential if the administration formalizes a broader oversight framework through executive action.

Jim Love

Jim is an author and podcast host with over 40 years in technology.
