OpenAI CEO Altman says Pentagon deal was rushed

March 3, 2026 OpenAI CEO Sam Altman admitted on Monday that the company “shouldn’t have rushed” its new agreement with the U.S. Department of Defense and outlined revisions to clarify limits on how its AI systems may be used. The changes follow public backlash and a failed negotiation between the Pentagon and rival Anthropic over safeguards for military use of advanced AI models.

In a repost of what he described as an internal memo, Altman said OpenAI would amend the contract to include explicit language stating that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.” As highlighted in the memo, “the Department understands the limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

Altman also said the Defense Department affirmed that OpenAI’s tools would not be used by intelligence agencies such as the NSA. “There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety,” he wrote, adding that OpenAI would work with the Pentagon on technical safeguards. He acknowledged that the announcement timing “looked opportunistic and sloppy.”

The agreement was disclosed hours after negotiations between Anthropic and the Pentagon broke down. Anthropic CEO Dario Amodei had said he “cannot in good conscience accede to the Pentagon’s request” for unrestricted access to the company’s systems. “In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei wrote.

U.S. Defense Secretary Pete Hegseth said Friday that Anthropic would be designated a supply-chain threat after talks collapsed. Anthropic had previously deployed models on the Defense Department’s classified network but sought guarantees that its tools would not be used for domestic surveillance or to develop or operate autonomous weapons without human control.

OpenAI’s announcement triggered an online boycott campaign branded “QuitGPT,” which claims more than 1.5 million people have taken actions such as cancelling subscriptions or sharing protest messages. The campaign accuses OpenAI of agreeing to let its models be used for “any lawful purpose,” including mass surveillance and autonomous weapons, and urges users to switch to alternatives including Claude and Google’s Gemini.

Altman said over the weekend that Anthropic “should not be designated as a [supply chain risk]” and that he hopes the Defense Department offers it similar terms.

Jim Love

Jim is an author and podcast host with over 40 years in technology.
