March 4, 2026

OpenAI has amended its agreement with the U.S. Department of Defense after CEO Sam Altman acknowledged the original rollout was “opportunistic and sloppy,” adding explicit prohibitions against domestic surveillance of U.S. persons. The revisions come amid public backlash, and rising ChatGPT uninstalls, over how the company’s models are deployed in classified military operations.
On Monday, Altman said OpenAI would update the contract to clarify that its systems shall not be “intentionally used for domestic surveillance of U.S. persons and nationals.” He added that intelligence agencies such as the National Security Agency would require a “follow-on modification” to use OpenAI’s models. “The issues are super complex, and demand clear communication,” Altman wrote, conceding the company “shouldn’t have rushed to get this out on Friday.”
The original agreement was announced shortly after talks between the Pentagon and rival Anthropic collapsed. Anthropic had sought assurances that its Claude model would not be used for fully autonomous weapons or mass surveillance of Americans. Following its refusal to drop those red lines, the U.S. government directed federal agencies to cease using Anthropic’s technology and labelled it a supply-chain risk.
Altman told OpenAI employees that the company does not “get to make operational decisions” about how its models are used. “You don’t get to weigh in on that,” he said, referring to military actions. He maintained that the Pentagon respects OpenAI’s technical safeguards but that operational authority rests with Defense Secretary Pete Hegseth.
The controversy has spilled into the consumer market. According to Sensor Tower data cited by multiple outlets, U.S. ChatGPT uninstall rates surged by as much as 295 per cent in recent days, while Anthropic’s Claude climbed to the top of Apple’s App Store rankings. Installs of Claude reportedly rose sharply over the same period.
AI systems are already embedded in military workflows, including data analysis and logistics. NATO’s Task Force Maven integrates AI tools to process satellite data and intelligence reports, though officials stress human oversight. Lieutenant Colonel Amanda Gustave said the task force is “always introducing a human in the loop” and that it “would never be the case” that an AI would “make a decision for us.”
