February 26, 2026

U.S. Defense Secretary Pete Hegseth has reportedly set a Friday deadline for Anthropic to remove internal restrictions that limit how its AI can be used by the Pentagon, or risk losing future Defense Department contracts. According to Politico and Axios, the Defense Department is preparing contingency plans in case Anthropic refuses to revise its policies.
The company maintains two “red lines”: it will not support AI-controlled weapons or mass domestic surveillance of American citizens. Sources familiar with the matter told CNN that Anthropic believes current AI systems are not reliable enough to operate weapons autonomously and that there is no legal framework governing large-scale AI surveillance of Americans.
Pentagon officials argue that compliance with U.S. law and the Law of Armed Conflict should be the governing standard for military AI deployment. As reported by Axios, defense leaders want fewer company-imposed restrictions layered on top of existing legal requirements as the department accelerates AI integration across logistics, intelligence analysis, and operational planning.
The disagreement is not about whether the proposed uses are illegal. It is about whether a vendor can unilaterally narrow permissible use cases beyond its statutory obligations. There is also a structural dimension: while the department could move to replace Anthropic, the U.S. government retains broad national security authorities that can, in certain circumstances, compel cooperation from critical technology providers. That reality underscores the uneven leverage between public institutions and private AI developers.
For technology executives watching the defense market, the episode signals a deeper question. As AI becomes embedded in national security infrastructure, the line between corporate policy and sovereign authority is being tested in real time.
