Pentagon ditches Anthropic AI over “security risk” and OpenAI takes over
On Friday the US Pentagon cut ties with Anthropic, the company behind Claude AI. Defense Secretary Pete Hegseth designated the San Francisco-based company a “supply-chain risk to national security.”
The supply-chain risk designation means that no contractor, supplier, or partner doing business with the US military can deal with Anthropic. The label has previously been applied only to foreign adversaries such as Huawei, and using it against a US company marks a rare escalation in a government-industry dispute. According to reports, President Donald Trump also ordered every federal agency to stop using Anthropic’s technology.
What Anthropic wouldn’t budge on
Anthropic called the designation “unlawful and politically motivated” and said it intends to challenge it in court.
At the center of the dispute is how far Anthropic believes its models should be allowed to go inside military systems. Anthropic, which was the first frontier AI company deployed on the military’s classified networks, wanted two contractual restrictions on its AI model Claude, as outlined in its response to the Pentagon’s announcement: it barred the Pentagon from using its technology for mass domestic surveillance of Americans, and it refused to allow its technology to be used in fully autonomous weapons.
The Pentagon had previously demanded that all AI vendors agree to “all lawful purposes” language as part of their contracts. Anthropic told ABC that what the Pentagon finally offered left the door open for the government to violate the company’s no-surveillance and no-weapons clauses.
Defense Secretary Hegseth responded with a statement cancelling Anthropic’s $200m Pentagon contract, awarded last July. He accused Anthropic of attempting to seize veto power over military operations and called the company’s position fundamentally incompatible with American principles.
Anthropic’s CEO Dario Amodei called the government’s response retaliatory and punitive and promised to challenge the designation in court.
Legal scholars suggest that the AI company could have a strong case, questioning whether Hegseth can meet the statutory requirements for such a designation, which is meant to protect military systems from adversarial sabotage rather than to settle a commercial disagreement over contract terms.
Dan W. Ball, senior fellow at the American Foundation for Innovation, called the Pentagon’s move “attempted corporate murder,” arguing that Google, Amazon, and NVIDIA would have to detach themselves from Anthropic if Hegseth got his way. Amazon is Anthropic’s primary cloud computing provider, and Anthropic also makes extensive use of Google’s data centers. Both companies are investors in Anthropic, as is NVIDIA, which also partners with the AI company on GPU engineering. If the Pentagon’s designation restricts federal contractors from integrating Anthropic technology into defense-related systems, those partners could be required to separate or ringfence any federal-facing work involving the company.
OpenAI steps in
In a whirlwind of policy changes by the US military, the Pentagon also signed a deal with ChatGPT creator OpenAI on Friday evening, just a few hours after dropping Anthropic.
OpenAI CEO Sam Altman said the agreement preserved the same principles Anthropic had been blacklisted for defending.
The difference, according to Altman, is the enforcement mechanism. Instead of hard contractual prohibitions, OpenAI accepted the “all lawful purposes” framework but layered on architectural controls: cloud-only deployment, a proprietary safety stack the Pentagon agreed not to override, and cleared engineers embedded forward. OpenAI said these protections made the company confident that the Pentagon couldn’t cross the red lines it shares with Anthropic.
Altman reportedly said Anthropic’s approach differed because it relied on specific contract language rather than existing legal protections, adding Anthropic “may have wanted more operational control than we did.”
The morning after
The policy dispute did not immediately change how existing systems were operating. According to reporting by The Wall Street Journal and Axios, US Central Command used Anthropic’s AI during Operation Epic Fury, a coordinated US–Israeli operation targeting Iran. The outlets reported that the system was used for intelligence assessment, target analysis, and operational modeling.
Claude remained in use because it was already embedded in certain classified military systems. As a senior defense official previously told Axios:
“It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”
Hegseth announced a six-month period during which the Pentagon will strip Anthropic’s AI out of its systems.
Consumers vote with their feet
The dispute has also prompted reactions from some AI industry employees and users. More than 875 employees across Google and OpenAI signed an open letter backing Anthropic’s stance. According to the letter:
“They’re trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.”
A consumer boycott organized under the name QuitGPT is urging people to stop using ChatGPT and has planned a protest at OpenAI’s HQ this week. Claude, meanwhile, rocketed to the top of Apple’s App Store over the weekend.
