Anthropic’s ongoing battle with the Pentagon over the military’s use of its AI systems flared up this week around a hypothetical nuclear strike scenario, according to new reporting from the Washington Post.
The Claude AI builder has frustrated the Pentagon by objecting to its systems being used for autonomous weaponry and the mass surveillance of US citizens. To cut to the heart of the debate, a defense official told WaPo, the Pentagon’s technology chief posed an extreme hypothetical: would Anthropic let the military use Claude to help shoot down a nuclear-armed intercontinental ballistic missile?
Anthropic CEO Dario Amodei’s response apparently irritated Pentagon leaders. “You could call us and we’d work it out,” was how the defense source characterized it, in WaPo’s words.
An Anthropic spokesperson denied that Amodei gave that response and called the account “patently false.” The company had agreed to allow Claude to be used for missile defense, they said.
Be that as it may, it’s clear that the parties are failing to see eye to eye. The standoff centers on the Pentagon’s demands that Anthropic loosen its safeguards around Claude, demands that are making the company uneasy.
For months, Trump administration figures both inside and outside the DoD have piled pressure on Anthropic, a company founded by former OpenAI employees with an avowed focus on safety. Amodei has criticized the administration’s attempts to curb AI regulation, which included a proposed ban on all state-level AI regulation. Trump officials such as AI czar David Sacks retaliated by calling Amodei “woke” and accusing him of “fear-mongering.”
The tensions have mounted in recent weeks. During a tense meeting with Defense Secretary Pete Hegseth on Tuesday, Amodei was reportedly presented with a series of ultimatums. If Anthropic didn’t allow the military unrestricted use of its AI, the Pentagon could cut Anthropic off from all current and future contracts, including the outstanding $200 million contract to deploy Claude across the military signed last summer, by declaring the company a supply chain risk. The Pentagon also threatened to invoke the Defense Production Act, a Cold War-era law, to force Anthropic to hand over its AI technology. Using the law in this context would be legally dubious and almost certainly challenged in court.
In a statement Thursday, Amodei said that Anthropic could not agree to the Pentagon’s “final” proposal for unrestricted use of Claude systems, despite Hegseth’s threats. Defense officials fumed at the refusal. On X, under secretary of defense for research and engineering Emil Michael accused Amodei of having a “God-complex,” adding that Amodei “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.”
Pentagon spokesperson Sean Parnell insisted on X that the Pentagon had “no interest in using AI to conduct mass surveillance of Americans” or to use AI to “develop autonomous weapons that operate without human involvement.” Instead, Parnell claimed, the Pentagon is simply demanding to use Anthropic’s AI for “all lawful purposes.”
“We will not let ANY company dictate the terms regarding how we make operational decisions,” Parnell added. “They have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk.”
It’s unclear what either side’s next move will be. But Anthropic may no longer be alone in its fight. Axios reported that rival OpenAI CEO Sam Altman wrote in a memo to staff that he would draw the same line in the sand as Anthropic over the military’s use of OpenAI’s own AI products.
“This is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance,” Altman wrote. “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions.”
Anthropic may be getting additional reinforcements from elsewhere in Silicon Valley. Two coalitions of workers that include employees from Google, Microsoft, Amazon, and OpenAI have demanded that their employers join Anthropic in refusing the military’s demands for unrestricted use of AI systems, Bloomberg reported.
The nuclear scenario proposed by the Pentagon during its talks with Anthropic, while an extreme hypothetical, underscores how deeply it intends to deploy AI tech. The US, along with other major powers like France and China, has agreed to require a human to be involved in all decisions to use nuclear weapons. But an AI could still influence a human’s decision to press the big red button, Paul Dean, vice president of the global nuclear program at the nonprofit Nuclear Threat Initiative, warned WaPo. In recent war games, leading AI models including Claude, Gemini, and ChatGPT all opted to deploy nukes in the vast majority of scenarios.
“It’s not simply ensuring that there’s a human being in the decision-making loop,” Dean told WaPo. “The question is, to what extent will AI impact that human decision-making?”
More on AI: Anthropic Drops Its Huge Safety Pledge That Was Supposedly the Whole Point of the Company