Anthropic CEO Dario Amodei has said the company will not remove key safeguards from its AI system, Claude, despite mounting pressure from the United States Department of War.
In a detailed public statement, Amodei warned that certain uses of artificial intelligence could undermine democratic values rather than protect them.
“Regardless, these threats do not change our position: we cannot in good conscience accede to their request,” Amodei said, referring to demands that Claude be opened for unrestricted military use.
Amodei also hit out at the Department of War for trying to strong-arm Anthropic into abandoning key safety measures, by threatening to designate the company as a “supply chain risk” and to invoke the Defense Production Act to force Claude to comply with instructions.
“These threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security,” added Amodei.
Anthropic was the first frontier AI firm to deploy its models within classified US government networks and National Laboratories. Claude is currently used across defense and intelligence agencies for intelligence analysis, operational planning, cyber operations and modeling.
According to the Anthropic CEO, the Department of War had demanded mass domestic surveillance and fully autonomous weapons — clauses that were not part of the original contract.
Amodei argued that AI-powered surveillance of Americans at scale is incompatible with democratic principles, warning that such systems can compile vast amounts of public data into detailed personal profiles automatically.
On autonomous weapons, he acknowledged they may eventually play a role in defense but said today’s AI systems are not reliable enough to remove humans from targeting decisions. “We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” he stated.
Amodei also revealed that defense officials have threatened to cut Anthropic off from government systems. Despite the standoff, he said the company remains ready to support US national security — but without compromising its core safeguards. He stressed that long-term trust in AI depends on clear ethical boundaries and responsible deployment, especially in high-stakes military environments.