Despite ongoing friction between AI startup Anthropic and the U.S. government over the use of artificial intelligence models in classified military networks, the Pentagon is willing to exempt limited use of Anthropic’s AI tools in cases of national security. The move comes even as the U.S. Department of Defense moves forward with a broader ban on Anthropic after labeling it a “supply-chain” risk.
According to a Reuters report, an internal memo dated March 6 and signed by Pentagon Chief Information Officer Kirsten Davies states that exemptions may be granted in “rare and extraordinary circumstances” when the technology is considered essential to national security operations and no practical alternative exists.
Under the policy, any military unit seeking an exemption must submit a detailed risk mitigation plan explaining why the AI tools are necessary and how potential security risks will be managed. Approval will only be considered for mission-critical activities that directly support national defense operations.
The memo highlights the practical challenges the Pentagon may face while attempting to fully remove Anthropic technology from its systems and supply chains.
The directive follows weeks of internal debate within the Defense Department over safeguards governing the military’s use of artificial intelligence. The dispute ended when Defense Secretary Pete Hegseth labeled Anthropic a supply-chain risk and ordered a ban on its use by the Pentagon and its contractors.
Anthropic has responded by filing a lawsuit aimed at blocking the Pentagon from enforcing the ban.
The memo also instructs officials to prioritize removing Anthropic products from highly sensitive systems tied to critical missions, including nuclear weapons and ballistic missile defense. Defense contracting officers have been given 30 days to notify contractors about the policy, and companies must certify full compliance within 180 days.