Pentagon Threatens AI Firm Over Military Use Restrictions


WASHINGTON D.C. – A high-stakes showdown is brewing between the Pentagon and leading artificial intelligence firm Anthropic, with the military threatening to cancel a $200 million contract if the company doesn’t lift restrictions on how its Claude AI system can be used by defense forces.

Sources close to the discussions say the dispute escalated after Anthropic questioned whether its product had been used in a January military operation to capture Venezuelan leader Nicolás Maduro, a query the Pentagon took as a hint of disapproval. The Department of Defense (DoD) is adamant that AI companies must allow their products to be utilized for all lawful military purposes without company oversight or approval.

Anthropic, which positions itself as a safety-oriented AI company, has reportedly drawn “red lines” against using its products for fully autonomous weapons or mass surveillance of Americans.

War Secretary Pete Hegseth delivered a firm ultimatum to Anthropic CEO Dario Amodei during a Tuesday meeting at the Pentagon. While praising the company’s technology and expressing a desire to continue the partnership, Hegseth reportedly outlined severe repercussions if Anthropic refused to comply. These include the termination of the lucrative contract, designation as a supply chain risk (potentially hindering future defense collaborations), or even the invocation of the Defense Production Act to compel access to the technology.

This dispute carries significant weight as Claude is currently the only advanced commercial AI model operating within the Pentagon’s classified networks, a testament to its capabilities under the $200 million contract awarded in summer 2025. Pentagon officials argue that the DoD cannot depend on a private company that imposes categorical restrictions on lawful uses of its technology. Hegseth reportedly likened the situation to being told the military couldn’t use a specific aircraft for a mission.

The clash represents an early and critical test of who ultimately controls the guardrails on advanced AI within U.S. defense systems: private companies or the Pentagon. The outcome is expected to significantly influence how the military partners with leading AI developers as it integrates powerful machine learning tools into national security operations.

During the meeting, Amodei reportedly defended Anthropic’s restrictions, asserting they would not impede lawful and legitimate War Department operations. A senior Pentagon official countered, stating their position “has nothing to do with mass surveillance or autonomous targeting” and emphasized that “there’s always a human involved and the department always follows the law.” Both sides indicated that fully autonomous weapons are not currently contemplated under the department’s lawful use framework, suggesting the dispute is as much about control as it is about specific battlefield applications.

The potential invocation of the Defense Production Act, a rare move, underscores the Pentagon’s determination to secure access to frontier AI systems deemed critical for defense needs. A supply chain risk designation would also significantly impact Anthropic’s ability to work with federal vendors. Terminating the contract would not only end the partnership but also disrupt existing workflows within the Pentagon’s classified networks, requiring a transition to an alternative provider.

Pentagon officials also noted that Elon Musk’s xAI has already agreed to allow its Grok chatbot to be used for all lawful purposes, including potential integration into classified systems, with other AI firms reportedly “close” to similar arrangements.

In a statement, an Anthropic spokesperson confirmed the meeting between CEO Dario Amodei and Secretary Hegseth, stating, “We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
