Pentagon Labels Anthropic a National Security Threat, Sparking Industry Outcry
**Washington, D.C.** – In a dramatic move that could send ripples across the American artificial intelligence landscape, the Department of Defense (DoD) has designated Anthropic, a leading U.S. AI company, as a “supply-chain risk to national security.” This unprecedented classification effectively bans the company from conducting business with the U.S. military.
The designation, which Anthropic confirmed receiving on Wednesday, requires the Pentagon and its contractors to immediately cease using Anthropic’s AI services for all defense-related operations. Defense Secretary Pete Hegseth foreshadowed the decision late last week on X, formerly Twitter.
This development follows months of escalating tensions and contentious negotiations between the military and Anthropic regarding the permissible scope of military use for the company’s Claude AI systems. While generative AI models like Claude have been rapidly adopted by the Trump administration, including for defense purposes, Anthropic CEO Dario Amodei had sought stricter assurances that its technology would not be employed for lethal autonomous weapons or widespread domestic surveillance. The Pentagon, conversely, aimed for expansive “any lawful use” of these powerful AI systems.
Amodei issued a statement Thursday night, expressing disagreement with the designation. “We do not believe this action is legally sound, and we see no choice but to challenge it in court,” Amodei stated.
He emphasized a shared commitment to national security, writing, “Anthropic has much more in common with the Department of War than we have differences… All our future decisions will flow from that shared premise.”
Other AI Giants Step In
Until recently, Anthropic was the sole AI provider cleared for use on the Defense Department’s classified networks. However, within hours of Secretary Hegseth’s initial announcement last week regarding the potential “supply chain risk” label, OpenAI CEO Sam Altman declared a new agreement with the Pentagon.
This deal allows OpenAI’s services to be utilized in classified settings, positioning it to potentially absorb a significant portion of Anthropic’s former business with the Pentagon. Elon Musk’s xAI and its Grok AI systems also secured a similar agreement with the Pentagon last week.
Amodei clarified that the ban on military business does not extend to contracts with military suppliers for non-defense-related purposes. Anthropic maintains extensive partnerships with major tech companies such as Amazon and Microsoft, which themselves hold substantial contracts with the Pentagon.
A senior Defense Department official, speaking to NBC News on Thursday, confirmed the immediate effect of the supply chain risk determination. The official underscored the military’s stance, stating, “This has been about one fundamental principle: the military being able to use technology for all lawful purposes. The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk.”
Secretary Hegseth’s post last Friday indicated that Anthropic would be allowed a maximum of six months to facilitate a “seamless transition to a better and more patriotic service.” He sharply criticized Anthropic’s conduct, stating, “Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.” He reiterated the Pentagon’s unwavering demand for “full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”
Industry Concerns Mount Over Precedent
The application of a “supply chain risk” label to an American company is unprecedented, typically reserved for foreign adversaries and their associated enterprises. Legal observers have expressed skepticism about the legal viability of the designation, suggesting it may serve more as a deterrent to other companies considering similar restrictions on military use.
The mere threat of such a designation had already caused significant unease within Washington and the tech industry. Throughout the week preceding the official announcement, defense experts, Anthropic rival OpenAI, and members of Congress actively sought to de-escalate tensions between Anthropic and the Pentagon. An influential tech advocacy group, counting Nvidia and Apple among its members, even sent a letter to Secretary Hegseth on Wednesday, urging him to refrain from formalizing the “supply chain risk” label.
Industry investors fear that targeting one of America’s largest and most successful AI companies could establish a dangerous precedent, potentially stifling investment and chilling the growth of the U.S. AI sector.
Last Friday, just an hour before a 5 p.m. ET deadline set by Hegseth for an agreement, President Donald Trump announced his intention to bar Anthropic from other federal agencies. Trump asserted that “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.”
Anthropic’s Claude systems are currently integrated into the Pentagon’s operations through an agreement with data analytics company Palantir. Recent reports from The Washington Post and The Wall Street Journal, unconfirmed by NBC News, indicate that Anthropic’s AI has been utilized for intelligence assessment and target identification in the ongoing conflict in Iran.
Anthropic’s initial deal with Palantir in 2024 allowed its services on classified networks, followed by a $200 million contract in July to “prototype frontier AI capabilities that advance U.S. national security.” Earlier negotiations saw Anthropic agree to allow the Pentagon to use its AI systems for cyber and missile defense.
“Essentially No Gain”
Some experts have highlighted a perceived inconsistency in applying the “supply chain risk” label to a major American AI company while refraining from doing so with DeepSeek, a leading Chinese AI company that has faced accusations of unfair practices. DeepSeek has not responded to requests for comment from various news organizations.
“We’re treating an American AI company worse than we’re treating a Chinese Communist Party-controlled AI company,” remarked Michael Sobolik, an expert on AI and China issues and senior fellow at the Hudson Institute. “We cannot hobble the most innovative, successful American companies for asking quintessentially American questions about military use and privacy.”
Sobolik further warned, “The U.S. government risks cutting off the legs of one of our best AI companies in the early years of this AI race. If we do that, where America’s frontier models are qualitatively and quantitatively better than China’s, it does seem like cutting off our nose to spite our own face.”
Tim Fist, director of emerging technology at the Washington-based Institute for Progress think tank, echoed these sentiments, stating that the new designation would be counterproductive to America’s AI aspirations. “The supply chain risk designation, normally used on foreign adversaries, is both hurting one of America’s top AI companies and making other companies much more hesitant to work with the federal government,” Fist said. “The designation hurts the AI industry and thus US national security for essentially no gain.”