Pentagon Deal for AI Shakes Up Tech World

In a surprising turn of events, the Pentagon has found itself at the center of a high-stakes AI showdown, with tech giants OpenAI and Anthropic vying for influence and sparking a furious debate about the government’s relationship with private enterprise. Just hours after the U.S. government slapped OpenAI’s rival, Anthropic, with an unprecedented “supply chain risk” designation, OpenAI announced a deal to integrate its AI models into classified Pentagon systems.

The “supply chain risk” label, a first for an American company, has sent shockwaves through the tech world. Legal and policy experts are raising serious questions about the government’s motives, particularly as the designation appears to be a direct response to Anthropic’s refusal to agree to certain contractual terms. Anthropic has vowed to fight the designation in court.

Adding another layer of intrigue, OpenAI CEO Sam Altman stated that OpenAI's Pentagon agreement includes the same two limitations on military use that Anthropic had insisted upon and that the government had previously rejected. These limitations concern the use of AI for mass surveillance of Americans and in lethal autonomous weapons, with humans retaining "appropriate levels of human judgment."

While the exact wording of OpenAI's agreement remains somewhat opaque, the company appears to have enshrined these limitations differently than Anthropic, which sought to spell them out explicitly in contract language. OpenAI's agreement allows the Pentagon to use its tech for "any lawful purpose," while Altman also asserted that the limitations were "put into our agreement." This suggests the contract may simply point to existing U.S. laws and military policies that already prohibit such uses.

OpenAI further clarified that the Pentagon agreed to allow the company to build technical safeguards into its AI models to prevent their misuse for mass surveillance or in autonomous weapons. Altman even went so far as to suggest that the Department of War should offer these same terms to all AI companies, a remark some interpreted as a subtle dig at Anthropic.

In a follow-up statement, OpenAI boasted that its agreement contains more safeguards than any previous deal for classified AI deployments, including Anthropic’s. Beyond the bans on mass domestic surveillance and autonomous weapons, the deal also prohibits the use of OpenAI technology for “high-stakes automated decisions,” such as “social credit” systems.

“In our agreement, we protect our redlines through a more expansive, multi-layered approach,” the company explained, highlighting its discretion over its safety stack, cloud deployment, cleared personnel involvement, and strong contractual protections, all in addition to existing U.S. legal safeguards.

This development is particularly notable given that Altman had previously publicly supported Anthropic’s stance on these limitations, and numerous OpenAI employees had signed an open letter backing Anthropic CEO Dario Amodei’s insistence on such restrictions.

The Fallout of a “Supply Chain Risk”

The full extent of the damage to Anthropic’s business from the “supply chain risk” designation is still unfolding. While a $200 million Pentagon contract was canceled, this is a relatively minor hit for a company reportedly on track to generate at least $18 billion in revenue this year.

The greater concern lies in how this designation will impact Anthropic's broader business relationships. President Trump's social media announcement that all federal departments are being ordered to stop using Anthropic's AI, with a six-month phase-in, adds another layer of uncertainty.

However, the most significant threat stems from Secretary of War Pete Hegseth's interpretation of the designation. Hegseth's social media post, stating that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," could have catastrophic consequences.

Many large enterprises that use Anthropic’s Claude models also do business with the U.S. military. This could even force major investors like Amazon, Google, and Nvidia to divest from Anthropic, creating a massive funding gap and hindering future fundraising efforts.

This fight with the Pentagon casts a shadow over Anthropic’s recent $30 billion venture capital funding round, which valued the company at $380 billion, and its reported plans for an IPO.

Many legal analysts and AI policy experts are questioning Hegseth’s broad interpretation. Peter Harrell, a former Biden administration National Security Council official, argued that the designation should only apply to Department of War contracts, not private agreements. Dean Ball, a former AI policy advisor to the Trump administration, called Hegseth’s interpretation “almost surely illegal” and “attempted corporate murder,” sending a chilling message to businesses considering working with the U.S. government.

Legal experts also note that even a narrower interpretation of the “supply chain risk” designation may not withstand a legal challenge. Questions are being raised about whether the government conducted a required risk assessment and notified Congress before taking action.

Furthermore, Amos Toh, a senior counsel at the Brennan Center for Justice, highlighted that the designation requires proof of potential sabotage, subversion, or manipulation by an adversary, and it’s unclear how Anthropic’s usage restrictions could pose such a threat. He also questioned whether the Pentagon genuinely pursued less intrusive measures before escalating the dispute so rapidly.

Even if Anthropic ultimately prevails in court, the damage may already be done. As independent analyst Shenaka Anslem Perera posted on social media, “It will take years to resolve in court. And in the meantime, every general counsel at every Fortune 500 company with any Pentagon exposure is going to ask one question: is using Claude worth the risk?”
