Pentagon and AI Company Clash Over Use of New Technology


San Francisco, CA – A public showdown is brewing between AI powerhouse Anthropic and the Pentagon, as the tech company’s CEO, Dario Amodei, has outright refused to grant the Defense Department unrestricted access to its advanced AI model, Claude. This bold stance comes despite an ultimatum from Defense Secretary Pete Hegseth threatening to blacklist Anthropic if it did not comply.

Amodei articulated his position in a Thursday blog post, stating the company “cannot in good conscience accede” to the military’s terms. This refusal immediately drew sharp criticism from Emil Michael, the DOD’s undersecretary for research and engineering, who took to social media platform X to lambast Amodei, accusing him of having a “God-complex” and “putting our nation’s safety at risk.”

The ongoing dispute has ignited a lively discussion among tech leaders, former military officials, and policy experts, offering a range of perspectives on the implications of Anthropic’s decision.

Here’s what some prominent voices are saying:

Lt. Gen. (Retired) John N.T. “Jack” Shanahan, a former US Air Force lieutenant general who worked on the Pentagon’s AI efforts and is now a senior fellow at the Center for a New American Security, expressed understanding for Anthropic’s position. He emphasized that current AI models are not suitable for fully lethal autonomous weapon systems or mass surveillance of US citizens, calling these “reasonable redlines.”

Shanahan also voiced regret that the disagreement became so public, advocating for a collaborative, behind-the-scenes approach to establish governance for frontier AI models.

Palmer Luckey, founder of the weapons and defense software startup Anduril, pointed to historical precedent, citing a 1948 executive order by then-President Harry S. Truman that compelled railway companies to cooperate with the US military during a strike. Luckey later stressed that military policy should remain in the hands of elected leaders, not corporate executives – a foundational principle for his company, which has secured numerous Defense Department contracts for AI and drone systems.

Dean Ball, a former AI advisor to the Trump administration, sharply criticized the Pentagon’s contradictory position. In an interview with Politico, Ball called it “a whole different level of insane” for the DOD to simultaneously designate Anthropic as a security risk while asserting the company’s models are essential for military AI.

Michael McFaul, a Stanford University political science professor and former US ambassador to Russia, lauded Amodei’s statement as “strong, principled, and very reasonable,” offering a simple “Bravo.”

Thomas Wright, who served as senior director for strategic planning at the US National Security Council during the Biden administration, noted that “Many firms folded for a lot less money than what Anthropic stands to lose here,” highlighting the potential financial implications for the AI company.

Jeff Dean, Google’s chief scientist, appeared to back Amodei’s stance, reiterating his long-held opposition to mass surveillance and autonomous weapons. He previously signed a 2018 pledge stating that “the decision to take a human life should never be delegated to a machine.”

Gary Marcus, a cognitive scientist and AI commentator, applauded Amodei’s “incredibly brave statement,” publishing a blog post on the same day warning of “monstrous precedents” if Anthropic were to concede to Hegseth’s demands. Marcus expressed concerns about the Defense Secretary’s understanding of AI’s capabilities and limitations.

As this high-stakes debate unfolds, the tech world and national security experts are closely watching to see how this unprecedented clash between an innovative AI company and the US military will ultimately resolve.

