Pentagon Threatens Anthropic Over AI Safeguards in Military Use Dispute
US Military Pressures Anthropic on AI Safeguards

US military leaders, including Defense Secretary Pete Hegseth, held a high-stakes meeting with executives from artificial intelligence company Anthropic on Tuesday to resolve a contentious disagreement over the military's access to the firm's powerful AI model, Claude. According to an Axios report, Hegseth issued an ultimatum to Anthropic CEO Dario Amodei: accept the Department of Defense's terms by the end of Friday or face significant penalties.

Clashing Priorities: Safety vs. Military Access

Anthropic, which positions itself as the most safety-conscious among leading AI developers, has been locked in weeks of negotiations with the Pentagon over permissible military applications of its large language model. Defense officials are pushing for unrestricted access to Claude's capabilities, while Anthropic has reportedly resisted allowing its technology to be utilized for mass surveillance or autonomous weapon systems capable of lethal action without human oversight.

The Department of Defense has already integrated Claude into certain operations but has threatened to terminate the relationship over what it views as unnecessary restrictions imposed by the company. "I think if someone wants to make money from the government, from the US Department of War, those guardrails ought to be tuned for our use cases – so long as they're lawful," stated Emil Michael, the Pentagon's chief technology officer and former Uber executive, in an interview with Defense Scoop last week.

Broader Implications for AI Industry

This confrontation raises critical questions about whether AI companies will resist government demands for military applications of their products—a longstanding point of controversy among researchers and ethical AI advocates. The Pentagon has warned of punitive measures against Anthropic, including canceling a substantial contract and designating the company as a "supply chain risk."

In July of last year, the DoD established agreements with several major AI firms, including Anthropic, Google, and OpenAI, offering contracts valued up to $200 million. Until recently, Anthropic's Claude was the sole AI model authorized for use within the military's classified systems. However, on Monday, the DoD signed a new deal permitting the use of Elon Musk's xAI chatbot in these systems, despite recent backlash over its generation of nonconsensual sexualized images of children.

Political and Ethical Dimensions

The meeting occurs just one month after the US military reportedly employed Claude to assist in the capture of Venezuelan leader Nicolás Maduro. There has been a concerted push from the Trump administration to integrate AI into military operations, with President Donald Trump repeatedly vowing that the US will dominate the global AI arms race.

Anthropic's Amodei has consistently advocated for stronger AI regulation, and his company supports a political action committee promoting enhanced artificial intelligence safeguards. Amodei opposed Trump during the 2024 presidential campaign, and Anthropic has hired several former Biden staffers—factors that reportedly led a pro-Trump venture capital firm to withdraw from an investment in Anthropic earlier this year.

Escalating Military AI Investments

The Pentagon has invested billions of dollars in recent years to develop AI-enabled technologies, ranging from unmanned aerial drones to automated targeting systems. These advancements have intensified ethical debates about delegating lethal decision-making authority to artificial intelligence. These discussions are no longer theoretical, as evidenced by the use of deadly semiautonomous drones in the Ukraine conflict that can operate without direct human control.

Both xAI and OpenAI have reportedly acquiesced to the government's terms regarding AI usage, with a defense official noting that OpenAI permitted its model to be used for "all lawful purposes." OpenAI did not immediately respond to requests for comment on its agreement with the government. The outcome of the Anthropic-Pentagon negotiations will likely set a precedent for future interactions between AI developers and military entities worldwide.