US Military Used Claude AI in Iran Strikes Despite Trump's Ban

The United States military reportedly used Anthropic's Claude artificial intelligence model during its massive joint bombardment of Iran with Israel, despite President Donald Trump's explicit order to sever all federal ties with the company just hours before the attack commenced. The revelation, first reported by the Wall Street Journal and Axios, highlights the profound operational challenge of disentangling advanced AI tools from modern military infrastructure once they have become deeply embedded in strategic processes.

Intelligence and Targeting Applications

According to the reports, US military command employed Claude for multiple critical functions during the Iran strikes: gathering and analyzing intelligence, assisting in the selection of high-value targets, and running complex battlefield simulations to predict outcomes and optimize strike effectiveness. The deployment occurred despite Trump's forceful declaration on Friday, in which he denounced Anthropic on his Truth Social platform as a "Radical Left AI company run by people who have no idea what the real World is all about" and ordered all federal agencies to immediately cease using Claude.

Escalating Tensions and Ideological Clash

The current conflict traces back to January, when the US military used Claude during its controversial raid targeting Venezuelan President Nicolás Maduro. Anthropic publicly objected to this application, citing its strict terms of service that prohibit using its AI for violent purposes, weapons development, or surveillance operations. Since that incident, relations between the Trump administration, the Pentagon, and Anthropic have deteriorated significantly.


In a lengthy social media post on Friday, Defense Secretary Pete Hegseth escalated the rhetoric, accusing Anthropic of "arrogance and betrayal" and declaring that "America's warfighters will never be held hostage by the ideological whims of Big Tech." Hegseth demanded unrestricted access to all of Anthropic's AI models for any lawful military purpose, while simultaneously acknowledging the practical difficulties of rapidly removing these tools from operational systems.

Transition Period and Competitive Landscape

Recognizing the integration challenges, Hegseth announced that Anthropic would continue providing services to the military for a maximum of six months to facilitate what he described as "a seamless transition to a better and more patriotic service." This transition period underscores how extensively Claude had been woven into military planning and execution frameworks before the ban was instituted.

Meanwhile, competitor OpenAI has moved swiftly to fill the void created by the rupture with Anthropic. CEO Sam Altman confirmed that his company has reached an agreement with the Pentagon for deploying OpenAI's tools, including ChatGPT, within the military's classified networks. This development signals a significant shift in the defense sector's AI partnerships and highlights the ongoing competition among leading AI firms for lucrative government contracts.

The situation reveals fundamental tensions between technological ethics, corporate policies, national security imperatives, and political ideology. As AI becomes increasingly central to military operations worldwide, the clash between Anthropic's ethical guidelines and the Pentagon's operational requirements exemplifies the complex governance challenges emerging at the intersection of advanced technology and national defense strategy.
