US Military Reportedly Used Anthropic's AI Claude in Venezuela Raid

The Wall Street Journal has reported that the US military used Anthropic's artificial intelligence model, Claude, during a high-profile operation in Venezuela aimed at capturing President Nicolás Maduro. According to sources cited in the report, the incident marks a significant step in the US Department of Defense's integration of AI into its military operations.

Details of the Operation and AI Deployment

Claude was reportedly deployed through Anthropic's partnership with Palantir Technologies during the raid, which involved extensive bombing in Caracas and, according to Venezuela's defense ministry, resulted in 83 fatalities. Palantir, a contractor for US defense and law enforcement agencies, declined to comment on the claims. Anthropic, the developer of Claude, also refrained from confirming its involvement but emphasized that any use of its AI tool must adhere strictly to its policies, which prohibit applications involving violence, weapons development, or surveillance.

If confirmed, this would be the first known instance of a commercial AI model being used in a classified US Department of Defense operation. Claude's capabilities, which range from processing documents to piloting autonomous drones, raise questions about how it was specifically employed in this context. The US Department of Defense has not issued any official statement regarding the allegations.

Broader Implications and Ethical Concerns

The revelation comes amid a growing trend of militaries worldwide, including those of the US and Israel, incorporating AI into their arsenals. Israel, for instance, has deployed autonomous drones in Gaza and used AI for targeting, while the US has applied AI in strikes across Iraq and Syria. Critics have voiced strong concerns about the risks of AI in warfare, particularly the potential for targeting errors and the ethical dilemmas posed by autonomous weapons systems making life-and-death decisions.

AI companies, including Anthropic, are grappling with the ethical implications of their technologies in defense sectors. Anthropic's CEO, Dario Amodei, has advocated for regulatory measures to mitigate harms from AI deployment and expressed caution over its use in lethal autonomous operations and surveillance. This cautious approach has reportedly caused friction with the US defense department, with Secretary of War Pete Hegseth stating in January that the department would not employ AI models that hinder combat capabilities.

Future Developments and Industry Responses

Meanwhile, the Pentagon announced in January a collaboration with Elon Musk's xAI, and it continues to use customized versions of Google's Gemini and OpenAI models for research support. As AI becomes increasingly integral to military operations, debates over regulation, accountability, and ethical boundaries are expected to intensify, shaping the future of defense technologies and global security policy.