OpenAI Forced to Revise Pentagon AI Agreement Following Intense Public Criticism
OpenAI has been compelled to amend its recently signed contract with the U.S. Department of War after CEO Sam Altman acknowledged that the initial announcement appeared "opportunistic and sloppy." The revision responds to widespread fears that the company's advanced artificial intelligence technology could be exploited for domestic mass surveillance, a controversy that has grown into a significant public relations crisis for the San Francisco-based AI giant.
Contract Amendments and Executive Apology
In a late Monday post on X, Altman announced that OpenAI would "explicitly" prohibit its systems from being intentionally used for domestic surveillance of U.S. persons and nationals. Additionally, intelligence agencies, including the National Security Agency, will require a "follow-on modification" before gaining access to OpenAI's models under the contract. Altman expressed regret over the rushed Friday announcement, stating, "We shouldn't have rushed to get this out on Friday. The issues are super complex, and demand clear communication." He emphasized that the company aimed to de-escalate tensions but recognized the poor optics of the initial rollout.
Intensifying Rivalry with Anthropic
The contract revision follows escalating competition between OpenAI and rival Anthropic, highlighting how deeply artificial intelligence has become intertwined with U.S. military strategy. Anthropic's CEO, Dario Amodei, had refused to compromise on corporate "red lines" that forbid using its Claude model for mass domestic surveillance or fully autonomous weapons. In retaliation, U.S. Defense Secretary Pete Hegseth labeled Anthropic a supply chain risk, effectively banning Pentagon contractors from engaging with the company.
Former President Donald Trump supported this move, calling Anthropic "leftwing nut jobs" and ordering federal agencies to phase out its technology within six months. Anthropic plans to challenge the designation in court, arguing it is "legally unsound and sets a dangerous precedent." Despite the setback, Claude surged to the top of Apple's U.S. App Store free app rankings over the weekend, overtaking ChatGPT, with record sign-ups reported daily.
Public Backlash and Employee Concerns
The controversy triggered a substantial public backlash against OpenAI. Uninstalls of ChatGPT jumped nearly 300 percent day-over-day on Saturday, against a typical daily change of about nine percent, while a "delete ChatGPT" campaign trended on social media as critics accused the company of enabling military surveillance. OpenAI had initially touted the contract as having "more guardrails than any previous agreement for classified AI deployments," including a red line against directing autonomous weapons systems.
However, nearly 900 employees from OpenAI and Google signed an open letter titled "we will not be divided," warning that the Department of Defense was attempting to pressure AI firms into loosening safeguards. The letter urged leaders to stand together and refuse any demands to use their models for domestic surveillance or autonomous killing. Former OpenAI policy head Miles Brundage questioned the ethics of the deal, suggesting on X that "OpenAI caved and framed it as not caving." The episode echoes Google's 2018 withdrawal from Project Maven after employee protests over AI analysis of drone footage.
Broader Implications for AI in Military Operations
Artificial intelligence is already deeply embedded in Western military infrastructure, with the U.S., Ukraine, and NATO using platforms like Palantir to analyze large datasets through commercial AI tools. Louis Mosley, head of Palantir's UK operations, noted these systems enable "faster, more efficient, and ultimately more lethal decisions where that's appropriate." While NATO officials claim there is "always a human in the loop" and AI never makes final decisions unsupervised, experts warn that meaningful oversight becomes difficult once models are integrated into operational networks.
Professor Mariarosaria Taddeo of Oxford University expressed concern that with Anthropic excluded, "the most safety-conscious actor" is now "out of the room," posing a significant problem. Meanwhile, Trump has threatened to use the Defense Production Act to compel AI firm compliance and warned Anthropic of potential "major civil and criminal consequences" for non-cooperation. Reports also suggest Google is in discussions to integrate its Gemini AI model into classified Pentagon systems, indicating ongoing expansion of AI-military collaborations.
