OpenAI Revises Pentagon Contract After CEO Admits Deal Appeared 'Sloppy'

OpenAI is amending its hastily arranged agreement to supply artificial intelligence technology to the U.S. Department of War following an admission from CEO Sam Altman that the initial deal appeared "opportunistic and sloppy." The San Francisco-based startup, which operates the widely used ChatGPT platform, faced immediate backlash over concerns its AI could be deployed for domestic mass surveillance purposes.

Explicit Ban on Surveillance and Intelligence Use

In a significant policy clarification, Altman stated that OpenAI will now explicitly prohibit its technology from being used for mass surveillance or by defense department intelligence agencies such as the National Security Agency. The announcement came just days after the Pentagon dropped its previous AI contractor, Anthropic, from its government contracts. Anthropic had maintained that using AI systems for mass domestic surveillance was fundamentally incompatible with democratic values, a stance that led President Donald Trump to label the company "leftwing nut jobs" and order federal agencies to cease using its technology.

Public Backlash and Competitive Shifts

The controversy sparked immediate public reaction, with users across social media platforms X and Reddit launching a "delete ChatGPT" campaign. One particularly viral post declared, "You're now training a war machine. Let's see proof of cancellation." Meanwhile, Claude, the chatbot developed by Anthropic, surged to the top of Apple's App Store charts, surpassing ChatGPT according to analysis from Sensor Tower. The shift highlights the commercial consequences of the ethical debate surrounding military AI applications.

Internal Concerns and Ethical Safeguards

Within the tech industry, nearly 900 employees from OpenAI and Google have signed an open letter urging their leadership to reject Department of War requests to use their AI models for surveillance and autonomous killing systems. The letter warns of government attempts to "divide each company with fear that the other will give in" and calls for unified resistance against these applications. OpenAI has publicly stated that one of its non-negotiable boundaries is preventing its technology from directing autonomous weapons systems, though questions remain about how the company secured a contract addressing ethical concerns that Anthropic found insurmountable.

CEO's Reflection and Industry Skepticism

In a message to employees shared on X, Altman reflected on the rushed nature of the original agreement, stating, "We shouldn't have rushed to get this out on Friday. The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy." Despite these assurances, industry observers including OpenAI's former head of policy research, Miles Brundage, have expressed skepticism about the company's motivations and ethical commitments. Brundage suggested that employees should assume "OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them."

Broader Government Implications

The controversy extends beyond the Department of War, with three additional U.S. cabinet-level agencies—the Departments of State, Treasury, and Health and Human Services—moving to discontinue their use of Anthropic's AI products. This follows the Pentagon's declaration of Anthropic as a supply chain risk and President Trump's subsequent order for all government agencies to phase out their utilization of the company's technology. The situation illustrates the complex intersection of artificial intelligence development, government contracting, and ethical considerations in an increasingly polarized political landscape.