OpenAI Delays ChatGPT Adult Mode to Prioritize Performance Upgrades

OpenAI has delayed the launch of its planned "adult mode" for ChatGPT, shifting focus to more immediate priorities such as improving the chatbot's overall performance and intelligence. The company, which claims more than 900 million users, said that while it still supports the principle of treating adults like adults, refining the experience will take more time.

Shifting Priorities Amid Intense Competition

Chief Executive Sam Altman had initially revealed plans last year to introduce adult content with age verification, but the company now emphasizes higher-priority work. This includes gains in intelligence, personality enhancements, personalization, and making ChatGPT more proactive. The decision comes as OpenAI faces fierce competition from rivals like Google and Anthropic, prompting Altman to declare a "code red" to improve the chatbot's capabilities.

Enhanced Safety Measures for Underage Users

In the interim, OpenAI is implementing age prediction tools to identify users under 18, triggering extra safety settings that limit exposure to graphic violence and sexual role-play. In the UK, compliance with the Online Safety Act necessitates strict age checking to shield underage users from potentially pornographic content generated by ChatGPT.


Internal Concerns Over Pentagon Deal

Separately, Caitlin Kalinowski, who led hardware in OpenAI's robotics division, resigned over the company's deal with the US Pentagon. She cited concerns about mass surveillance of American citizens and AI-guided autonomous killing machines, pointing to rushed execution and insufficient guardrails in the agreement.

OpenAI has since amended its contract with the Department of War to explicitly exclude technology use for mass domestic surveillance, acknowledging that the initial deal appeared "opportunistic and sloppy." The company asserts that the agreement establishes a responsible framework for national security AI applications while maintaining red lines against domestic surveillance and autonomous weapons.
