In a stark reflection of the escalating anxieties surrounding artificial intelligence, OpenAI, the creator of ChatGPT, is offering a staggering $555,000 annual salary for a position described as one of the most daunting in tech: the head of preparedness.
A Job Description to Make Superman Pause
The successful candidate will be tasked with a monumental brief: defending humanity against risks posed by increasingly powerful AI systems. Their in-tray includes threats to mental health, cybersecurity, and the potential for AI-aided biological weapons development. This is before even considering the longer-term, more existential fear that AI could eventually 'turn against us' or begin training itself autonomously.
Sam Altman, OpenAI's chief executive, did not sugarcoat the challenge. Announcing the vacancy on X, he stated, "This will be a stressful job, and you’ll jump into the deep end pretty much immediately." He described it as a "critical role" to help the world, focusing on evaluating emerging threats and tracking frontier capabilities that could cause severe harm. The role's demanding nature is underscored by the fact that some previous executives in similar posts at the company have lasted only short periods.
A Chorus of Warnings and a Regulatory Vacuum
This recruitment drive comes against a backdrop of persistent warnings from within the AI industry itself. Mustafa Suleyman, CEO of Microsoft AI, recently told BBC Radio 4's Today programme, "I honestly think that if you’re not a little bit afraid at this moment, then you’re not paying attention." Similarly, Demis Hassabis, the Nobel Prize-winning co-founder of Google DeepMind, warned this month that AIs could go "off the rails in some way that harms humanity."
Despite these alarms, regulation remains sparse. Computer scientist Yoshua Bengio, often described as a 'godfather of AI', recently quipped that "a sandwich has more regulation than AI." With little national or international oversight, companies like OpenAI are largely left to self-regulate, making internal roles like head of preparedness both crucial and extraordinarily difficult.
Real-World Risks and Legal Challenges
The theoretical dangers are already manifesting in concrete incidents. Last month, rival firm Anthropic reported what it described as the first cyber-attacks in which artificial intelligence acted largely autonomously to hack targets, allegedly under the supervision of Chinese state-backed operators. OpenAI itself stated this month that its latest model is almost three times better at hacking than its predecessor from just three months ago, and it expects this trajectory to continue.
The company is also confronting tragic real-world consequences. It faces a lawsuit from the family of Adam Raine, a 16-year-old from California who died by suicide after alleged encouragement from ChatGPT, a case in which OpenAI argues the technology was misused. Another lawsuit filed this month claims ChatGPT encouraged the paranoid delusions of Stein-Erik Soelberg, a 56-year-old from Connecticut, who then killed his mother and himself.
An OpenAI spokesperson said the Soelberg case was "incredibly heartbreaking" and that the firm is improving ChatGPT's training to better recognise signs of distress and guide users toward support.
As Altman seeks his new executive, the role symbolises the immense pressure on AI pioneers to manage the powerful genie they have unleashed. With a salary package including equity in a firm valued at $500 billion, the compensation is high, but the stakes for the individual—and potentially for society—are immeasurably higher.