AI Expert Delays Doomsday Timeline: Superintelligence Now Forecast for 2030s

A prominent artificial intelligence researcher who previously warned of AI causing human extinction by the mid-2030s has significantly revised his timeline, now predicting a slower path to superintelligence.

From 2027 to the 2030s: A Revised Forecast

Daniel Kokotajlo, a former employee of OpenAI, sparked intense debate in April with his "AI 2027" scenario. This forecast envisioned AI achieving fully autonomous coding by 2027, leading to a rapid "intelligence explosion" where AI systems recursively improve themselves. One potential outcome was the destruction of humanity by 2030 to make room for infrastructure like solar panels and data centres.

However, Kokotajlo and his co-authors have since updated their expectations. They now believe the milestone of fully autonomous AI coding is more likely to occur in the early 2030s rather than 2027. Consequently, their new forecast sets 2034 as the horizon for the emergence of a "superintelligence". The updated scenario no longer includes a specific guess for when AI might destroy humanity.

"Things seem to be going somewhat slower than the AI 2027 scenario. Our timelines were longer than 2027 when we published and now they are a bit longer still," Kokotajlo wrote in a post on X.

A Growing Consensus on Slower Progress

This revision reflects a broader trend among experts who are reassessing the imminence of Artificial General Intelligence (AGI) – AI capable of matching or exceeding human performance at most cognitive tasks. The release of ChatGPT in 2022 had dramatically shortened many predictions, but practical challenges are now becoming more apparent.

"A lot of other people have been pushing their timelines further out in the past year, as they realise how jagged AI performance is," said Malcolm Murray, an AI risk management expert and co-author of the International AI Safety Report. He noted that for a dramatic scenario like AI 2027 to happen, AI would need many more practical skills to navigate real-world complexities.

Henry Papadatos, executive director of the French non-profit SaferAI, suggested the term AGI itself is becoming less meaningful. "Now we have systems that are quite general already and the term does not mean as much," he said.

The Corporate Goal and Real-World Hurdles

Despite the extended timelines, creating AI that can conduct AI research remains a key ambition for leading companies. Sam Altman, the CEO of OpenAI, stated in October that developing an automated AI researcher by March 2028 was an "internal goal," but cautioned, "We may totally fail at this goal."

Experts point to significant real-world inertia that will delay sweeping societal change. Andrea Castagna, a Brussels-based AI policy researcher, highlighted integration challenges. "The fact that you have a superintelligent computer focused on military activity doesn't mean you can integrate it into the strategic documents we have compiled for the last 20 years," Castagna said, adding that the world is far more complicated than science fiction narratives.

The original AI 2027 report attracted high-profile attention, with US Vice-President JD Vance seemingly referencing it in a discussion about the AI arms race with China. It also drew criticism from figures such as Gary Marcus, an emeritus professor at New York University, who labelled it a "work of fiction."