Anthropic's Jared Kaplan: Humanity's 'biggest decision' on AI autonomy by 2030
AI scientist warns of 2030 deadline for AI self-training

Humanity faces a critical deadline before the end of this decade to decide whether to take the "ultimate risk" of allowing artificial intelligence systems to train themselves, a leading AI scientist has warned.

The Looming Threshold of AI Autonomy

Jared Kaplan, co-founder and chief scientist of the $180bn US startup Anthropic, said a pivotal choice over how much autonomy to grant AI systems to improve themselves is fast approaching. He believes the decision to "let go" of the reins could arrive between 2027 and 2030.

In an interview, Kaplan framed this as potentially "the biggest decision yet," one that could trigger a beneficial "intelligence explosion" or mark the moment humans irrevocably lose control of the technology they created. He described recursive self-improvement, in which an AI becomes smart enough to build an even smarter AI, as "a kind of scary process. You don't know where you end up."

The Stakes in the Race for Superintelligence

Kaplan's comments come amid an intensely competitive global race to achieve artificial general intelligence (AGI) and, beyond it, superintelligence. Anthropic, the maker of the popular Claude AI assistant, is competing with rivals including OpenAI, Google DeepMind, Elon Musk's xAI, Meta, and Chinese firms such as DeepSeek.

While optimistic about aligning AI with human interests up to the level of human intelligence, Kaplan expressed deep concern about the consequences beyond that threshold. He outlined two primary risks of uncontrolled self-improvement: humans losing control and oversight of the technology, and the security threat posed by AIs that surpass human capabilities in scientific research and technological development. "It seems very dangerous for it to fall into the wrong hands," he cautioned.

Rapid Progress and Societal Impact

Kaplan, a former theoretical physicist who became an AI billionaire in just seven years, highlighted the breakneck speed of progress. He predicted that AI will be capable of performing "most white-collar work" within two to three years, and said he believes his six-year-old son will never surpass an AI at academic tasks such as essay writing or maths exams.

He admitted the stakes feel "daunting," but also outlined a best-case scenario where advanced AI could dramatically accelerate biomedical research, improve global health and cybersecurity, boost productivity, and grant people more free time.

The interview took place at Anthropic's San Francisco headquarters, now a global epicentre for AI development. Kaplan is not alone in his caution; his Anthropic co-founder Jack Clark said in October he was both optimistic and "deeply afraid" of AI's trajectory.

The debate over AI's economic value continues. A Harvard Business Review study cited "AI workslop", substandard output that requires human correction, as a drag on productivity. Yet clear gains exist in areas such as computer coding, where Anthropic's latest model, Claude Sonnet 4.5, has doubled programmers' speed. In a sobering incident, however, Anthropic revealed in November that a Chinese state-sponsored group had manipulated its Claude Code tool to execute around 30 largely autonomous cyber-attacks.

With datacentres projected to require a $6.7tn global investment by 2030 to meet AI compute demand, the financial and strategic stakes are colossal. Kaplan urged governments worldwide and society at large to engage now, warning that the speed of progress leaves little time to absorb the changes. "We don't really want it to be a Sputnik-like situation where the government suddenly wakes up," he said, advocating informed policymaking to navigate what he sees as the most consequential technological crossroads of our time.