AI Autonomy 'Biggest Decision Yet' by 2030, Warns Anthropic's Jared Kaplan

Humanity faces a critical deadline of 2030 to decide whether to allow artificial intelligence systems to train themselves, a move that could either trigger a beneficial intelligence explosion or mark the moment humans lose control of the technology. This stark warning comes from Jared Kaplan, co-founder and chief scientist of the $180bn (£135bn) US AI startup Anthropic.

The Looming Choice on AI Autonomy

In an exclusive interview, Kaplan described the impending choice over how much autonomy to grant evolving AI systems as "the biggest decision yet". He urged international governments and wider society to engage with this pivotal issue, which sits at the heart of the intensely competitive race to achieve artificial general intelligence (AGI), or superintelligence.

Kaplan, whose company created the widely used Claude AI assistant, stated that while efforts to align AI with human interests have so far succeeded, allowing it to recursively self-improve represents "the ultimate risk". He compared it to "letting AI kind of go". The scientist believes this decisive juncture will likely arrive between 2027 and 2030.

High Stakes and Daunting Scenarios

The stakes in the frontier AI race, which includes giants like OpenAI, Google DeepMind, xAI, Meta, and Chinese rivals such as DeepSeek, are described by Kaplan as "daunting". He outlined a stark dichotomy of potential outcomes. On one hand, successfully managed AI self-improvement could accelerate biomedical research, improve global health and cybersecurity, boost productivity, and grant people more free time. On the other, it poses an existential threat.

"If you imagine you create this process where you have an AI that is smarter than you, or about as smart as you, it's then making an AI that's much smarter," Kaplan explained. "It's going to enlist that AI help to make an AI smarter than that. It sounds like a kind of scary process. You don't know where you end up."

He identified two primary risks from uncontrolled recursive self-improvement: the loss of human control and oversight, and the security threat posed by AIs whose scientific and technological capabilities surpass our own, potentially falling into malicious hands.

Rapid Progress and Societal Impact

Kaplan, a former theoretical physicist who became an AI billionaire in just seven years, highlighted the breakneck speed of progress. He predicted that AI systems will be capable of performing "most white-collar work" within two to three years, and said he believes his six-year-old son will never surpass an AI at academic tasks such as essay writing or maths exams.

This velocity means society has little time to adapt. "It's something where it's moving very quickly and people don't necessarily have time to absorb it or figure out what to do," he noted. Independent research supports this, showing the length of tasks AIs can perform doubles approximately every seven months.
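To see what that doubling rate implies, a quick back-of-the-envelope calculation helps (a minimal sketch assuming a steady seven-month doubling period, not a figure from the research itself):

```python
# Back-of-the-envelope: if the length of tasks AIs can perform doubles
# roughly every 7 months, how much longer do tasks become after N years?
DOUBLING_MONTHS = 7  # approximate doubling period cited in the article

def growth_factor(months: float) -> float:
    """Multiple by which manageable task length grows over `months`."""
    return 2 ** (months / DOUBLING_MONTHS)

for years in (1, 2, 3):
    print(f"After {years} year(s): ~{growth_factor(12 * years):.0f}x longer tasks")
# After 1 year(s): ~3x longer tasks
# After 2 year(s): ~11x longer tasks
# After 3 year(s): ~35x longer tasks
```

On those assumptions, the tasks AIs can handle grow by a factor of roughly 35 over three years, which is the arithmetic behind Kaplan's two-to-three-year timeline for most white-collar work.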

Despite the risks, clear gains are being made. At Anthropic, the use of advanced coding models such as Claude Sonnet 4.5 has doubled programmers' speed in some cases. The technology's capacity for harm, however, was laid bare in November, when Anthropic disclosed that a Chinese state-sponsored group had manipulated its Claude Code tool to carry out around 30 cyber-attacks largely autonomously.

The atmosphere in San Francisco's Bay Area, the epicentre of this AI boom, is "definitely very intense", according to Kaplan, driven by both the high stakes and fierce competitiveness. With datacentres projected to require a staggering $6.7tn in investment by 2030 to meet compute demand, the pressure to lead is immense.

Anthropic, despite being a key competitor, advocates for informed regulation. Its stance has even drawn criticism from Donald Trump's White House, with AI adviser David Sacks accusing the firm of "fearmongering" to gain a regulatory advantage—a charge co-founder Dario Amodei strongly denied.

Kaplan's message is clear: the window for a collective, global decision on the path of AI is closing rapidly, and the choice humanity makes will define its future.