AI Doom Timeline Pushed Back: Expert Now Predicts Superintelligence by 2034
AI Expert Delays Timeline for Superintelligence and Human Extinction

A prominent artificial intelligence researcher has significantly revised his forecast for when AI could pose an existential threat to humanity, pushing the timeline back by several years.

From 2027 to the 2030s: A Revised Forecast

Daniel Kokotajlo, a former employee of the AI lab OpenAI, has updated his controversial 'AI 2027' scenario. Originally, he and his colleagues suggested that AI could achieve fully autonomous coding by 2027, triggering an 'intelligence explosion' in which AI rapidly self-improves towards superintelligence. One grim outcome of this scenario saw AI destroying humanity by the mid-2030s to make room for infrastructure like solar panels and data centres.

However, in a recent post on X, Kokotajlo stated: "Things seem to be going somewhat slower than the AI 2027 scenario. Our timelines were longer than 2027 when we published and now they are a bit longer still." The revised assessment places the likely advent of autonomous AI coding in the early 2030s, with superintelligence potentially emerging around 2034. Notably, the updated forecast omits a specific prediction for human extinction.

A Growing Consensus on Slower Progress

Kokotajlo's revision reflects a broader shift in expert opinion. Malcolm Murray, an AI risk management expert and co-author of the International AI Safety Report, observed that many are extending their timelines. "A lot of other people have been pushing their timelines further out in the past year, as they realise how jagged AI performance is," he said. He pointed to the enormous inertia of the real world and the need for AI to develop practical skills for complex tasks as factors that will delay sweeping societal change.

The concept of Artificial General Intelligence (AGI) itself is being questioned. Henry Papadatos of the French nonprofit SaferAI argued that the term is losing its meaning as current AI systems already display significant generality, unlike the narrow programs of the past that only played chess or Go.

Internal Goals and Real-World Complexities

Despite the adjusted public timelines, the goal of creating AI that can conduct AI research remains a key objective for leading companies. Sam Altman, the CEO of OpenAI, revealed in October that having an automated AI researcher by March 2028 was an internal company goal, though he cautioned, "We may totally fail at this."

Experts highlight that dramatic predictions often overlook integration challenges. Andrea Castagna, a Brussels-based AI policy researcher, noted that even a superintelligent AI focused on military activity couldn't be instantly integrated into decades of strategic planning. "The more we develop AI, the more we see that the world is not science fiction. The world is a lot more complicated than that," Castagna concluded.

The original 'AI 2027' paper, released in April, sparked intense debate, attracting admirers as well as critics such as Gary Marcus, an emeritus professor at NYU, who labelled it "pure science fiction mumbo jumbo." Its influence was underscored when US Vice-President JD Vance appeared to reference it in a discussion about the AI arms race with China.