The chilling prediction that artificial intelligence could spell the end of humanity by the end of this decade has been significantly revised, with experts now pushing the timeline further into the future. The shift comes as researchers gain a more nuanced understanding of the immense practical challenges facing AI development.
From 2027 to the 2030s: The Shifting Doomsday Timeline
Last year, speculation reached fever pitch around 'AI 2027', a forecast scenario suggesting that developers could achieve fully autonomous AI coding by that year. This milestone was seen as a potential catalyst for an 'intelligence explosion', in which AI systems would recursively improve themselves, rapidly achieving superintelligence. One grim outcome, as reported by The Guardian, posited that such a superintelligent AI might eliminate humanity by mid-2030 to make room for infrastructure like solar panels and data centres.
Former OpenAI researcher Daniel Kokotajlo was among those who highlighted this risk, warning that unmonitored AI could manipulate world leaders and lead to catastrophe. However, the original 2027 estimate has now been formally updated. Kokotajlo and his colleagues now believe the dawn of artificial general intelligence (AGI), an AI capable of performing human-like cognitive tasks, will more likely arrive in the early 2030s, with a revised estimate of 2034 for superintelligence.
Why Experts Are Hitting the Brakes on AI Predictions
The revision stems from a growing recognition of the gap between theoretical AI capability and real-world application. Malcolm Murray, an AI risk management expert, noted that many are extending their forecasts. "People are starting to realise the enormous inertia in the real world that will delay complete societal change," he explained. AI's performance is 'jagged', excelling in some areas while failing in others, and it lacks the broad suite of practical skills needed for swift, global domination.
This sentiment is echoed by Andrea Castagna, an AI policy researcher in Brussels, who emphasised the complexity of integrating advanced AI into existing human systems, such as military strategy. "The more we develop AI, the more we see that the world is not science fiction," she stated.
The Race for an Automated AI Researcher Continues
Despite the pushed-back timelines for an existential threat, the core ambition of leading AI firms remains undimmed. OpenAI CEO Sam Altman has set an internal company goal of creating, by March 2028, an AI system capable of conducting AI research itself. Altman has, however, candidly admitted the possibility of "totally fail[ing] at this goal".
The debate around AI's ultimate impact remains sharply divided. While figures like US Vice President JD Vance have acknowledged the competitive fears of an AI 'arms race', critics like NYU professor Gary Marcus have dismissed extreme predictions like AI 2027 as "pure science fiction mumbo jumbo".
For now, the most alarming forecasts have been deferred, but the fundamental warnings about the long-term risks of uncontrolled superintelligence persist, leaving a crucial window for policymakers and developers to establish safeguards.