From Protests to Petrol Bombs: The Escalating Backlash Against AI Pioneer Sam Altman
When Sam Altman co-founded OpenAI in 2015 with the noble mission of ensuring artificial intelligence "benefits all of humanity," he likely never anticipated that his work would, just over a decade later, incite someone to hurl a Molotov cocktail at his San Francisco residence. This violent incident, followed by reported gunshots at his home in the early hours of a Sunday morning, marks a dramatic and alarming escalation in the resistance to AI development.
The Suspect's Manifesto: A "War" for Humanity
While authorities continue to investigate the motives behind the attacks, one suspect has made his position starkly clear through online posts. On the PauseAI Discord server, which advocates for non-violent AI reform, he identified himself as a "Butlerian Jihadist," referencing the crusade against "thinking machines" from Frank Herbert's Dune series. In a lengthy Substack manifesto, he articulated a belief that AI poses an "existential risk," echoing the very language Altman has often used to describe the technology.
The suspect framed the development of advanced AI as a catastrophic act of self-sabotage, drawing parallels to historical genocides in which technologically advanced civilizations destroyed less advanced ones. He described it as a "war" between "good and evil," branding figures like Altman as traitors to the human species. This extreme viewpoint, while that of a single individual, reflects a growing undercurrent of fear and hostility.
Online Echo Chambers: Sympathy for Violence
Perhaps more telling than the suspect's own words has been the reaction within certain online communities. The rapidly expanding AntiAI subreddit, boasting over half a million members, was flooded with posts expressing understanding, if not outright sympathy, for the attacks. One highly upvoted post remarked that Altman "should probably stop threatening to cause the apocalypse." On platforms like Instagram, some users went further, lamenting that the attacks were unsuccessful and encouraging further action.
This disturbing sentiment echoes the reaction to the 2024 shooting of the UnitedHealthcare CEO, when deep-seated public anger at the health insurance industry translated into widespread sympathy for the alleged perpetrator. A similar, potent frustration is now coalescing around major AI firms and their leaders.
The Data: A Notable Shift in Public Perception
This backlash is not confined to the fringes. Recent studies confirm a significant and measurable shift in how the public views AI and those who control it. Stanford University's 2026 AI Index Report revealed that more than half of surveyed individuals feel nervous about products using AI. A Gallup poll focusing on Generation Z found that excitement about AI adoption has dropped sharply, while anger has risen by a comparable margin.
Further findings from the Pew Research Center last month highlighted a growing disconnect: AI insiders remain far more enthusiastic about the technology than the general public. The backlash appears rooted not just in abstract fears of a superintelligent "Skynet," but in tangible, everyday concerns.
"I think a lot of AI leaders are just out of touch with normal people," said US behavioral scientist Caroline Orr Bueno. "Most people are way more concerned with their paycheck and the cost of utilities." The Stanford report corroborates this, noting public anxiety that AI will negatively impact the economy, elections, mental health, and personal relationships.
From Petitions to Direct Action: A Movement Escalates
This mounting frustration has catalyzed a trend of increasingly direct action. What began with petitions and open letters—often signed by Altman and peers like Elon Musk—has evolved into global street protests. Demonstrators in London, Paris, and New York have demanded stricter AI safety regulations and protections for jobs and the climate.
In San Francisco, mobs have vandalized and set fire to self-driving robotaxis. Others have undertaken hunger strikes outside the offices of major AI companies. Anti-AI campaigner Guido Reichstadter, who staged a month-long hunger strike outside Anthropic's headquarters, wrote: "These AIs are being used to inflict serious harm on our society today... We are in an emergency."
While aggressive acts like arson at AI labs or death threats to supportive politicians remain relatively rare, their frequency has increased since late last year. The non-violent PauseAI group has condemned all violence but warns that these extreme incidents should not be used to discredit the broader AI safety movement.
Altman's Response and a Lost Founding Principle
In the wake of the attacks on his home, Sam Altman published an uncharacteristically personal blog post. He expressed anger and a belated recognition of the power of narratives, partly blaming an incendiary New Yorker article that questioned his trustworthiness. "I am awake in the middle of the night and p*ssed, and thinking that I have underestimated the power of words," he wrote.
He acknowledged the growing disillusionment outside the tech elite but offered no concrete solutions. One potential path—a return to OpenAI's original non-profit, obligation-free ethos—seems improbable as the company reportedly prepares for a monumental $1 trillion public offering. OpenAI's founding mission, now buried in its archives, stated that its research was "free from financial obligations" so it could better focus on positive human impact. A decade later, Altman's blog declares: "The world deserves huge amounts of AI and we must figure out how to make it happen."
As the backlash intensifies from polite petitions to petrol bombs, these words are unlikely to calm the profound public anxieties surrounding the relentless march of artificial intelligence.