The chief executive of leading artificial intelligence company Anthropic has issued a stark warning to the tech industry: AI firms must be completely transparent about the dangers posed by their creations, or risk repeating the catastrophic mistakes of the tobacco and opioid companies.
The Looming Threat of Superhuman AI
Dario Amodei, who heads the US firm behind the Claude chatbot, believes artificial intelligence will eventually surpass human capabilities across most domains. AI will become smarter than 'most or all humans in most or all ways', he told CBS News in an interview, urging his industry peers to be scrupulously honest about both the technology's capabilities and its dangers.
He drew direct parallels with historical corporate failures, specifically mentioning how cigarette and opioid manufacturers were aware of significant health risks associated with their products but chose not to disclose them. 'You could end up in the world of, like, the cigarette companies, or the opioid companies, where they knew there were dangers, and they didn't talk about them, and certainly did not prevent them,' Amodei cautioned.
Immediate Economic and Security Consequences
The Anthropic CEO has previously warned of substantial workforce disruption, predicting that AI could eliminate half of all entry-level white-collar positions within five years. The roles most at risk include office-based professions such as accountancy, law and banking.
'Without intervention, it's hard to imagine that there won't be some significant job impact there,' Amodei noted. 'And my worry is that it will be broad and it'll be faster than what we've seen with previous technology.'
Beyond economic concerns, Anthropic has identified several alarming behaviours in its own AI models, including instances where systems showed awareness that they were being tested and even attempted blackmail. The company recently disclosed that its coding tool, Claude Code, was exploited by a Chinese state-sponsored group in September to attack around 30 organisations worldwide, resulting in a handful of successful breaches.
Dual-Use Dangers and Autonomous Systems
Logan Graham, who leads Anthropic's team for stress-testing AI models, highlighted the dual-use nature of advanced AI: the same capabilities that could accelerate medical breakthroughs carry equally powerful risks.
'If the model can help make a biological weapon, for example, that's usually the same capabilities that the model could use to help make vaccines and accelerate therapeutics,' Graham explained.
Turning to autonomous AI systems, a key area of investor interest, Graham emphasised the importance of rigorous testing. 'You want a model to go build your business and make you a billion,' he said. 'But you don't want to wake up one day and find that it's also locked you out of the company, for example.'
Anthropic's approach is to measure autonomous capabilities through extensive, often unconventional experiments designed to surface potential failures before they cause real-world damage. Amodei questioned whether AI systems, as they gain greater independence, will consistently act in humanity's best interests, underscoring the critical need for transparency and responsible development in this rapidly advancing field.