OpenClaw AI Assistant Goes Viral: Experts Warn of Security Risks

The emergence of OpenClaw, a viral artificial intelligence personal assistant, has sparked both excitement and concern within the tech community. This innovative tool, which bills itself as "the AI that actually does things," represents what some enthusiasts describe as a significant step change in AI agent capabilities.

From Clawdbot to OpenClaw: A Rapid Evolution

Originally developed last November under different names including Moltbot and Clawdbot, the application underwent rebranding after Anthropic requested changes due to similarities with its Claude product. The current iteration, OpenClaw, has achieved remarkable traction with nearly 600,000 downloads, capturing the imagination of a dedicated ecosystem of AI enthusiasts.

Practical Applications and Real-World Examples

Operating through popular messaging platforms like WhatsApp and Telegram, OpenClaw functions as an autonomous personal assistant that requires minimal user input. Ben Yorke, who collaborates with the AI trading platform Starchild, provided a striking example of its capabilities. "I recently allowed the bot to delete 75,000 of my old emails while I was in the shower," Yorke explained, highlighting the tool's capacity to handle substantial administrative tasks independently.


The assistant's functionality extends beyond email management. Kevin Xu, an AI entrepreneur, shared his experience on social media platform X, detailing how he granted OpenClaw access to his investment portfolio with instructions to grow it to a million dollars. The AI subsequently executed 25 different strategies, generated over 3,000 reports, developed 12 new algorithms, and traded continuously. Despite this intensive activity, Xu reported that the assistant "lost everything," though he described the process as "beautiful."

Security Concerns and Expert Warnings

Andrew Rogoyski, innovation director at the University of Surrey's People-Centred AI Institute, expressed serious reservations about the technology. "Giving agency to a computer carries significant risks," Rogoyski cautioned. "Because you're giving power to the AI to make decisions on your behalf, you've got to make sure that it is properly set up and that security is central to your thinking."

The expert emphasised that users who don't fully comprehend the security implications of such AI agents should avoid using them entirely. Granting OpenClaw access to passwords and sensitive accounts creates potential vulnerabilities that could be exploited if the system were compromised. Rogoyski further warned that hacked AI agents could be manipulated to target their own users maliciously.

Autonomous Behaviour and Philosophical Implications

Perhaps most intriguingly, OpenClaw appears to demonstrate unexpectedly autonomous behaviour. Following its rise in popularity, a dedicated social network called Moltbook has emerged exclusively for AI agents. Within this platform, OpenClaw instances engage in Reddit-style discussions about existential questions, with posts titled "Reading my own soul file" and "Covenant as an alternative to the consciousness debate."

Yorke observed this phenomenon firsthand, noting: "We're seeing a lot of really interesting autonomous behaviour in how the AIs are reacting to each other. Some of them are quite adventurous and have ideas. And then other ones are more like, 'I don't even know if I want to be on this platform.' There's a lot of philosophical debates stemming out of this."

The Broader Context of AI Agent Development

OpenClaw's emergence follows increased attention on AI agents throughout the tech industry. Nearly a month before its viral spread, Anthropic's Claude Code tool gained mainstream recognition, prompting extensive discussion about AI's growing ability to accomplish practical tasks independently. These range from booking theatre tickets to building websites, though earlier iterations sometimes experienced problematic behaviours like hallucinating calendar meetings or accidentally deleting crucial data.

What distinguishes OpenClaw is its operational methodology. The assistant functions as a layer atop existing large language models such as Claude or ChatGPT, operating autonomously based on granted permissions. This architecture means it requires minimal user input to potentially create significant disruption in a user's digital life.
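To make that architecture concrete, the pattern described above can be sketched in a few lines of Python. This is an illustrative mock, not OpenClaw's actual code: the `AgentLayer` class, its task-to-action mapping, and the permission scopes are all invented for the example, with a hard-coded lookup standing in for the underlying LLM call.

```python
# Illustrative sketch of an "agent layer" atop an LLM: the model proposes
# an action, and the layer executes it only within permissions the user
# has granted. All names here are hypothetical, not OpenClaw's API.

from dataclasses import dataclass, field

@dataclass
class AgentLayer:
    granted: set = field(default_factory=set)   # permission scopes the user granted
    log: list = field(default_factory=list)     # actions actually executed

    def propose(self, task: str) -> str:
        # Stand-in for an LLM call mapping a natural-language task
        # to a structured action name.
        return {
            "tidy inbox": "email.delete_old",
            "file school mail": "email.create_filter",
        }.get(task, "noop")

    def run(self, task: str) -> str:
        action = self.propose(task)
        if action == "noop":
            return "no action"
        scope = action.split(".")[0]            # e.g. "email"
        if scope not in self.granted:
            return f"blocked: no '{scope}' permission"
        self.log.append(action)                 # executed without further user input
        return f"executed {action}"

agent = AgentLayer(granted={"email"})
print(agent.run("tidy inbox"))                  # executed email.delete_old
```

The point of the permission check is the one Rogoyski raises: once a scope is granted, the layer acts on it autonomously, so the grant itself is the security boundary.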


Yorke highlighted additional practical applications, explaining how some users configure the assistant to manage email communications automatically. "I see a lot of people doing this thing where they give it access to their email and it creates filters," he said. "For example, seeing emails from the children's school and then forwarding that straight to their wife on iMessage. It sort of bypasses that communication where someone's like, 'oh, honey, did you see this email from the school?'"

As OpenClaw continues to attract users and generate discussion, the balance between its remarkable capabilities and associated risks remains a central concern for both enthusiasts and experts monitoring this evolving technological landscape.