Cybercriminals are struggling to adopt artificial intelligence (AI) in their illicit activities, according to new research that analyzed more than 100 million posts from underground and dark web cybercrime communities. The study, conducted by researchers from the universities of Edinburgh, Strathclyde, and Cambridge, found that most cybercriminals lack the skills or resources to leverage AI innovations effectively in their criminal endeavors.
Research Methodology
The team examined discussions from the CrimeBB database, which contains more than 100 million posts scraped from underground and dark web cybercrime forums. Using a combination of machine-learning tools and manual sampling, they focused on conversations about how cybercriminals, often referred to as hackers, were experimenting with AI technologies from November 2022 onward, the period coinciding with the public release of ChatGPT.
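The paper's pipeline is not reproduced here, but a crude first pass over a corpus like CrimeBB might resemble the sketch below. The field names, keyword pattern, and cutoff date are illustrative assumptions, not the authors' actual method.

```python
# Illustrative sketch only: a crude keyword-and-date filter of the kind that
# might precede manual sampling. Field names ("body", "created_at") and the
# keyword pattern are assumptions, not taken from the study.
import re
from datetime import datetime

AI_PATTERN = re.compile(r"\b(chatgpt|gpt-?4|llm|openai|jailbreak)\b", re.IGNORECASE)
CHATGPT_RELEASE = datetime(2022, 11, 30)  # the study window starts here

def is_candidate(post: dict) -> bool:
    """Keep posts made after ChatGPT's release that mention AI-related terms."""
    return (post["created_at"] >= CHATGPT_RELEASE
            and AI_PATTERN.search(post["body"]) is not None)

posts = [
    {"body": "Anyone tried ChatGPT for writing phishing kits?",
     "created_at": datetime(2023, 1, 5)},
    {"body": "Selling accounts, PM me", "created_at": datetime(2023, 1, 6)},
]
candidates = [p for p in posts if is_candidate(p)]
print(len(candidates))  # 1: only the AI-related post survives the filter
```

A filter this blunt produces many false positives, which is one reason a study like this would pair automated selection with manual review of the sampled threads.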
Key Findings
The researchers discovered that, contrary to expectations, AI coding assistants have not lowered the skill barrier to committing cybercrime. Instead, the tools prove most useful to individuals who already possess significant expertise, since using them effectively demands substantial skill and knowledge.
AI was found to be most successfully employed to operate social media bots that carry out misogynistic harassment and generate money through fraud, and to conceal the patterns that cybersecurity defenders typically look for. However, the researchers noted that the guardrails implemented on major chatbots are significantly reducing harm.
Expert Commentary
Dr. Ben Collier, senior lecturer in digital methods at the University of Edinburgh’s School of Social and Political Science, stated: "Cybercriminals are experimenting with these tools, but as far as we can tell, it’s not delivering them real benefits in their own work. Our message to industry is: don’t panic yet. The immediate danger comes from companies and members of the public adopting poorly secured AI systems themselves, opening them up to catastrophic new attacks that can be performed by cybercriminals with little effort or skill."
Broader Implications
The research also revealed that many individuals within cybercrime communities fear losing their legitimate IT jobs as AI disrupts the mainstream software industry. The study suggests this anxiety could drive them, and others, toward increased cybercriminal activity.
The report's authors warn that the primary risks to industry are likely to stem from adopting poorly secured agentic AI systems, a form of AI capable of acting autonomously, carrying out specific tasks, and making decisions. They also caution legitimate industry players against the risks of insecure "vibecoded" products, in which the computer code has been written largely by AI.
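The report does not single out specific flaws, but one classic example of the kind of vulnerability that can slip into hastily AI-generated code is SQL injection. The sketch below, contrasting an unsafe query with its safe equivalent, is illustrative only and is not drawn from the study.

```python
# Illustrative only: a classic injection flaw of the sort the authors warn
# can slip into "vibecoded" software, alongside the safe equivalent.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable pattern: user input interpolated straight into the SQL string.
    # Input like "' OR '1'='1" rewrites the query and returns every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input as data, not as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks all rows
print(find_user_safe(payload))    # returns nothing
```

Flaws like this require no skill to exploit once shipped, which is the substance of the warning: the danger lies less in attackers' use of AI than in defenders deploying insecure AI-written code.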
Publication
The findings have been peer reviewed and will be presented at the Workshop on the Economics of Information Security (WEIS) in Berkeley, California, in June.