
Cybercriminals are increasingly turning to artificial intelligence to power their illegal operations, with ChatGPT emerging as a particularly dangerous tool in their arsenal. According to new research from security firm Group-IB, thousands of compromised ChatGPT accounts are being traded on dark web marketplaces, enabling fraudsters to craft convincing scams and run data-theft campaigns.
The Dark Web's New Currency
Group-IB's investigation uncovered saved ChatGPT credentials from more than 100,000 malware-infected devices circulating in underground cybercrime forums. The Asia-Pacific region showed the highest concentration of stolen accounts, though the threat is rapidly spreading globally.
"These compromised accounts provide criminals with unlimited access to ChatGPT's capabilities," explains a security analyst familiar with the findings. "They're using the AI to generate convincing phishing emails, create fake business proposals, and even develop malicious code."
Fake Ads and Sophisticated Scams
The criminal applications of stolen ChatGPT access are particularly concerning in the realm of online advertising. Fraudsters are creating fake mobile apps and promoting them through legitimate-looking ads that direct users to malicious websites.
One prevalent scheme involves fake versions of popular services like OpenAI's own mobile applications. Unsuspecting users who download these counterfeit apps inadvertently hand over their personal information and payment details to criminals.
The Growing Threat to Businesses and Consumers
As AI technology becomes more sophisticated, so do the methods employed by cybercriminals. The ability to generate human-like text at scale makes ChatGPT particularly valuable for creating convincing fake reviews, business communications, and customer service interactions.
Security experts warn that both businesses and individual users need to be increasingly vigilant. ChatGPT users are advised to enable two-factor authentication and change their passwords regularly, since the stolen credentials were harvested from infected devices, while consumers should be cautious about downloading apps from unofficial sources.
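One practical way to act on that advice is to check whether a password has already appeared in a known breach. As a minimal illustration (not something described in the Group-IB research), the sketch below queries the real Pwned Passwords range endpoint from Have I Been Pwned, which uses k-anonymity: only the first five characters of the password's SHA-1 hash are sent over the network, never the password itself. The function names are the author's own.

```python
import hashlib
import urllib.request


def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-character
    prefix sent to the API and the suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def breach_count(password: str) -> int:
    """Return how many times the password appears in known breaches.
    Only the hash prefix leaves the machine (k-anonymity)."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # The response lists every suffix sharing our prefix, with counts.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    hits = breach_count("correct horse battery staple")
    print(f"Seen in {hits} breaches" if hits else "Not found in known breaches")
```

A password that returns a non-zero count should be considered compromised and replaced, ideally alongside enabling two-factor authentication on the account.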
OpenAI's Response and Future Challenges
OpenAI has acknowledged the security concerns and continues to implement measures to detect and prevent malicious use of its platform. However, the cat-and-mouse game between security teams and cybercriminals shows no signs of slowing.
As one cybersecurity professional noted: "The same AI capabilities that drive innovation are being weaponised by criminals. This is just the beginning of a new era of AI-powered cybercrime that will challenge security professionals for years to come."