Google Reveals Hackers Used AI to Create Zero-Day Exploit for First Time

Researchers at Google have identified what they believe is the first instance of cyber criminals using artificial intelligence to uncover a zero-day exploit, one of the most dangerous types of security vulnerabilities. The discovery was made by the Google Threat Intelligence Group (GTIG), which reported that the exploit was developed with the assistance of AI tools and was intended for a mass exploitation event.

AI-Powered Threat Discovery

In a report released on Monday, GTIG researchers stated: "For the first time, GTIG has identified a threat actor using a zero-day exploit that we believe was developed with AI. The criminal threat actor planned to use it in a mass exploitation event but our proactive counter discovery may have prevented its use." Zero-day exploits are particularly concerning because they target vulnerabilities unknown to software developers, leaving no time for a protective patch to be created.

Growing Interest from State-Sponsored Hackers

The researchers noted that hackers linked to China and North Korea have shown "significant interest" in leveraging AI to identify zero-day vulnerabilities. This trend is part of a broader surge in cyber attacks driven by new AI tools, which have contributed to record-breaking levels of cybercrime in 2026. A recent report highlighted that AI bot attacks have increased more than tenfold over the past year, rising from 2 million to 25 million incidents globally.


AI as a Double-Edged Sword

The rise in AI-powered attacks comes as leading AI firms such as Anthropic and OpenAI develop tools capable of detecting security flaws more efficiently than humans. Anthropic's recently unveiled model, Mythos, has been described as a "terrifying superhacker" for its ability to uncover software vulnerabilities in all major operating systems and web browsers. While such models can strengthen cyber defences, Google's researchers warn that cyber criminals are abusing the same capabilities on a worrying scale.

Illicit Access to AI Models

Google's report highlighted that threat actors are pursuing anonymized, premium-tier access to AI models, using professionalized middleware and automated registration pipelines to bypass usage limits. "This infrastructure enables large scale misuse of services while subsidizing operations through trial abuse and programmatic account cycling," the researchers warned.
