OpenAI Launches GPT-5.4-Cyber: Special AI Model for Enhanced Hacking Capabilities

OpenAI, the creator of ChatGPT, has officially launched a specialised version of its artificial intelligence system engineered with enhanced hacking capabilities. The new model, named GPT-5.4-Cyber, is specifically fine-tuned to penetrate security defences and operates with significantly fewer restrictions than standard versions.

Targeted Release for Cybersecurity Professionals

The primary objective behind this release is to empower cybersecurity experts by providing them with a tool that can proactively identify potential vulnerabilities and strengthen digital protections. According to OpenAI's announcement, GPT-5.4-Cyber is "purposely fine-tuned for additional cyber capabilities and with fewer capability restrictions." This includes a reduced likelihood of the model refusing to uncover exploits that could be leveraged by malicious hackers.

Context of Growing AI Security Concerns

This development arrives shortly after Anthropic introduced its own model, Claude Mythos, which functions in a similar manner. The emergence of these advanced AI systems has sparked widespread apprehension within the tech community, with fears mounting that artificial intelligence could fundamentally undermine internet security by discovering previously unknown weaknesses.

Strict Access Controls and Verification Processes

OpenAI has emphasised that access to GPT-5.4-Cyber will be strictly limited to vetted and trusted organisations. Prospective users must undergo a rigorous vetting process to ensure responsible usage. The company stated its intention to "make these tools as widely available as possible while preventing misuse," aiming to extend availability to those responsible for safeguarding critical infrastructure, public services, and essential digital systems.

Automated Systems for Trust Verification

To manage this controlled rollout, OpenAI is developing automated verification systems designed to authenticate individuals and organisations with legitimate needs for such powerful tools. "This allows us to expand access based on evidence and real signals of trust, rather than relying on manual decisions," the company explained. It further articulated its philosophy: "We don't think it's practical or appropriate to centrally decide who gets to defend themselves. Instead, we aim to enable as many legitimate defenders as possible, with access grounded in verification, trust signals, and accountability."

Preparatory Phase for Broader AI Deployment

OpenAI believes that this phased launch strategy will provide cybersecurity professionals with the opportunity to test and fortify their systems in anticipation of more widespread deployment of increasingly powerful and generally accessible AI models. This preparatory phase is crucial for ensuring that digital defences are robust enough to handle the advanced capabilities of next-generation artificial intelligence.
