By Abdul Wasay ⏐ 14 seconds ago ⏐ Newspaper Icon Newspaper Icon 4 min read
OpenAI

OpenAI has issued one of its strongest warnings yet, cautioning that its next generation of artificial intelligence models may introduce “high” cybersecurity risks as their capabilities accelerate beyond existing safeguards. The company said on Wednesday that rapidly advancing systems could make it easier for attackers to generate sophisticated exploits, automate intrusion attempts and craft targeted cyberattacks, raising urgent questions about how to secure the AI tools now embedded across global enterprises.

In its statement, OpenAI acknowledged that advanced models can strengthen defense by identifying vulnerabilities, improving code quality and supporting security audits, but stressed that the same capabilities could also supercharge offensive operations. The firm warned that future systems may be able to assist in the creation of zero day exploits and other attack methods capable of breaching highly protected environments. To counter this risk, OpenAI said it is expanding access controls, improving infrastructure security, tightening egress monitoring and developing new defensive tooling for sensitive model tiers. The blog says:

Our models are designed and trained to operate safely, supported by proactive systems that detect and respond to cyber abuse. We continuously refine these protections as our capabilities and the threat landscape change. While no system can guarantee complete prevention of misuse in cybersecurity without severely impacting defensive uses, our strategy is to mitigate risk through a layered safety stack.

The concerns echo findings from international AI safety researchers, including contributors to a recent global risk review who warned that increasingly powerful systems may become difficult to reliably supervise. They noted that without adequate oversight, advanced models could expose digital systems to powerful automated attacks or unintentionally reveal weaknesses.

OpenAI plans to introduce a tiered access programme for vetted cybersecurity professionals, giving them enhanced tools for defensive work while limiting high risk capabilities. The company also announced a Frontier Risk Council composed of senior cybersecurity experts who will evaluate threats posed by upcoming models and provide guidance on broader AI safety challenges.

Cyber capabilities in AI models are advancing rapidly, bringing meaningful benefits for cyberdefense as well as new dual-use risks that must be managed carefully. For example, capabilities assessed through capture-the-flag (CTF) challenges have improved from 27% on GPT‑5⁠(opens in a new window) in August 2025 to 76% on GPT‑5.1-Codex-Max⁠(opens in a new window) in November 2025.

The warning arrives amid growing scrutiny of AI misuse. Security analysts have documented vulnerabilities such as jailbreaks and prompt injection attacks that manipulate models into executing harmful actions. Regulators are also stepping in: a coalition of state attorneys general in the United States has called for stronger testing, auditing and safety disclosures from major AI developers.

We expect that upcoming AI models will continue on this trajectory; in preparation, we are planning and evaluating as though each new model could reach ‘High’ levels of cybersecurity capability, as measured by our Preparedness Framework⁠. By this, we mean models that can either develop working zero-day remote exploits against well-defended systems, or meaningfully assist with complex, stealthy enterprise or industrial intrusion operations aimed at real-world effects. This post explains how we think about safeguards for models that reach these levels of capability, and ensure they meaningfully help defenders while limiting misuse.

Aardvark, our agentic security researcher that helps developers and security teams find and fix vulnerabilities at scale, is now in private beta. It scans codebases for vulnerabilities and proposes patches that maintainers can adopt quickly. It has already identified novel CVEs in open-source software by reasoning over entire codebases. We plan to offer free coverage to select non-commercial open source repositories to contribute to the security of the open source software ecosystem and supply chain.

As AI becomes more deeply integrated into critical infrastructure and enterprise operations, experts say only robust governance, transparent risk assessment and industrywide cooperation will prevent emerging models from becoming tools for exploitation rather than innovation.

Just a few hours ago, OpenAI CEO Sam Altman also pointed out how his AI magnum opus might result in the extermination of jobs of various kinds globally. He was talking to Jimmy Fallon in his Tonight Show.