
OpenAI Unveils Its New, More Powerful GPT-5.5 Model
```json { "title": "OpenAI GPT-5.5 Launches With Stronger Coding and Cyber Tools", "metaDescription": "OpenAI releases GPT-5.5 'Spud' on April 23, 2026, with improved coding, stronger agentic capabilities, and a cybersecurity strategy that diverges from rival Anthropic's.", "content": "<h2>OpenAI Launches GPT-5.5, Its Most Capable Model Yet</h2><p>OpenAI released <strong>GPT-5.5</strong> on April 23, 2026, rolling out its latest and most capable AI model to paid subscribers across ChatGPT and its coding assistant Codex. The release comes just six weeks after the company debuted GPT-5.4, underscoring the increasingly rapid cadence of frontier AI development. Internally codenamed <em>Spud</em>, GPT-5.5 is designed to perform better at coding, computer use, agentic tasks, and deeper scientific research — and its launch is drawing as much attention for its cybersecurity posture as for its raw capabilities.</p><p>Access extends to Plus, Pro, Business, and Enterprise subscribers on ChatGPT, as well as to Codex users. A larger variant, GPT-5.5 Pro, is available to users on the $100-per-month Pro tier or on ChatGPT Business and Enterprise accounts. For developers accessing the model via API, pricing has doubled compared to GPT-5.4: $5 per million input tokens and $30 per million output tokens, up from $2.50 and $15, respectively.</p><p>According to OpenAI, ChatGPT now serves more than 900 million weekly active users and over 50 million subscribers. The company reported 4 million active Codex users and 9 million paying business users on ChatGPT at the time of the GPT-5.5 launch.</p><h2>Performance Benchmarks: What GPT-5.5 Can Actually Do</h2><p>OpenAI backed the GPT-5.5 release with a series of benchmark results. On Terminal-Bench 2.0, a test of command-line workflow performance, GPT-5.5 scored 82.7%, compared to 75.1% for GPT-5.4. 
On the Expert-SWE internal evaluation for coding tasks, the new model scored 73.1%, up from 68.5% for its predecessor.</p><p>On GDPVal — a benchmark covering 44 occupations that measures how AI performs against human workers — GPT-5.5 outperformed or tied with human workers on approximately 85% of tasks. That compares to 83% for GPT-5.4 and 80% for Anthropic's Opus 4.7, according to Inc. OpenAI tested GPT-5.5 with approximately 200 early-access partners before the public release.</p><p>One of those early testers was The Bank of New York, which has been evaluating GPT-5.5 in recent weeks alongside models from rival companies including Anthropic. The bank's experience points to a quality that may matter as much as raw benchmark scores in regulated industries.</p><p>"What we're actually seeing from 5.5, that I think is really important for a highly regulated institution, is the response quality — but also a really impressive hallucination resistance," said Leigh-Ann Russell, CIO of The Bank of New York, in comments reported by Fortune.</p><p>Nvidia has also been vocal about the model's potential at enterprise scale. According to Axios, Nvidia vice president of enterprise computing Justin Boitano said GPT-5.5 can act as a "chief of staff," helping power agents that are already acting as employees at Nvidia. Separately, Nvidia said its new chips cut the per-token cost of running advanced AI like GPT-5.5 by a factor of up to 35.</p><h2>Cybersecurity Capabilities: OpenAI's 'High Risk' Classification and a Divergence With Anthropic</h2><p>A central element of the GPT-5.5 announcement is OpenAI's formal cybersecurity risk assessment of the model. 
The company said GPT-5.5 does not cross its "Critical" cybersecurity risk threshold but does meet the criteria for its "High" risk classification — meaning the model is capable of meaningfully assisting with offensive cyber operations, but not at a level the company considers catastrophic.</p><p>OpenAI said it subjected GPT-5.5 to extensive third-party safeguard testing and red-teaming before release. "GPT-5.5 underwent extensive third-party safeguard testing and red teaming for cyber and bio [risks], and we've been iterating on our cyber safeguards for months with increasingly cyber capable models," said Mia Glaese, Vice President of Research at OpenAI, in comments reported by CNBC.</p><p>The launch also throws into sharper relief a widening philosophical divide between OpenAI and Anthropic over how to handle AI models capable of autonomously identifying and exploiting software vulnerabilities. According to SecurityBrief Australia, the two companies have taken sharply different approaches to releasing such models.</p><p>Earlier in April, OpenAI launched GPT-5.4-Cyber — a variant of GPT-5.4 with fewer restrictions on cybersecurity-related queries — days after Anthropic unveiled Claude Mythos Preview. OpenAI limited GPT-5.4-Cyber's rollout to vetted security vendors through its Trusted Access for Cyber program, which launched in February alongside a $10 million cybersecurity grant program. The program features tiered verification levels, with higher tiers unlocking more capable tools, according to PYMNTS. OpenAI's Codex Security product has contributed to fixes on more than 3,000 critical and high-severity vulnerabilities since its launch.</p><p>Anthropic, by contrast, has kept Claude Mythos Preview inside a tightly controlled initiative called Project Glasswing. 
Participants include AWS, Apple, Google, Microsoft, Nvidia, Cisco, CrowdStrike, and the Linux Foundation, according to SecurityBrief Australia — a relatively narrow group compared to the broader, tiered access OpenAI has extended through its Trusted Access program. Anthropic's Claude Mythos scored 83.1% on the CyberGym benchmark, up from 66.6% for its predecessor.</p><p>In a blog post attributed to OpenAI executives collectively, the company stated: "Our goal is to make these tools as widely available as possible while preventing misuse."</p><h2>Industry Reaction: A Philosophical Split at the Frontier</h2><p>Observers outside the two companies have begun to characterize the divergence as something more fundamental than a product decision. Frank Dickson, Group Vice President for Security and Trust Research Practice at IDC, offered a pointed assessment of the two approaches. "It seems as though the OpenAI approach is much more considered when it comes to cybersecurity," Dickson said, in comments reported by BankInfoSecurity and ISMG.</p><p>OpenAI co-founder and President Greg Brockman framed the broader significance of the model's release in more expansive terms. "This is a new class of intelligence. It's a big step towards more agentic and intuitive computing," Brockman said, according to Axios. In separate comments to CNBC, he added: "What is really special about this model is how much more it can do with less guidance."</p><p>That last point — the model's capacity to operate with less explicit instruction — connects directly to the agentic computing use case that OpenAI and partners like Nvidia are emphasizing. 
Models that require less hand-holding to complete complex, multi-step tasks represent a meaningful shift in how AI is being deployed in enterprise environments, though the practical implications for security and oversight are still being worked out across the industry.</p><h2>Context: Why the Pace of This Release Matters</h2><p>Six weeks between major model releases is a striking interval. GPT-5.4 was itself a significant capability milestone when it launched. The arrival of GPT-5.5 just six weeks later signals that the competitive dynamics between frontier AI labs are compressing development timelines in ways that are beginning to challenge the industry's own safety and evaluation frameworks.</p><p>OpenAI's decision to classify GPT-5.5 under its "High" cybersecurity risk tier — rather than clearing it as low-risk — is notable precisely because the company still chose to release it broadly to paying subscribers. That decision reflects OpenAI's stated belief that wider, verified access to capable defensive AI tools produces better overall security outcomes. Anthropic's tighter controls around Claude Mythos suggest a different calculus: that the potential for misuse outweighs the benefits of broad distribution, at least at the current capability level.</p><p>Both positions carry genuine trade-offs. Security teams at organizations that cannot qualify for Project Glasswing or OpenAI's highest Trusted Access tiers may find themselves relying on less capable tools, while the most sensitive capabilities remain concentrated among a small group of large technology companies and vendors. 
At the same time, broader access without robust verification creates real risks, given that both GPT-5.5 and Claude Mythos are, by the companies' own accounts, capable of autonomously identifying and exploiting software vulnerabilities at unprecedented scale.</p><p>The Bank of New York's evaluation of GPT-5.5 alongside Anthropic's models offers a glimpse of how large regulated institutions are approaching this moment — not by committing to a single vendor, but by running parallel assessments as the competitive landscape continues to shift rapidly.</p><h2>What Comes Next</h2><p>GPT-5.5 is now live for paid subscribers across ChatGPT Plus, Pro, Business, and Enterprise tiers, as well as Codex. The API is available to developers at the new pricing structure. OpenAI has not announced a specific timeline for what follows GPT-5.5, though the six-week gap between this release and GPT-5.4 suggests the company is operating on an accelerated cadence that shows no signs of slowing.</p><p>The competition between OpenAI and Anthropic over cybersecurity AI distribution models is also likely to intensify. 
With Claude Mythos Preview still tightly restricted and GPT-5.4-Cyber already deployed to thousands of verified defenders, the gap between the two companies' approaches to access is measurable and consequential for enterprise security teams making vendor decisions now.</p><p>For organizations evaluating either platform, the coming months will likely clarify how each company's cybersecurity philosophy translates into real-world outcomes — and whether the more open approach produces the defensive advantages OpenAI claims, or the risks Anthropic is working to contain.</p><p>For more tech news, visit our <a href=\"/news\">news section</a>.</p><h2>Why This Matters for Your Productivity</h2><p>The capabilities arriving in GPT-5.5 — reduced need for manual guidance, stronger coding performance, and more reliable outputs in high-stakes environments — are directly relevant to how professionals manage complex workloads and stay on top of rapidly changing information. As AI tools become more embedded in daily work, understanding which models are worth integrating, and how to use them safely, is itself a productivity imperative. Moccet is built to help you navigate exactly that. <a href=\"/#waitlist\">Join the Moccet waitlist to stay ahead of the curve.</a></p>", "excerpt": "OpenAI released GPT-5.5, internally codenamed 'Spud,' on April 23, 2026 — just six weeks after GPT-5.4 — bringing improved coding, agentic capabilities, and a 'High' cybersecurity risk classification to paid subscribers. The launch deepens a philosophical divide between OpenAI and Anthropic over how broadly to distribute AI models capable of autonomously identifying software vulnerabilities. 
Early testers including The Bank of New York highlighted the model's hallucination resistance as a standout feature for regulated industries.", "keywords": ["GPT-5.5", "OpenAI", "AI model release 2026", "cybersecurity AI", "Anthropic Claude Mythos"], "slug": "openai-gpt-5-5-launch-coding-cybersecurity-2026" } ```
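
The article's pricing claims reduce to simple per-token arithmetic: $5 per million input tokens and $30 per million output tokens, double GPT-5.4's $2.50 and $15. A minimal sketch of that calculation follows; the function name and token counts are illustrative, not part of any OpenAI SDK.

```python
# Per-request cost arithmetic at the GPT-5.5 API rates quoted in the article.
# Rates are USD per one million tokens; the example token counts are hypothetical.

GPT55_INPUT_RATE = 5.00    # $ per 1M input tokens
GPT55_OUTPUT_RATE = 30.00  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single API call at the quoted rates."""
    return (input_tokens * GPT55_INPUT_RATE
            + output_tokens * GPT55_OUTPUT_RATE) / 1_000_000

# Example: a 10,000-token prompt producing a 2,000-token reply.
print(f"${request_cost(10_000, 2_000):.2f}")  # $0.11
```

At the quoted GPT-5.4 rates ($2.50 and $15), the same hypothetical request would cost half as much, $0.055, matching the article's "pricing has doubled" framing.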