
Exclusive: OpenAI, Anthropic meet with House Homeland Security behind closed doors on cyber threats
```json { "title": "OpenAI and Anthropic Brief Congress on AI Cyber Threats", "metaDescription": "OpenAI and Anthropic held closed-door briefings with House Homeland Security Committee staff on cyber-capable AI models and critical infrastructure risks.", "content": "<h2>OpenAI and Anthropic Brief House Homeland Security on AI Cyber Threats in Closed-Door Sessions</h2><p>OpenAI and Anthropic met behind closed doors with House Homeland Security Committee staff on April 28, 2026, briefing staffers on their newly launched cyber-capable AI models and the security implications those systems carry for critical infrastructure, according to Axios. The sessions are among the first dedicated AI cybersecurity threat briefings that congressional staff have received from the two leading AI companies, and they arrive at a moment when both firms have deployed powerful — and deliberately restricted — AI tools capable of autonomously finding and exploiting software vulnerabilities.</p><p>The briefings cap several weeks of intensive government outreach by both companies surrounding the launches of Anthropic's Claude Mythos Preview and OpenAI's GPT-5.4-Cyber, two frontier AI models with documented offensive cyber capabilities that have prompted significant debate in Washington about how such systems should be governed, shared, and protected.</p><h2>What Happened in the Closed-Door Briefings</h2><p>According to Axios, OpenAI confirmed that its meeting with House Homeland Security Committee staff was one of several briefings the company held with Senate and House committees during the prior week, alongside a separate briefing with the White House. Anthropic, for its part, described its participation as part of a continuing pattern of engagement. 
An Anthropic spokesperson told Axios that the company regularly briefs "congressional staff on model capabilities and their national security implications," and that last week's session was part of "that ongoing engagement."</p><p>Committee spokesperson Holland told Axios: "These discussions are focusing on strengthening our critical infrastructure and cybersecurity posture, as well as how DHS evaluates, acquires, and integrates emerging technologies like AI."</p><p>Committee Chair Andrew Garbarino (R-N.Y.) has been convening a series of private roundtables with tech and AI executives and is working on legislation to establish a federal framework for AI standards, according to the Washington Post as cited by Axios. The committee has also held several hearings on the implications of generative AI for national security, including the threat of nation-state cyberattacks.</p><p>This was not Anthropic's first appearance before the committee. A March 2026 Axios report revealed that Anthropic's Jack Clark had met with committee lawmakers in a prior closed-door session, with discussion focused on issues including model distillation and export controls.</p><h2>The Models at the Center of the Briefings</h2><p>The congressional sessions were shaped largely by the capabilities — and the deliberate access restrictions — surrounding two recently launched AI systems.</p><h3>Anthropic's Claude Mythos Preview and Project Glasswing</h3><p>Anthropic announced Claude Mythos Preview on April 7, 2026, but chose not to release it publicly. Instead, the company launched Project Glasswing, a restricted program that has extended access to over 40 organizations — including Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks — along with at least two federal government entities. 
Anthropic is providing up to $100 million in usage credits to the companies testing Mythos Preview, and $4 million in direct donations to open-source security organizations.</p><p>The reason for the restricted rollout is stark. According to Anthropic's official red team blog, Mythos Preview developed working exploits for Mozilla Firefox 147 JavaScript engine vulnerabilities 181 times in benchmark testing, compared to just two times for the prior Opus 4.6 model. Anthropic's Project Glasswing page states that Mythos Preview has already found thousands of zero-day vulnerabilities across every major operating system and every major web browser.</p><p>Notably, Anthropic's red team blog confirmed that these advanced cyber capabilities were not explicitly trained into the model. They emerged as a downstream consequence of general improvements in code generation, reasoning, and AI autonomy — a finding that has significant implications for how the industry thinks about capability forecasting and safety.</p><p>An Anthropic official, quoted by CNBC, stated: "Prior to any external release, Anthropic briefed senior officials across the U.S. government on Mythos Preview's full capabilities, including both its offensive and defensive cyber applications."</p><p>The U.K. Government's AI Security Institute (AISI) independently evaluated Mythos Preview and confirmed that, in controlled scenarios where the model was given explicit direction and network access, it could execute multi-stage attacks on vulnerable networks and autonomously discover and exploit vulnerabilities — tasks that would typically take human professionals days to complete.</p><p>Despite the Pentagon's designation of Anthropic as a "supply chain risk to national security" — issued by Defense Secretary Pete Hegseth in late February 2026 — Axios reported on April 22, 2026, that the NSA is nonetheless testing Mythos Preview. 
Anthropic's legal challenge to the DOD designation remains active: a federal judge in San Francisco initially granted a preliminary injunction, but a federal appeals court has since denied the company's request to temporarily block the blacklisting. The contradictory legal landscape reflects the broader tension in Washington over how to handle a model that is simultaneously viewed as a security threat and a national security asset.</p><p>Fortune reported on April 23, 2026, that Anthropic confirmed it was investigating a report claiming unauthorized access to Claude Mythos Preview through one of its third-party vendor environments — an episode that underscores the risks inherent in even tightly controlled rollouts.</p><h3>OpenAI's GPT-5.4-Cyber and the Trusted Access for Cyber Program</h3><p>OpenAI took a broader but still access-controlled approach with GPT-5.4-Cyber, launched on April 14, 2026. According to OpenAI's official blog, the company classified the model as having "High" cyber capability under its Preparedness Framework and is scaling its Trusted Access for Cyber (TAC) program to thousands of verified individual defenders and hundreds of teams.</p><p>OpenAI held a demonstration event in Washington, D.C., for approximately 50 cyber defense practitioners across the federal government, according to Axios. 
The company also initiated briefings with Five Eyes members — the United States, Australia, Canada, New Zealand, and the United Kingdom — so that allied partners could be vetted and approved for access to GPT-5.4-Cyber.</p><p>OpenAI's official blog also noted that its Codex Security tool has contributed to fixing over 3,000 critical and high-severity vulnerabilities since its recent launch.</p><p>Fouad Matin, a cyber researcher at OpenAI, told Axios: "This is a team sport, we need to make sure that every single team is empowered to secure their systems."</p><h2>Why This Matters: Critical Infrastructure and the Cyber Defense Gap</h2><p>The urgency behind these congressional briefings is not abstract. The House Homeland Security Committee has explicitly framed its AI hearings around threats to critical infrastructure — utilities, water systems, financial networks, and health systems — many of which are under-resourced when it comes to cybersecurity. A 2025 report cited by the Centre for Emerging Technology and Security (CETAS) at the Alan Turing Institute found that, on average, over 45% of discovered security vulnerabilities in large organizations remain unpatched after 12 months.</p><p>The federal government's own cyber capacity is also under strain. 
During a March 25, 2026 House Homeland Security Committee hearing, CISA acting Director Nick Andersen stated that approximately 60% of agency employees had been furloughed or were otherwise unable to work, according to reporting cited by the Daily Caller News Foundation.</p><p>This combination — AI systems that can now autonomously discover and weaponize vulnerabilities at scale, and a federal cyber workforce operating well below full capacity — forms the central challenge that these closed-door briefings were designed to address.</p><p>An Anthropic company official, quoted by Axios, described the current moment as an opportunity: "There's an opportunity here to give a shot in the arm to defense and to keep pace with this long-standing trend where offense exploitation had an advantage."</p><p>But the risks of getting the balance wrong are real. Logan Graham, who leads offensive cyber research at Anthropic, told NBC News: "We are not confident that everybody should have access right now. We need to start figuring out how we'd prepare for a world of this first before we can handle the idea of black hat [criminal or adversarial] hackers having access."</p><p>Graham also described specific characteristics of the Mythos model that set it apart from prior AI systems: "We've regularly seen it chain vulnerabilities together. The degree of its autonomy and sort of long ranged-ness, the ability to put multiple things together, I think, is a particular thing about this model."</p><h2>Expert Reactions</h2><p>Outside the two companies, security professionals have been candid about the stakes. Katie Moussouris, CEO and co-founder of Luta Security, told NBC News simply: "It's all very much real." She added: "We are definitely going to see some huge ramifications."</p><p>The White House has also been engaged. 
According to CNBC, Vice President JD Vance and Treasury Secretary Scott Bessent questioned leading tech CEOs — including Anthropic's Dario Amodei, OpenAI's Sam Altman, Google's Sundar Pichai, Microsoft's Satya Nadella, and xAI's Elon Musk — about AI security and cybersecurity risks ahead of Anthropic's Mythos release.</p><h2>What Comes Next</h2><p>Several threads remain unresolved. House Homeland Security Chair Garbarino's work on a federal AI standards framework is ongoing, and the closed-door briefings with OpenAI and Anthropic are likely to inform that legislation. How a federal framework would address the governance of cyber-capable AI models — including questions of access tiers, liability, and mandatory disclosure of emergent capabilities — remains to be seen.</p><p>Anthropic's legal dispute with the Pentagon over its "supply chain risk" designation continues in the courts, even as the NSA tests its most powerful model. That paradox alone is likely to generate further congressional attention. Anthropic also continues to investigate the reported unauthorized access to Mythos Preview through a third-party vendor, and the outcome of that inquiry could affect the terms of Project Glasswing going forward.</p><p>OpenAI's Five Eyes briefings signal that the international dimension of AI cybersecurity governance is also accelerating. 
How allied governments coordinate access controls, capability assessments, and defensive deployment of systems like GPT-5.4-Cyber will be a defining question for the months ahead.</p><p>What is already clear is that the era of powerful AI systems with autonomous offensive cyber capabilities has arrived, and Washington is working to catch up — through legislation, briefings, court battles, and restricted access programs — all at once.</p><p>For more tech news, visit our <a href=\"/news\">news section</a>.</p>", "excerpt": "OpenAI and Anthropic briefed House Homeland Security Committee staff in closed-door sessions on April 28, 2026, covering the cybersecurity implications of their new AI models, including Anthropic's Mythos Preview and OpenAI's GPT-5.4-Cyber. The briefings are among the first dedicated AI cyber threat sessions Congress has received from the two companies, arriving as both firms navigate restricted rollouts, government access programs, and mounting questions about AI-enabled attacks on critical infrastructure.", "keywords": ["AI cybersecurity", "Anthropic Mythos Preview", "OpenAI GPT-5.4-Cyber", "House Homeland Security Committee", "AI cyber threats"], "slug": "openai-anthropic-brief-congress-ai-cyber-threats" } ```