
Former national cyber director: Anthropic’s ‘Mythos’ AI can hack nearly anything and we aren’t ready
```json { "title": "Anthropic's Mythos AI Can Hack Almost Anything—Are We Ready?", "metaDescription": "Anthropic's Claude Mythos Preview exposes critical gaps in U.S. cybersecurity. Former national cyber director Kemba Walden warns the country isn't prepared.", "content": "<h2>Anthropic's Claude Mythos Preview Is the Most Dangerous AI Hacking Tool Ever Built—and the U.S. Isn't Ready</h2><p>When Anthropic announced Claude Mythos Preview on April 7, 2026, it did something virtually unheard of in the AI industry: it refused to release its most powerful model to the public. The reason, the company said, was that Mythos poses <strong>unprecedented cybersecurity risks</strong>—the model can autonomously identify and exploit vulnerabilities across every major operating system and web browser, often with no human intervention beyond an initial prompt. Now, as former Acting National Cyber Director Kemba Walden and other policy figures weigh in on the national security implications, the Mythos launch is less a product announcement than a stress test—one that is exposing dangerous gaps in how the United States protects its critical infrastructure.</p><h2>What Claude Mythos Preview Can Actually Do</h2><p>The capabilities documented by Anthropic's own red team are stark. Claude Mythos Preview autonomously identified and exploited a 17-year-old remote code execution vulnerability in FreeBSD—tracked as CVE-2026-4747—that allows anyone to gain root access on a machine running NFS. No human was involved after the initial request. The model also found a 27-year-old vulnerability in OpenBSD that would allow hackers to remotely crash any machine running it. These were not isolated findings.</p><p>According to Anthropic's Frontier Red Team blog, Mythos Preview identified thousands of zero-day vulnerabilities—many of them critical—across every major operating system and every major web browser. 
Over 99% of those vulnerabilities had not yet been patched at the time of the announcement, which is why Anthropic said it could not publicly disclose details about most of them. In a striking internal demonstration, Anthropic engineers with no formal security training set Mythos Preview to work overnight hunting for remote code execution vulnerabilities and woke up the following morning to a complete, working exploit.</p><p>Logan Graham, head of Anthropic's frontier red team, said Mythos Preview has the skills of an advanced security researcher and can find "tens of thousands of vulnerabilities" that even the most skilled bug hunters would struggle to uncover. The U.K. AI Security Institute, which received early access, confirmed that Mythos was the first AI model able to complete its test simulating a full network takeover attack—though it noted the test environments lacked the same security features present in many real-world systems.</p><p>Anthropic itself acknowledged the gravity of the moment in an official statement: <em>"Although Mythos is currently far ahead of any other AI model in cyber capabilities, it presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."</em></p><h2>Project Glasswing: A Controlled Defensive Rollout</h2><p>Rather than a public launch, Anthropic unveiled Project Glasswing—a coordinated defensive security initiative that grants curated access to Mythos Preview for a select group of technology and cybersecurity partners. The 12 named launch partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks, alongside Anthropic itself. In total, over 40 organizations are participating.</p><p>The initiative is backed by up to $100 million in Mythos Preview usage credits and $4 million in direct donations to open-source security organizations. 
For Project Glasswing participants, Mythos Preview is accessible via the Claude API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry, priced at $25 per million input tokens and $125 per million output tokens.</p><p>Anthropic has also been briefing the Cybersecurity and Infrastructure Security Agency (CISA), the Commerce Department, and a broader array of government actors on the potential risks and benefits of Mythos Preview. In April 2026, Treasury Secretary Scott Bessent convened a meeting with major financial institutions specifically to discuss "the rapid developments taking place in AI" following the Mythos announcement.</p><p>Approximately one week after Anthropic announced Mythos and Project Glasswing, OpenAI announced a similarly limited rollout of its own latest cybersecurity-focused model, underscoring how quickly the competitive landscape is moving.</p><h2>The Leak, the Breach, and the Market Reaction</h2><p>The Mythos story did not begin on April 7. Fortune first reported on the model's existence in late March 2026, after a data leak from a misconfigured content management system exposed nearly 3,000 files, including a draft blog post describing the model. The premature disclosure rattled markets: shares in CrowdStrike, Palo Alto Networks, Zscaler, SentinelOne, Okta, Netskope, and Tenable all slumped between 5% and 11% in the aftermath.</p><p>Then, on or around the day of Mythos's public announcement, an unauthorized group reportedly gained access to Claude Mythos Preview through a third-party vendor environment. Anthropic said it was investigating the claim and that there was no evidence its own systems were impacted—but the incident immediately sharpened concerns about how reliably access to the model could be controlled, even under a restricted rollout framework.</p><p>The access question is not hypothetical. 
Graham warned in an interview with NBC News: <em>"We should be planning for a world where, within six months to 12 months, capabilities like this could be broadly distributed or made broadly available, not just by companies in the United"</em>—a statement that was cut off in reporting but whose implication was clear.</p><h2>Context: AI-Enabled Hacking Was Already Here</h2><p>The Mythos announcement did not arrive in a vacuum. In September 2025, Anthropic detected what it described as the first reported AI-orchestrated cyber espionage campaign—an operation it assessed with high confidence had been conducted by a Chinese state-sponsored group using Claude Code to target approximately 30 global organizations. The finding, detailed in an Anthropic blog post in November 2025, signaled that AI-enabled offensive cyber operations had already moved from theoretical risk to documented reality before Mythos was even publicly known.</p><p>That context matters when evaluating what Mythos represents. The model scored 31 percentage points higher than Anthropic's previous top model, Opus 4.6, on the USAMO 2026 Mathematical Olympiad—a benchmark that illustrates the leap in raw reasoning capability that underlies its security skills. 
Its hacking abilities are an expression of a broader advance in autonomous, agentic reasoning.</p><h2>Expert Reactions: Alarm, Caution, and Realism</h2><p>The cybersecurity community's response to Mythos has ranged from alarm to measured skepticism—but few are dismissing it outright.</p><p>Casey Ellis, founder of Bugcrowd, framed the problem in terms of the existing vulnerability backlog: <em>"We have way more vulnerabilities than most people like to admit; fixing them all was already difficult, and now they are far more easy to exploit by a far broader variety of potential adversaries."</em></p><p>Cynthia Kaiser, former deputy assistant director of the FBI's Cyber Division and ransomware research lead at Halcyon, drew a direct line to existing threat trends: <em>"We've been talking a long time about how AI has made initial access a lot easier for adversaries to accomplish."</em></p><p>Charlie Eriksen, a security researcher at Aikido Security, cautioned against treating Anthropic's controlled rollout as a lasting firewall: <em>"This technology is moving so fast that it's naive to assume others aren't able to easily replicate similar results, if not already, at least very soon."</em></p><p>Jeff Williams, chief technology officer of Contrast Security, was more blunt about the limits of Project Glasswing: <em>"Anthropic is holding back access to this particular model, but I don't think they're really going to be able to defend that line."</em></p><p>Ciaran Martin, former CEO of the U.K.'s National Cyber Security Centre and a professor at Oxford's Blavatnik School of Government, offered a more measured perspective: <em>"It's a big deal, but it's unlikely to prove to be the end of the world."</em></p><p>Anthropic CEO Dario Amodei acknowledged the arms-race dynamic directly: <em>"More powerful models are going to come from us and from others, and so we do need a plan to respond to this."</em></p><h2>Kemba Walden and the Policy Gap</h2><p>Among the policy figures engaging
with the implications of Mythos is Kemba Walden, who served as Acting National Cyber Director in 2023 after a tenure as the inaugural Principal Deputy National Cyber Director—a role she entered in May 2022 following a decade at the Department of Homeland Security and a stint as Assistant General Counsel in Microsoft's Digital Crimes Unit, where she launched and led the DCU's Ransomware Program. Since January 2024, Walden has been President of the Paladin Global Institute, a cybersecurity research and advocacy organization affiliated with Paladin Capital Group.</p><p>Walden's assessment, as reported by Fortune, centers on the gap between what AI models like Mythos can now do and what the United States has in place to protect its critical systems. Her concern is not merely about Mythos itself, but about the structural readiness—or lack thereof—of national cyber defense infrastructure to respond as these capabilities proliferate. The question she and others are raising is whether the frameworks built to protect power grids, financial systems, and communications networks were designed for a world in which a single AI model can autonomously discover and chain together thousands of previously unknown vulnerabilities overnight.</p><h2>What Comes Next</h2><p>Anthropic's Project Glasswing is framed as a temporary defensive measure to buy time for patching and preparedness—not a permanent solution. The company has been explicit that Mythos is the first of a coming wave, not an endpoint. With over 99% of the vulnerabilities it found still unpatched at the time of announcement, and with an unauthorized access incident already reported, the window between controlled access and broader availability may be narrower than the architecture of Project Glasswing assumes.</p><p>Government engagement is intensifying. CISA, the Commerce Department, and Treasury are all in active discussions with Anthropic. 
Whether those discussions translate into updated regulatory frameworks, mandatory patching timelines, or new defensive infrastructure investment remains to be seen. What is clear is that the pace of AI capability development has outrun the pace of policy response—and the Mythos announcement has made that gap impossible to ignore.</p><p>For more tech news, visit our <a href="/news">news section</a>.</p><h2>What This Means for Your Digital Security—and Your Peace of Mind</h2><p>At Moccet, we know that digital safety is inseparable from personal well-being and productivity. When the tools you rely on for work, health tracking, and daily life sit on infrastructure that may be more exposed than previously understood, staying informed is the first step toward staying protected. Understanding the AI-driven cybersecurity landscape—and the efforts underway to defend it—helps you make smarter decisions about the platforms and services you trust. <a href="/#waitlist">Join the Moccet waitlist to stay ahead of the curve.</a></p>", "excerpt": "Anthropic's Claude Mythos Preview, announced April 7, 2026, autonomously discovered thousands of unpatched zero-day vulnerabilities across every major operating system—and the company refused to release it publicly because of the risk. Former Acting National Cyber Director Kemba Walden and other policy figures are now warning that the United States is not prepared for what comes next.", "keywords": ["Claude Mythos Preview", "Anthropic cybersecurity", "AI hacking", "Project Glasswing", "Kemba Walden national cyber director"], "slug": "anthropic-claude-mythos-preview-cybersecurity-hacking-risks" } ```