A group of users leaked Anthropic’s AI model Mythos by reportedly guessing where it was located

```json { "title": "Anthropic's Mythos AI Breached by Discord Group", "metaDescription": "A Discord group gained unauthorized access to Anthropic's restricted Mythos AI model by guessing its URL. Experts warn China may already have access.", "content": "<h2>Anthropic's Claude Mythos AI Model Accessed Without Authorization by Discord Group on Launch Day</h2><p>Anthropic's Claude Mythos — described internally as the company's most powerful AI model ever developed and restricted to a curated list of approximately 40 companies due to its advanced cybersecurity capabilities — was accessed without authorization by a private Discord group on the very day it was publicly announced, according to a Bloomberg report published April 21, 2026. The breach, which Anthropic has confirmed it is investigating, raises serious questions about how securely the company has been able to contain a model it deemed too dangerous to release to the general public.</p><p>The incident is the latest in a series of security stumbles surrounding Mythos, a model whose existence was itself first revealed not through an official announcement but through an accidental data leak in late March 2026 — one that exposed close to 3,000 unpublished blog assets from Anthropic's content management system.</p><h2>How the Unauthorized Access Happened</h2><p>According to Fortune's reporting on April 23, 2026, the unauthorized group was able to gain access to Mythos through a combination of insider knowledge and educated guesswork. One member of the Discord group is a third-party contractor for Anthropic, providing a potential foothold into the company's vendor ecosystem. 
The group then reportedly used knowledge leaked from AI training startup Mercor — which had previously suffered its own data breach — about Anthropic's past naming and deployment practices to deduce the model's online location.</p><p>TechCrunch corroborated this account, reporting that the group "made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models." In other words, the group essentially reverse-engineered the model's URL, combining leaked institutional knowledge with contractor-level access to circumvent controls on what Anthropic had positioned as its most sensitive AI deployment to date.</p><p>According to Fortune, the Discord group has not been using Mythos for cyberattacks. However, the group has been using the model continuously since its release and, as of the time of reporting, still retains access.</p><p>Anthropic confirmed the incident in a statement issued to multiple outlets: <strong>"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments."</strong> The company also stated, according to Euronews Next, that there is currently no evidence that its core systems were impacted, nor that the reported activity extended beyond the third-party vendor environment.</p><h2>A Model That Was Already Defined by Leaks</h2><p>The unauthorized Discord access is not the first time Mythos has slipped outside Anthropic's intended boundaries. The model's existence was first revealed publicly on March 26, 2026, not through an official announcement but through a misconfiguration of Anthropic's content management system, which left a draft blog post sitting in a publicly accessible data cache. That cache, Fortune reported at the time, contained close to 3,000 assets linked to Anthropic's blog that had not been previously published but were nonetheless publicly accessible. 
Anthropic acknowledged the incident as a "human error" and described the leaked material as "early drafts of content considered for publication."</p><p>The draft blog post that surfaced in that leak described Mythos as "by far the most powerful AI model we've ever developed" — a characterization that drew immediate attention given the cybersecurity implications Anthropic had flagged internally.</p><p>A second leak, reported by Breitbart News, exposed approximately 500,000 lines of code contained within roughly 1,900 files, compounding concerns about the company's data security practices in the lead-up to Mythos's controlled release.</p><h2>What Makes Mythos So Sensitive</h2><p>Anthropic did not release Mythos to the public. Instead, on April 10, 2026, the company launched Project Glasswing — a controlled program that limited access to approximately 40 companies, including Microsoft, Apple, Google, Amazon, Cisco, JPMorgan Chase, and Nvidia. According to CBS News, the goal of Project Glasswing was to help these companies harden their defenses before bad actors could gain access to Mythos or similar AI models.</p><p>The rationale for that caution is grounded in the model's demonstrated capabilities. Mozilla CTO Bobby Holley revealed that Mythos was able to find 271 vulnerabilities in the latest build of Firefox. Separately, the model was used to identify a 27-year-old security vulnerability in OpenBSD, an operating system long regarded for its rigorous security standards. These are not theoretical capabilities — they are documented outputs from controlled testing that underscore why Anthropic treated Mythos differently from any model it had previously released.</p><p>The model's power also caught the attention of Washington. 
According to Fortune, Treasury Secretary Scott Bessent convened a meeting of senior American bankers in April 2026 to discuss the Mythos model and encouraged banking executives to use it to detect vulnerabilities in their own systems before adversaries could exploit them.</p><p>Anthropic, which is valued at approximately $380 billion according to Breitbart News, has staked significant credibility on its safety-first positioning. That positioning is now being tested by a series of incidents that suggest the company's operational security has not kept pace with the sensitivity of the technology it is developing.</p><h2>Expert Reactions: 'It Was Bound to Happen'</h2><p>David Lindner, Chief Information Security Officer at Contrast Security, did not mince words when speaking to Fortune about the breach.</p><p>"It was bound to happen," Lindner said. He elaborated on the structural problem with Anthropic's approach to restricted access: "The more they add to this elite group, the more likely it was to get released to someone who shouldn't probably have access to it."</p><p>Lindner's most pointed warning, however, concerned the geopolitical implications of the breach. "If some group — some random Discord online forum, got access to it," he told Fortune, "it's already been breached by China."</p><p>The logic is straightforward and sobering: if a small, informal group of enthusiasts on a Discord server was able to access Mythos by guessing a URL and leveraging contractor credentials, the assumption that more sophisticated and better-resourced adversaries have not done the same is difficult to sustain.</p><p>Not everyone in the industry has accepted Anthropic's framing of Mythos as an unprecedented security risk. 
OpenAI's Sam Altman publicly characterized Anthropic's promotion of the model as "fear-based marketing," according to Fortune — a critique that reflects the competitive dynamics at play as the leading AI companies jostle for positioning in what has become an intensely scrutinized sector.</p><h2>What Comes Next</h2><p>Anthropic has confirmed it is actively investigating the breach and has stated that no evidence of impact to its core systems has been found. The investigation remains ongoing, and the company has not announced any changes to Project Glasswing or to its access controls for Mythos as of April 23, 2026.</p><p>The incident raises unresolved questions that will likely define the next phase of Anthropic's public accountability around Mythos. Chief among them: how does a company responsibly contain a model it has described as too powerful for public release, when its own vendor ecosystem — extended across dozens of third-party contractors and partner organizations — creates an inherently porous perimeter? And what obligations, if any, does Anthropic have to the broader cybersecurity community now that Mythos is confirmed to be operating outside its intended boundaries?</p><p>The Discord group's continued access to Mythos, reported by Fortune as of the time of writing, suggests the practical answer to those questions remains unresolved. 
Whether Anthropic can close off that access — and whether similar groups or more sophisticated actors have already replicated the method — remains to be seen.</p><p>For security teams and technology leaders, particularly those within the Project Glasswing network, the incident is a prompt to treat the model's containment as already compromised and to assess their own exposure accordingly.</p><p>For more tech news, visit our <a href=\"/news\">news section</a>.</p>", "excerpt": "A private Discord group gained unauthorized access to Anthropic's restricted Claude Mythos AI model on its launch day, reportedly by guessing its URL using leaked naming conventions and contractor credentials. Anthropic confirmed it is investigating the breach, stating no core systems were impacted. Cybersecurity experts warn the breach may signal that far more sophisticated adversaries already have access.", "keywords": ["Anthropic Mythos", "Claude Mythos breach", "Project Glasswing", "AI cybersecurity leak", "Anthropic security incident"], "slug": "anthropic-mythos-ai-breached-discord-group" } ```
