Anthropic: No "kill switch" for AI in classified settings

```json { "title": "Anthropic: No Kill Switch for AI in Pentagon Systems", "metaDescription": "Anthropic told a federal appeals court it has no ability to control or shut down Claude once deployed in classified Pentagon networks. Here's what's at stake.", "content": "<h2>Anthropic Tells Appeals Court It Cannot Control Claude Once Deployed in Classified Military Networks</h2><p>Anthropic, the AI safety company behind the Claude model, has told a federal appeals court in Washington D.C. that it has no technical ability to monitor, manipulate, or shut down its artificial intelligence once it is deployed inside classified Pentagon systems — directly challenging the U.S. Department of Defense's claim that the company poses a risk of future sabotage. The disclosure came in a 96-page filing submitted to the U.S. Court of Appeals ahead of oral arguments scheduled for May 19, 2026, and marks a significant escalation in a legal battle that has pitted one of Silicon Valley's most prominent AI firms against the Trump administration.</p><p>The filing, reported by the Associated Press on April 22, 2026, offers the clearest window yet into Anthropic's legal strategy in a lawsuit that originated after the Pentagon designated the company a supply chain risk — the first such designation ever applied to an American company — and required all defense contractors to certify they were not using Anthropic's products in their Pentagon work.</p><h2>No Remote Access, No Kill Switch, No Backdoor</h2><p>At the center of Anthropic's argument is a technical reality about how AI models are deployed in sensitive government environments. 
According to sworn testimony submitted as part of the court record, Thiyagu Ramasamy, Anthropic's Head of Public Sector, declared that once Claude is deployed inside a government-secured, air-gapped system operated by a third-party contractor, Anthropic has no access to it — no remote kill switch, no backdoor, and no mechanism to push unauthorized updates.</p><p>Ramasamy further stated that any change to the model would require the Pentagon's explicit approval and action to install, making any notion of an "operational veto" by Anthropic a fiction. He also confirmed that Anthropic cannot see what government users are typing into the system, let alone extract that data.</p><p>These declarations directly contradict the Pentagon's framing of Anthropic as a potential saboteur of U.S. military operations. According to court records cited by CNN, internal Pentagon documents showed the supply chain risk designation was driven not by genuine national security concerns but by Anthropic's "hostile manner through the press" — a detail that proved pivotal in a parallel court proceeding in California.</p><h2>Two Courts, Two Outcomes — So Far</h2><p>The legal fight between Anthropic and the Pentagon is playing out across two separate federal courts, with notably different results.</p><p>In San Francisco, U.S. District Judge Rita Lin granted Anthropic a preliminary injunction in late March 2026, blocking the Pentagon's supply chain risk designation while the case proceeds. In a 43-page ruling, Judge Lin found that the government's actions violated Anthropic's First Amendment and due process rights. She wrote that "nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government." 
She also noted, quoting Pentagon records directly, that "The Department of War's records show that it designated Anthropic as a supply chain risk because of its 'hostile manner through the press.'"</p><p>The outcome in Washington D.C. has been less favorable for Anthropic — at least so far. The U.S. Court of Appeals there rejected Anthropic's request for an emergency order that would have blocked the Pentagon's actions while the case was under review. Oral arguments are now scheduled for May 19, 2026, before which the Trump administration will have the opportunity to file its response to Anthropic's 96-page brief.</p><h2>How the Dispute Began: A $200 Million Contract and Two Red Lines</h2><p>The roots of this conflict stretch back to July 2025, when Anthropic signed a $200 million contract with the U.S. Department of Defense — a deal that made Claude the first frontier AI model approved for use on classified military networks. The contract appeared to represent a major milestone for both the company and the broader effort to integrate advanced AI into national security infrastructure.</p><p>But the relationship collapsed in late February 2026 after Anthropic refused to remove contractual guardrails it had insisted upon from the start: no use of Claude for fully autonomous weapons without human oversight, and no mass domestic surveillance of Americans. According to CBS News, the Trump administration demanded the ability to use Claude for "all lawful purposes" — a formulation Anthropic would not accept.</p><p>On February 27, 2026, Defense Secretary Pete Hegseth formally designated Anthropic a supply chain risk, effective immediately. Within hours, OpenAI announced a deal to deploy its own models in classified military environments under terms that granted the Pentagon use for "all lawful purposes" — effectively stepping into the role Anthropic had vacated.</p><p>According to NPR, the Pentagon continued to use Anthropic's models to support U.S.
military operations in the ongoing conflict with Iran even as negotiations between the two sides collapsed — a detail that underscores the practical entanglement that existed even amid the public rupture.</p><h2>What Anthropic Says It Never Claimed</h2><p>A key element of the Pentagon's case against Anthropic is the allegation that the company sought an approval role over military operations — essentially that Anthropic wanted a veto over how the U.S. military could use its own technology. Anthropic has disputed this characterization in sworn declarations.</p><p>Sarah Heck, Anthropic's Head of Policy and a former National Security Council official, submitted a sworn declaration stating: "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role."</p><p>Anthropic's lawsuit argues that the supply chain risk designation — described as the first ever applied to an American company — amounts to government retaliation for the company's publicly stated views on AI safety, in violation of the First Amendment. The company has been consistent in framing its position not as an attempt to control military operations, but as a refusal to strip safety limits from its own technology before handing it over.</p><p>Dario Amodei, Anthropic's CEO and co-founder, offered a pointed defense of that stance in remarks to CBS News: "I think we are a good judge of what our models can do reliably and what they cannot do reliably."</p><p>The Trump administration and Pentagon have pushed back forcefully. Acting U.S. Attorney General Todd Blanche stated: "Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company." 
The Pentagon, for its part, has maintained that "this has been about one fundamental principle: the military being able to use technology for all lawful purposes."</p><h2>Why This Case Matters Beyond Anthropic</h2><p>The Anthropic-Pentagon dispute is more than a contract disagreement between one company and one government agency. It raises fundamental questions about the terms under which private AI companies can — or should — supply technology to military and intelligence clients, and what leverage, if any, those companies retain once their models are embedded in classified infrastructure.</p><p>The technical argument Anthropic is making in its court filing — that it has no kill switch, no backdoor, and no visibility into classified deployments — is, if accurate, a double-edged sword. On one hand, it undercuts the Pentagon's sabotage argument. On the other, it surfaces a broader accountability gap: once a frontier AI model is deployed inside an air-gapped government system, neither the company that built it nor the public has meaningful visibility into how it is being used.</p><p>The first-ever supply chain risk designation applied to an American company, and the competing injunctions from two federal courts, suggest that existing legal frameworks are struggling to keep pace with the speed at which AI has become embedded in national security infrastructure. How the D.C. appeals court rules in May could set precedents that shape how AI companies structure their government contracts for years to come.</p><p>Anthropic's public statement struck a measured tone amid the legal turbulence: "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."</p><h2>What Happens Next</h2><p>The immediate next milestone is May 19, 2026, when oral arguments are scheduled before the U.S. Court of Appeals in Washington D.C. 
That proceeding will give the Trump administration its formal opportunity to respond to Anthropic's 96-page filing and to make its own case for why the supply chain risk designation should stand.</p><p>Meanwhile, the California preliminary injunction granted by U.S. District Judge Rita Lin remains in effect, providing Anthropic and its government contractor customers a degree of legal protection while the broader case continues. It is not yet clear how the two parallel proceedings will ultimately be reconciled, or whether the cases could eventually be consolidated or escalated further.</p><p>For the AI industry as a whole, the May 19 hearing is worth watching closely. The arguments made — and the court's reception of them — will offer early signals about how the judiciary intends to balance national security claims against First Amendment protections for AI companies that take public positions on how their technology should and should not be used.</p><p>For more tech news, visit our <a href=\"/news\">news section</a>.</p>", "excerpt": "Anthropic has told a federal appeals court in Washington D.C. that it has no technical ability to monitor, control, or shut down its Claude AI model once deployed in classified Pentagon networks — directly challenging the U.S. Department of Defense's claim that the company poses a sabotage risk. The disclosure came in a 96-page filing ahead of oral arguments scheduled for May 19, 2026, in a legal battle that began after the Pentagon designated Anthropic a supply chain risk following a contract dispute over autonomous weapons and domestic surveillance guardrails.", "keywords": ["Anthropic Pentagon kill switch", "Claude AI military deployment", "Anthropic supply chain risk", "AI autonomous weapons lawsuit", "Pentagon AI contract 2026"], "slug": "anthropic-no-kill-switch-ai-pentagon-classified-systems" } ```
