
Pentagon AI chief confirms DOD's expanded use of Google, says reliance on one model 'never a good thing'
```json { "title": "Pentagon AI Chief Confirms Google Gemini Expansion After Anthropic Blacklist", "metaDescription": "The Pentagon's AI chief confirms expanded use of Google Gemini for classified work, warns against overreliance on one vendor after Anthropic's DOD blacklisting.", "content": "<h2>Pentagon Confirms Google Gemini for Classified Work as Anthropic Legal Battle Drags On</h2><p>The Department of Defense is expanding its use of Google's Gemini AI model for classified work, Pentagon chief digital and artificial intelligence officer Cameron Stanley confirmed on April 28, 2026, approximately two months after the DOD designated Anthropic a supply chain risk and effectively barred it from military contracts. The move represents a significant reshuffling of the government's AI vendor landscape — one that has drawn legal challenges, internal dissent at Google, and conflicting rulings from two separate courts.</p><p>Speaking publicly about the DOD's reliance on Google Gemini, Stanley offered a candid warning even as he confirmed the expanded partnership. \"Overreliance on one vendor is never a good thing,\" he said, noting that the Pentagon is also working with OpenAI and other vendors as it modernizes wartime capabilities.</p><h2>Google Gemini Fills the Gap Left by Anthropic's DOD Blacklisting</h2><p>The DOD officially designated Anthropic a supply chain risk in early March 2026, a label that has historically been reserved for companies linked to foreign adversaries. Anthropic became the first American company to be publicly named under this designation. The move came after the Pentagon demanded Anthropic make its Claude model available for what it described as \"all lawful purposes,\" including potential use in fully autonomous weapons and domestic mass surveillance. Anthropic refused, and Defense Secretary Pete Hegseth subsequently declared on X that any contractor or supplier doing business with the U.S. 
military was barred from commercial activity with Anthropic.</p><p>The timing was significant. Anthropic had signed a $200 million contract with the Pentagon in July 2025, and Claude had become the first major AI model deployed in the government's classified networks. That relationship unraveled rapidly, with the blacklisting triggering widespread disruption across government agencies that had come to depend on the technology.</p><p>Into that gap stepped Google. According to a person familiar with the matter who asked not to be named because the specifics of the arrangement are not public, the DOD has tapped Google's Gemini for classified work. The Information had earlier reported that Google signed a deal with the DOD for classified work, citing a separate person familiar with the matter. According to TechCrunch, Google's new contract agreement with the DOD includes language stating it does not intend for its AI to be used for domestic mass surveillance or autonomous weapons — restrictions that closely mirror the position Anthropic was punished for taking.</p><p>As part of an expanding government AI footprint, Google is also providing the Pentagon with access to a tool called Agent Designer, a low- and no-code platform that allows users to build and deploy AI agents using natural language, according to Built In.</p><p>Stanley highlighted the operational impact of the transition. \"There's a lot of different things that are saving thousands of man hours, literally thousands of man hours on a weekly basis,\" he said of the DOD's use of Gemini.</p><h2>Internal Dissent at Google, Conflicting Court Rulings on Anthropic</h2><p>The Pentagon's pivot to Google has not gone unchallenged inside the company itself. More than 700 Google employees signed a letter sent to CEO Sundar Pichai calling for the company to reject classified workloads from the Pentagon, according to CNBC. 
The Hill reported the letter was signed by more than 600 employees specifically at Google DeepMind and Cloud. In the letter, employees wrote: \"We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses.\"</p><p>The internal pushback echoes employee protests that have periodically surfaced at major tech companies over government defense contracts, and it arrives at a moment when the boundaries of acceptable AI use in military contexts are being actively contested in public discourse, in corporate boardrooms, and in federal courts.</p><p>On the legal front, Anthropic's fight against the blacklisting has produced a split outcome. A federal appeals court in Washington, D.C., denied Anthropic's request to temporarily block the DOD's blacklisting while a lawsuit challenging the sanction plays out, leaving the company excluded from DOD contracts. However, in a separate but related case, a judge in San Francisco granted Anthropic a preliminary injunction that bars the Trump administration from enforcing the ban on Claude's use across other government agencies. As a result of these split decisions, Anthropic is excluded from DOD contracts but is able to continue working with other federal agencies during the litigation.</p><p>U.S. District Judge Rita Lin, who issued the San Francisco injunction on March 26, 2026, was pointed in her assessment of the government's justification. \"The record strongly suggests that the reasons given for designating Anthropic a supply chain risk were pretextual and that [the government's] real motive was unlawful retaliation,\" she wrote. 
Anthropic responded with a statement: \"While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.\"</p><p>The Trump administration, for its part, has defended its actions in strong terms. Acting U.S. Attorney General Todd Blanche stated: \"Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company.\"</p><h2>Anthropic's Mythos Model Adds New Complexity to the Standoff</h2><p>A further wrinkle emerged in April when Axios reported that the National Security Agency is using Anthropic's most powerful model, Mythos Preview, despite top DOD officials insisting the company is a supply chain risk. Anthropic had restricted access to Mythos to around 40 organizations, citing the model's advanced offensive cyber capabilities as too dangerous for wider release.</p><p>Cameron Stanley acknowledged the significance of the Mythos rollout, describing it as a wake-up call for the DOD and noting the powerful model was made available to a limited number of companies due in part to its advanced cyber capabilities and the potential risks they posed.</p><p>Meanwhile, Anthropic CEO Dario Amodei met with White House Chief of Staff Susie Wiles in April to discuss the use of Mythos within government and Anthropic's wider plans and security practices, according to CNN. President Donald Trump said last week that \"it's possible\" there will be a deal allowing Anthropic's models to be used within the DOD — a signal that the door to reconciliation may not be fully closed.</p><h2>Why This Dispute Matters Beyond the Pentagon</h2><p>The standoff between the DOD and Anthropic is widely seen as a defining moment for how AI companies navigate government relationships — particularly when government demands conflict with a company's stated ethical commitments. 
The case raises pointed questions about the extent to which AI developers can or should place conditions on how their models are used in classified and military contexts.</p><p>Michael Horowitz, senior fellow for technology and innovation at the Council on Foreign Relations, offered a blunt reading of the situation: \"This feels to me like a dispute that is about politics and personalities. It's masquerading as a policy dispute.\"</p><p>Jacquelyn Schneider, Hargrove Hoover fellow at Stanford University's Hoover Institution, framed the practical stakes in direct terms: \"You're not going to walk away from technologies that are deeply embedded in your wartime processes right before you go to war.\"</p><p>The comment underscores a tension that runs throughout this episode. The DOD has invested heavily in AI-powered warfighting capabilities, and the abrupt disruption of its relationship with Anthropic — its primary classified AI partner — created an urgent need to find alternatives. The speed with which Google and OpenAI stepped into the void illustrates both the commercial demand and the geopolitical pressure shaping the federal AI market in 2026.</p><p>The fact that Google's new contract includes language restricting autonomous weapons and domestic mass surveillance use — essentially the same safeguards Anthropic was penalized for demanding — adds an ironic dimension to the episode that has not gone unnoticed in policy circles.</p><h2>What Comes Next</h2><p>The immediate path forward involves parallel tracks of litigation, diplomacy, and vendor consolidation. Anthropic's lawsuit against the DOD's supply chain risk designation continues in federal court, with Judge Lin's preliminary injunction providing partial protection while the case proceeds. 
A potential deal that would allow Anthropic models back into DOD systems has been floated by President Trump, but no agreement has been announced.</p><p>Google's expanded role at the Pentagon will face continued scrutiny — from its own employees, from policymakers, and from competitors. The DOD, for its part, appears committed to a multi-vendor strategy. Cameron Stanley's public warning against overreliance on any single AI provider suggests the department is aware of the risks that come with rapid, concentrated transitions in critical technology infrastructure.</p><p>The broader AI industry is watching closely. How the courts ultimately rule on the Anthropic blacklisting — and whether the \"supply chain risk\" designation survives legal challenge — will have significant implications for how AI companies structure their government contracts and what conditions, if any, they are permitted to attach to the use of their models.</p><p>For now, Google Gemini is the DOD's primary classified AI partner. Whether that arrangement proves durable, and what it means for the future of AI governance in national security contexts, remains an open question.</p><p>For more tech news, visit our <a href=\"/news\">news section</a>.</p>", "excerpt": "Pentagon AI chief Cameron Stanley confirmed on April 28, 2026, that the Department of Defense has expanded its use of Google's Gemini AI model for classified work, approximately two months after the DOD blacklisted Anthropic as a supply chain risk. Stanley warned that 'overreliance on one vendor is never a good thing,' even as the Pentagon navigates conflicting court rulings and ongoing legal battles stemming from Anthropic's removal. 
More than 700 Google employees have signed a letter opposing the company's participation in classified Pentagon AI work.", "keywords": ["Pentagon AI", "Google Gemini DOD", "Anthropic blacklist", "Department of Defense AI", "Cameron Stanley"], "slug": "pentagon-ai-chief-confirms-google-gemini-expansion-after-anthropic-blacklist" } ```