Google Signs Classified AI Deal With the Pentagon, Joining OpenAI and xAI

Google has signed a classified artificial intelligence agreement with the U.S. Department of Defense, allowing the Pentagon to deploy its Gemini AI models for "any lawful government purpose" — including on classified networks. The deal, reported by The Information on April 28, 2026, marks a significant expansion of Google's existing relationship with the military and places the tech giant alongside OpenAI and Elon Musk's xAI as the dominant commercial AI suppliers to the U.S. defense establishment.

The agreement is an amendment to an earlier contract that permitted Gemini to be used only on unclassified government data. Its extension into classified environments represents one of the most consequential steps yet in the Pentagon's accelerating push to embed frontier AI models across its operations — a drive that has already produced a high-profile legal battle with Anthropic and reshuffled the commercial AI landscape in Washington.

What the Google–Pentagon Deal Actually Covers

According to The Information, the classified agreement grants the Department of Defense access to Google's Gemini models on Google's own infrastructure, with API access extended to classified networks. The contract language mirrors that of similar agreements struck with OpenAI and xAI: the AI can be used for "any lawful government purpose," giving the military broad discretion over its application.

The deal does include some limiting language. The contract states the AI system "is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control." However, unlike the hard contractual prohibitions that Anthropic had insisted upon before its own Pentagon relationship collapsed, these terms are framed as guidance rather than enforceable restrictions — and the agreement requires Google to help adjust its AI safety settings and filters at the government's request.

On the same day the classified deal was reported, the Pentagon also rolled out Google's latest Gemini models on its unclassified generative AI platform, GenAI.mil — a platform that had originally launched in December 2025 under an earlier Google contract.

A Google spokesperson told Reuters: "We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security."

Inside the Pentagon's Broader AI Push — and the Anthropic Fallout

Google's classified deal is the latest chapter in an aggressive campaign by the Department of Defense, under Defense Secretary Pete Hegseth, to secure unrestricted access to the most powerful commercial AI models available. The Pentagon has pursued a stated goal of becoming an "AI-first warfighting force," a posture that has intensified as geopolitical tensions — including the escalating conflict with Iran — have made AI-enabled military capabilities a higher priority, according to MIT Technology Review.

The clearest illustration of that agenda was the Pentagon's dispute with Anthropic, maker of the Claude AI model. Anthropic had signed a $200 million contract with the DoD in July 2025. Negotiations broke down when the Pentagon demanded the removal of contractual prohibitions preventing Claude from being used for fully autonomous weapons or domestic mass surveillance. Anthropic's CEO publicly refused, saying the company could not "in good conscience accede to their request."

The fallout was swift and severe. In late February 2026, President Donald Trump directed federal agencies to cease using Anthropic's products. Hegseth designated the company a national security supply chain risk — a label typically applied to foreign adversaries — and posted on X that "Anthropic delivered a master class in arrogance and betrayal." Anthropic subsequently filed lawsuits in two federal courts. It won a preliminary injunction in San Francisco federal court, but lost a stay request before the D.C. Circuit appeals court in April 2026.

Microsoft weighed in on the supply chain risk designation in a court filing in San Francisco federal court, arguing that "the use of a supply chain risk designation to address a contract dispute may bring severe economic effects that are not in the public interest."

OpenAI moved to fill the gap Anthropic's exit created. On February 28, 2026, the company announced it had reached a deal with the Pentagon permitting U.S. military use of its technologies in classified settings, with CEO Sam Altman stating the DoD agreed the technology would not be used for "domestic mass surveillance" or "autonomous weapon systems." xAI, the AI company founded by Elon Musk, had its classified access to Pentagon networks confirmed in February 2026 under a contract originally signed in July 2025. In March 2026, more than 30 employees from OpenAI and Google DeepMind filed a statement supporting Anthropic's lawsuit against the Pentagon.

Employee Resistance and the Ghost of Project Maven

Google's decision to proceed with the classified Pentagon agreement did not go uncontested internally. More than 600 Google employees signed an open letter urging CEO Sundar Pichai to reject the deal, arguing that "refusing classified work is the only way to ensure Google's AI isn't misused." The letter was published one day before the deal was publicly reported.

The episode draws an unmistakable parallel to 2018, when a wave of employee protest over Project Maven — a Pentagon initiative that used Google AI to analyze drone surveillance footage — forced the company to withdraw from the contract. That decision was, for years, held up as evidence that sustained internal pressure could constrain the commercial AI industry's military ambitions.

This time, Google proceeded. The classified Gemini deal suggests that, at least for now, the commercial imperatives and national security arguments supporting Pentagon partnerships have outweighed the internal opposition that once caused the company to step back. The Pentagon's consolidation of AI contracts around a small number of major providers — and its willingness to use tools like supply chain risk designations to discipline those who resist — has changed the calculus for companies weighing the risks of either participation or refusal.

Why This Matters Beyond the Beltway

The Google–Pentagon classified AI deal reflects a structural shift in how frontier AI models are being deployed and governed. For years, the leading AI companies maintained at least nominal distance from classified military applications, citing safety concerns and the difficulty of maintaining meaningful oversight once AI systems are embedded in classified environments. That distance is now collapsing.

The requirement that Google help adjust its AI safety settings and filters at the government's request is particularly significant. It places the guardrails that AI companies have built into their models — restrictions on harmful outputs, bias mitigations, and content filters — under potential government control within classified deployments. That is precisely the kind of arrangement Anthropic refused, and precisely what the Pentagon's "any lawful government purpose" contract language is designed to enable.

The Pentagon's plan to have AI companies train on classified data, as reported by MIT Technology Review, would deepen these entanglements further. If commercial models are fine-tuned on classified military datasets, the boundary between consumer AI products and military AI systems becomes even harder to draw.

For the technology industry broadly, the sequence of events — Anthropic's refusal, its punishment via supply chain risk designation, and the rapid signing of deals with Google, OpenAI, and xAI — sends a clear signal about the costs of non-compliance with Pentagon demands. Microsoft's court filing suggesting the designation could produce "severe economic effects that are not in the public interest" hints at the anxiety this precedent has generated across the sector.

What Comes Next

Anthropic's legal battles are ongoing. The preliminary injunction it won in San Francisco federal court blocks elements of the supply chain risk designation, though the D.C. Circuit denied its request for a stay in April 2026. The outcome of those proceedings could determine how far the Pentagon can go in using procurement leverage to compel AI companies to remove safety guardrails.

For Google, the immediate question is how the classified Gemini deployment will be implemented and overseen. The contract's language permitting the government to adjust AI safety settings raises questions about what those adjustments will look like in practice and whether any independent accountability mechanism exists for classified use. Google has not publicly detailed the oversight structure.

The Pentagon's GenAI.mil platform — now carrying Google's Gemini models on the unclassified side — is likely to expand. The DoD's stated ambition to become an "AI-first warfighting force" suggests that the current round of contracts is a foundation, not a ceiling. Whether additional AI companies seek classified Pentagon contracts, and on what terms, will be shaped in part by how Anthropic's legal challenge ultimately resolves.


