
Google Signs Classified Pentagon AI Deal Amid Staff Backlash
Google has signed a classified artificial intelligence contract with the U.S. Department of Defense, allowing the Pentagon to deploy Google's AI models for "any lawful governmental purpose," according to a report first published by The Information and confirmed by a Pentagon official speaking anonymously to The Hill in late April 2026. The deal has triggered immediate internal backlash, with more than 600 employees at Google DeepMind and Cloud signing an open letter to CEO Sundar Pichai urging the company to reject classified military AI work.
The agreement is structured as an amendment to an existing contract, according to a Google Public Sector spokesperson. Its broad access language mirrors classified AI agreements the Pentagon has previously signed with OpenAI and Elon Musk's xAI, according to New York Times reporting cited by Decrypt and Yahoo News.
What the Classified Pentagon AI Deal Actually Covers
According to reporting by The Information and multiple confirming outlets, the contract grants the Pentagon access to Google's AI models under terms that allow use for any lawful governmental purpose. The agreement also requires Google to assist in adjusting its AI safety settings and filters at the government's request. However, the contract explicitly states it does not confer "any right to control or veto lawful Government operational decision-making," according to The Information.
A Google spokesperson told Gizmodo and The Hill that the company is engaged across a range of national security applications. "We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security," the spokesperson said. The same spokesperson described the company's work as supporting "government agencies across both classified and non-classified projects, applying our expertise to areas like logistics, cybersecurity, diplomatic translation, fleet maintenance, and the defense of critical infrastructure."
In a separate statement to Reuters, reported by Engadget, a Google spokesperson said: "We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security."
Google also addressed concerns about potential misuse in its public statements. "We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight," a Google spokesperson told Gizmodo.
The deal does not exist in isolation. By December 2025, the Pentagon had already launched GenAI.mil, a platform powered by Google's Gemini chatbot, made available to all defense personnel, according to IBTimes UK. Google has deployed Gemini to 3 million Pentagon personnel and holds a share of the $9 billion Joint Warfighting Cloud Capability contract, according to The Next Web.
Internal Backlash Echoes the 2018 Project Maven Crisis
The employee protest over the classified Pentagon deal draws direct comparisons to one of the most consequential internal revolts in Silicon Valley history. In 2018, roughly 4,000 Google employees signed an internal petition opposing Project Maven, a Pentagon program that used AI to detect and analyze objects in drone video feeds. At least 12 employees resigned over the program, according to The Next Web, the Arms Control Association, and CNN. The pressure campaign ultimately led Google not to renew the Maven contract, which expired in March 2019.
In the wake of that episode, Google published a set of AI Principles that included a public pledge not to pursue weapons or surveillance technology. That commitment stood for nearly seven years before Google quietly removed it from its publicly published AI Principles on February 4, 2025, according to CNBC. The updated blog post was co-written by Demis Hassabis, CEO of Google DeepMind.
"There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," Hassabis wrote in the February 2025 post, as reported by CNBC and Fortune.
Now, history appears to be repeating — with a different outcome. More than 600 employees at Google DeepMind and Cloud have signed an open letter to CEO Sundar Pichai opposing the new classified deal, according to The Hill. The signatories warned of the risks of deploying AI systems in contexts where errors or misuse could have severe consequences. "As people working on AI, we know that these systems can centralize power and that they do make mistakes," the letter states. "The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads."
According to IBTimes UK, Google DeepMind's response to employee concerns has been to encourage staff to trust leadership. The gap between that response and the 2018 precedent — where employee pressure produced a concrete policy reversal — marks a notable shift in how Google is handling internal dissent on military AI work.
Google DeepMind Researchers Speak Out
The backlash has not been confined to anonymous petition signatures. Andreas Kirsch, a research scientist at Google DeepMind, stated publicly that he was "incredibly ashamed" after the deal was confirmed, according to IBTimes UK. Kirsch also said: "I'm speechless at Google signing a deal to use our AI models for classified tasks."
Sofia Liguori, an AI research engineer at Google DeepMind in the UK, told Bloomberg that the company's response to worker concerns had been to encourage staff to trust leadership, IBTimes UK reported.
These reactions reflect a broader tension inside Google DeepMind between the company's commercial and national security ambitions and the ethical commitments that many of its researchers say drew them to work there in the first place.
The Pentagon's Accelerating AI Push — and One Major Stumble
The Google deal sits within a rapidly expanding Pentagon effort to integrate AI into classified defense operations. Defense Secretary Pete Hegseth stated in January 2026 that AI technology should be integrated across the military, according to The Hill and Decrypt. The DoD's Chief Digital and Artificial Intelligence Office awarded contracts of up to $200 million each to Anthropic, Google, OpenAI, and xAI in July 2025, according to CNBC.
OpenAI has struck a deal to deploy its AI models on classified Pentagon networks, and Elon Musk's xAI has integrated its systems into GenAI.mil, the military's internal AI platform, according to RT and The Hill.
Not every AI company has complied without resistance. In March 2026, the Pentagon designated Anthropic a "supply chain risk," effectively barring it from classified government work, after CEO Dario Amodei refused to allow unrestricted use of its AI models, according to Decrypt and The Hill. Anthropic has since sued the Pentagon over the designation. That episode underscores the pressure major AI labs are facing to agree to expansive government access — and what can happen when they decline.
Google's classified agreement, by contrast, includes the broad "any lawful governmental purpose" language and explicitly limits Google's ability to override government operational decisions, placing it closer to the terms accepted by OpenAI and xAI than to the position Anthropic refused.
What Comes Next
The classified nature of the agreement means the full scope of how Google's AI models will be used by the Pentagon remains undisclosed. The contract's provision requiring Google to assist in adjusting AI safety settings and filters at the government's request — while denying Google any veto over lawful government decisions — is likely to remain a focal point of employee concern and external scrutiny.
The contrast with 2018 is significant. Then, roughly 4,000 employee signatures and a dozen resignations were enough to reverse Google's course on a single Pentagon program. Today, more than 600 signatures have produced a public statement of company pride. Whether that gap reflects a change in corporate culture, competitive pressure from OpenAI and xAI, or the strategic logic Hassabis articulated in February 2025 — that AI leadership demands engagement with geopolitical realities — is a question the company has not publicly answered in detail.
What is clear is that the Pentagon's classified AI infrastructure is expanding rapidly, that Google is now formally part of it, and that a meaningful segment of the company's own AI researchers believes that is the wrong decision.