Google Signs Classified Pentagon AI Deal Amid Employee Revolt

Google has signed a classified agreement with the U.S. Department of Defense allowing the Pentagon to use its AI models for "any lawful government purpose," according to a report by The Information published April 28, 2026. The deal was reported less than a day after more than 580 Google employees — including over 20 directors, senior directors, and vice presidents — sent an open letter to CEO Sundar Pichai demanding he reject precisely this kind of classified military AI work.

A Google Public Sector spokesperson told The Information that the new agreement is an amendment to Google's existing contract with the Department of Defense, not an entirely new contract. The deal places Google alongside OpenAI and Elon Musk's xAI, which also have agreements to supply AI models for classified use, according to Reuters.

What the Deal Says — and What It Doesn't

The contract includes language stating that "the parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control." However, the same agreement makes clear that it does not confer "any right to control or veto lawful Government operational decision-making," according to Reuters reporting cited across multiple outlets.

In a statement, a Google spokesperson framed the deal as a measured step: "We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security."

Critics inside the company argue the nominal guardrails are unenforceable. The employee letter, coordinated by staff at Google DeepMind, specifically warned that on air-gapped classified networks Google cannot monitor how its AI is used, making enforcement of ethical guardrails effectively impossible. According to The Irish Times, citing the Financial Times, roughly two-fifths of the signatories work in the AI division, with a similar share in the Cloud unit.

The Employee Letter: Timing, Scale, and Substance

According to The Hill, a letter signed by more than 600 employees at Google DeepMind and Cloud was sent on Monday, April 27, 2026 — less than 24 hours before the deal was reported. The Next Web, citing Bloomberg, puts the figure at more than 580 Google employees, with over 20 directors, senior directors, and vice presidents among the signatories. The two figures reflect different counting methodologies across sources; both point to a substantial cross-section of the company's AI and cloud workforce.

The letter invoked the employees' firsthand knowledge of AI systems as the basis for their objection. "As people working on AI, we know that these systems can centralize power and that they do make mistakes," the employees wrote, according to The Hill. Elsewhere, the letter stated: "We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses."

The letter concluded with a stark warning to Pichai: "Making the wrong call right now would cause irreparable damage to Google's reputation, business, and role in the world."

The Anthropic Fallout That Set the Stage

To understand how Google arrived at this deal, it is necessary to understand what happened to Anthropic. In July 2025, Anthropic had signed a $200 million contract with the Pentagon, under which its Claude model became the first frontier AI approved for use on classified networks. Negotiations broke down when the Pentagon demanded Anthropic waive contractual restrictions on the use of its AI for domestic mass surveillance and autonomous weapons, insisting on "all lawful purposes" access.

Anthropic refused. In late February 2026, Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk" — the first such designation ever applied to an American company, according to Reuters and CNN. On February 27, 2026, President Donald Trump ordered all federal agencies to immediately cease use of Anthropic's AI technology, with some agencies granted a six-month phase-out period, CNN reported.

The Pentagon's Chief Technology Officer, Emil Michael, articulated the administration's position bluntly: "We can't have a company that has a different policy preference that is baked into the model… pollute the supply chain so our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection."

Acting U.S. Attorney General Todd Blanche added on X: "Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company."

Anthropic filed two federal lawsuits challenging the designation on March 9, 2026. That same day, OpenAI struck its own deal with the Pentagon, just hours after the Trump administration's order against Anthropic, according to CNN. On March 26, 2026, U.S. District Judge Rita Lin issued a preliminary injunction blocking the Pentagon's supply chain risk designation against Anthropic. In her ruling, Judge Lin wrote: "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."

Google's deal, reported weeks after the injunction, includes the nominal restrictions that Anthropic fought to preserve — but also the veto-waiver language that renders them difficult to enforce in practice.

Google's Long Road From Maven to the Pentagon

The April 28 deal is the culmination of a years-long shift in Google's posture toward defense work. In 2018, approximately 4,000 Google employees signed a petition urging CEO Sundar Pichai to end the company's involvement in Project Maven, a Pentagon program using AI to analyze drone footage. Google chose not to renew that contract following the employee revolt and a dozen resignations, according to the Arms Control Association. The company subsequently published AI Principles that included a pledge not to build weapons or surveillance technology.

By December 2022, however, Google had won a share of the Pentagon's $9 billion Joint Warfighting Cloud Capability (JWCC) contract, alongside Amazon, Microsoft, and Oracle. And in early 2025, Google quietly removed the weapons and surveillance pledge from its AI Principles entirely, according to Al Jazeera reporting from February 5, 2025.

The Pentagon signed agreements worth up to $200 million each with major AI labs in 2025 — including Anthropic, OpenAI, and Google — according to Reuters. The Google deal reported on April 28, 2026 represents the latest and arguably most expansive step in that trajectory: open-ended access to Google's AI models on classified networks, with guardrails that Google itself cannot verify are being followed.

What Comes Next

Anthropic's legal challenge to the supply chain risk designation remains active, with the preliminary injunction in place but the broader litigation unresolved. Whether the courts' scrutiny of the administration's treatment of Anthropic has any bearing on how the Google deal is structured or enforced remains to be seen.

Inside Google, the employee letter represents the most significant internal dissent since the 2018 Project Maven protests — though the comparison is instructive. In 2018, roughly 4,000 employees signed a petition, a dozen resigned, and Google walked away from the contract. In 2026, a smaller but still substantial group of employees, including senior leaders, sent their letter and the deal was reported within hours. The dynamics of the AI industry — and Google's place in it — have shifted considerably in the intervening years.

The question of how AI companies balance commercial and national security imperatives with internal ethics commitments is unlikely to be resolved by any single contract. What the Google-Pentagon deal does make clear is where the current boundary sits: AI access for any lawful government purpose, with restrictions on paper and no mechanism for enforcement on classified networks.
