A Google logo is displayed outside an office building on 12 December 2025 in San Diego, California. Photograph: Kevin Carter/Getty Images

Google reportedly signs classified AI deal with US Pentagon

Tech company is latest Silicon Valley firm to sign agreement with US military despite widespread employee opposition
Google has reportedly signed a deal with the US Pentagon to use its artificial intelligence models for classified work. The tech company joins a growing list of Silicon Valley firms inking agreements with the US military.
The agreement allows the Pentagon to use Google’s AI for “any lawful government purpose”, the Information reported, putting Google alongside OpenAI and Elon Musk’s xAI, which also have deals to supply AI models for classified use. Similar agreements, at Google and at other AI firms, have sparked significant disputes with the Pentagon and major employee pushback.
Classified networks are used to handle a wide range of sensitive work, including mission planning and weapons targeting. The Pentagon signed agreements worth up to $200m each with major AI labs in 2025, including Anthropic, OpenAI and Google. The government agency had been pushing top AI companies such as OpenAI and Anthropic to make their tools available on classified networks without the standard restrictions they apply to users.
Google’s agreement requires it to help adjust its AI safety settings and filters at the government’s request, according to the Information report.
The contract includes language stating, “the parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control”.
However, the agreement also says it does not give Google the right to control or veto lawful government operational decision-making, the report added.
Google said it supported government agencies across both classified and non-classified projects, and that it remained committed to the consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.
“We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security,” a spokesperson for Google told Reuters.
The Pentagon has said it has no interest in using AI to conduct mass surveillance of Americans or to develop lethal weapons that operate without human involvement, but wants “any lawful use” of AI to be allowed. Anthropic clashed with the Pentagon earlier in the year after the startup refused to remove guardrails against using its AI for autonomous weapons or domestic surveillance, and the department designated the Claude-maker a supply-chain risk.
Google’s agreement with the Pentagon comes despite employees’ fears that their work could be used in “inhumane or extremely harmful ways”, as a letter from Google employees put it.
On Monday, more than 600 Google workers signed an open letter to the CEO, Sundar Pichai, expressing concerns about negotiations between Google and the Pentagon.
“We feel that our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses,” they wrote. “Therefore, we ask you to refuse to make our AI systems available for classified workloads.”
Last year, Google’s owner, Alphabet, lifted a ban on its use of AI for weapons and surveillance tools. The company removed language in its ethical guidelines that promised the company would not pursue “technologies that cause or are likely to cause overall harm”. The company’s AI lead, Demis Hassabis, said in a blogpost that AI had become important for protecting “national security”.
Some Google employees expressed their concerns about the change in language on the company’s internal message board at the time. One asked: “Are we the baddies?” according to Business Insider.
The use of AI and technology in war has long been a source of anxiety for Google employees, whose previous activism on this issue has seen some success. In 2018, thousands of Google employees signed a letter protesting against their company’s involvement in a contract with the Pentagon that used its AI tools to analyze drone surveillance footage. Google chose not to renew the Project Maven contract that year after sweeping internal backlash, and the controversial surveillance analytics company Palantir swooped in to take over.