Google signs classified AI deal with Pentagon

The Information reported the agreement on Tuesday. The deal allows the DoD to use Google’s AI models without the restrictions that led to Anthropic being banned in February. Google becomes the latest in a line of AI companies, alongside OpenAI and xAI, to provide classified AI capabilities to the US military.
Google has signed a classified AI agreement with the US Department of Defense that allows the Pentagon to use Google’s AI models “for any legitimate government purpose,” a report said Tuesday, citing a person familiar with the matter.
The deal was reported hours after more than 560 Google employees published an open letter to CEO Sundar Pichai on Monday, urging him to reject such military AI programs. Google has not publicly confirmed or commented on the deal at the time of this article’s publication.
The agreement, as revealed by The Information, was struck without the ethical restrictions that Anthropic included in its Pentagon contract, the same restrictions that led to Anthropic being designated a national security risk and placed on the Trump administration’s blacklist in February 2026.
Where Anthropic refused to drop contractual restrictions on domestic mass surveillance and on autonomous weapons operating without human oversight, Google’s agreement is described as permitting “any legitimate government purpose,” with no such carve-outs.
That language aligns Google’s deal with the Trump administration’s preferred open-ended model, rather than the revised model OpenAI negotiated, which draws red lines on domestic surveillance while remaining within the Pentagon’s contract framework.
The Pentagon has now signed classified AI agreements with the four largest AI companies in the United States: OpenAI, xAI, Google, and, until it was blacklisted, Anthropic. The sequence is remarkable.
Anthropic was removed from the supplier pool for maintaining ethical boundaries; OpenAI negotiated a middle path, staying in while keeping certain limitations; xAI signed without any obvious restrictions; and now Google has stepped in with language that appears to give the Pentagon the widest possible latitude.
The result is a fragmented AI vendor pool in which Anthropic is excluded, and in which the remaining three suppliers each occupy a different but significant position in providing AI capabilities for military operations.
The timing relative to Monday’s employee letter is the most striking feature. The 560 workers who signed Monday morning’s letter to Pichai are employees of the very company that signed the agreement they were asking him to reject.
That creates a stark and uncomfortable contrast that Pichai will be asked to address in town halls, press conferences, and, should the question arise in the Musk v. Altman litigation, in court: whether Google’s AI principles are consistent with the evidence.
Google has never confirmed the specific terms of its Pentagon AI involvement, and the “any legitimate government purpose” framing comes from one anonymous source reported by The Information.
The employee letter and the Pentagon agreement together define the fault line that all major AI companies are now navigating. On one side: the US government’s demand for AI capabilities, which is not limited to classified military use.
On the other: the AI ethics principles the companies published, partly in response to the 2018 Project Maven controversy, which commit them to avoiding AI weapons that operate without human oversight. Anthropic chose its principles and was blacklisted.
OpenAI and Google appear to have chosen the contracts. Whether that choice is temporary, commercially reversible, or permanent will depend on how the political landscape unfolds, and on whether the 560 signatories of Monday’s letter, and those who may join them, can shift the balance internally.



