
Google has entered into an agreement with the U.S. Department of Defense to provide its artificial intelligence models for use on classified systems.
Summary
- Google signed a Pentagon deal to deploy AI models on classified networks for “any lawful government purpose,” joining OpenAI and xAI in defense contracts.
- The agreement includes safeguards against domestic mass surveillance and autonomous weapons lacking human oversight, but gives the Pentagon final authority over operational use.
- Tensions persist as Anthropic resists loosening safeguards; despite the company being labeled a supply-chain risk, agencies such as the NSA are still using its advanced AI tools.
According to The Information, citing a person familiar with the matter, the Pentagon can deploy Google’s AI tools for “any lawful government purpose” under the terms of the deal. The arrangement places Google alongside OpenAI and xAI, both of which have also secured contracts to supply AI models for classified use.
Such classified networks support highly sensitive operations, including mission planning and weapons targeting, where access to advanced AI systems is increasingly seen as critical.
The agreement is part of a wider push by the Pentagon, which in 2025 signed contracts worth up to $200 million each with leading AI developers, including Anthropic, OpenAI, and Google. Earlier reporting from Reuters indicated that defense officials had been urging AI firms to make their systems available on classified networks without the usual user-facing restrictions.
AI safeguards and usage restrictions
As part of the contract, Google is expected to assist in modifying its AI safety filters at the government’s request, the report said.
The agreement includes explicit safeguards stating that “the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control.”
At the same time, the terms clarify that Google does not have the authority to “control or veto lawful government operational decision-making,” leaving final use decisions in the hands of defense officials.
Google said it continues to support government agencies across both classified and unclassified work. A company spokesperson added that it remains aligned with the view that AI should not be used for domestic mass surveillance or autonomous weapons without human oversight.
Friction over AI usage boundaries
The Pentagon has maintained that it does not intend to deploy AI for mass surveillance of Americans or for fully autonomous weapons, while still insisting that “any lawful use” of AI should remain available to the government.
That position has created friction with some AI providers, most notably Anthropic. The company resisted earlier Pentagon demands to remove safeguards from its Claude models that restrict use in autonomous weapons and domestic surveillance.
The dispute escalated to the point where the Defense Department labeled Anthropic a “supply chain risk,” even as interest in its technology persisted within government agencies.
Additional reporting suggests that internal demand has complicated the Pentagon’s stance. The National Security Agency has reportedly secured access to Anthropic’s advanced “Mythos Preview” model despite the supply-chain-risk designation. The model, noted for its strong cyber capabilities, is reportedly being used by select organizations to identify vulnerabilities in digital systems.
The situation has hardened into a broader standoff between AI developers and defense officials over how far usage rights should extend, particularly the scope of “any lawful government purpose” and the limits of safety guardrails in military environments.
