Pentagon and Anthropic clash over AI use in autonomous weapons

The U.S. Department of Defense wants to deploy AI technology without restrictions, while Anthropic is demanding safeguards against autonomous weapons control and domestic surveillance. A contract worth up to $200 million is now in jeopardy.

The U.S. Department of Defense and AI company Anthropic are locked in a conflict over the military use of AI technology, Reuters reports, citing several people familiar with the matter. At the core of the dispute are safeguards: Anthropic is seeking guarantees that its AI tools will not be used to control autonomous weapons without meaningful human oversight or to monitor American citizens.

The Pentagon—renamed the “Department of War” under the Trump administration—has rejected these restrictions. According to a January 9 memo outlining its AI strategy, the department insists on being able to use commercial AI technologies independently of developers’ usage policies, as long as U.S. laws are followed. Negotiations over a contract valued at up to $200 million are currently stalled.

Anthropic insists on excluding two specific use cases: mass surveillance of U.S. citizens and fully autonomous weapons. A senior government official told Axios that negotiating individual use cases with Anthropic is impractical. According to the official, OpenAI, Google, and xAI have been more cooperative.

Anthropic walks a fine line

Anthropic CEO Dario Amodei wrote in a blog post this week that AI should support national defense “in every way except those that would make us more like our autocratic adversaries.” He also criticized the fatal shootings of U.S. citizens during protests against immigration measures in Minneapolis as “horrific.” According to Reuters, these incidents have heightened concerns in parts of Silicon Valley about the government’s potential use of tech tools for violence.

Anthropic has contracts with Palantir, which in turn works directly with U.S. Immigration and Customs Enforcement (ICE), the agency involved in the incidents. At the same time, the Pentagon may still depend on Anthropic’s cooperation: the company’s models are trained to refuse potentially harmful actions, so adapting them for military use would likely require Anthropic staff.

The dispute comes at a sensitive moment for Anthropic, which is preparing for an initial public offering and, according to Reuters, has invested significant resources in national security–related business.