The U.S. Department of Defense and AI company Anthropic are locked in a dispute over the military use of AI technology, Reuters reports, citing multiple sources familiar with the matter. At the heart of the conflict are safeguards: Anthropic is seeking guarantees that its AI tools will not be used to control autonomous weapons without meaningful human oversight or to monitor U.S. citizens.
The Pentagon — renamed the “Department of War” under the Trump administration — has rejected these limitations. According to a January 9 memo outlining its AI strategy, the department insists on the right to use commercial AI technologies regardless of vendors’ usage policies, as long as U.S. laws are followed. As a result, negotiations over a contract worth up to $200 million are currently stalled.
Anthropic walks a tightrope
Anthropic CEO Dario Amodei wrote in a blog post this week that AI should support national defense “in every way except those that would make us resemble our autocratic adversaries.” He also criticized the fatal shootings of U.S. citizens during protests against immigration enforcement in Minneapolis, calling them “horrifying.” According to Reuters, these incidents have deepened concern among some Silicon Valley leaders that the government could use their technologies in acts of violence.
Anthropic has contracts with Palantir, which in turn works directly with U.S. Immigration and Customs Enforcement (ICE), the agency involved in the controversial incidents.
At the same time, the Pentagon would likely need Anthropic’s cooperation: because the company’s models are trained to refuse harmful requests, Anthropic engineers would have to customize them for defense use. The dispute comes at a sensitive moment for Anthropic, which is preparing for an initial public offering (IPO) and has reportedly committed substantial resources to national security contracts, according to Reuters.
Conclusion
The standoff between the Pentagon and Anthropic highlights a growing tension at the heart of military AI adoption: balancing strategic advantage with ethical safeguards. As governments push for unrestricted access to advanced AI capabilities, tech companies face mounting pressure to define clear red lines. The outcome of this dispute could set an important precedent for how artificial intelligence is governed in national security contexts worldwide.