The mission reportedly included bombing several facilities in Caracas.

Using the model for such purposes would contradict Anthropic’s public policy. The company’s rules explicitly prohibit applying AI to violence, weapons development, or surveillance operations.

“We cannot comment on whether Claude or any other model was used in any specific operation—classified or otherwise. Any use of LLMs—whether in the private sector or in government—must comply with our policy governing how the model can be deployed. We work closely with partners to help ensure adherence to these rules,” an Anthropic spokesperson said.

Claude’s integration into Defense Department structures reportedly became possible through Anthropic’s partnership with Palantir Technologies. Palantir’s software is widely used by the military and federal law enforcement agencies.

According to the WSJ, after the raid an Anthropic employee asked a Palantir colleague what specific role the model had played in the operation to capture Maduro. A spokesperson for the startup said the company does not discuss the use of its models in specific missions “with any partners, including Palantir,” limiting conversations to technical matters.

“Anthropic is committed to using advanced AI in support of U.S. national security,” the company representative added.

Anthropic vs. the Pentagon?

Pentagon spokesperson Sean Parnell said the department is reviewing its relationship with the AI lab.

“Our country needs partners willing to help warfighters win any war,” he said.

In July 2025, the U.S. Department of Defense signed contracts worth up to $200 million with Anthropic, Google, OpenAI, and xAI to develop AI solutions for security. The department’s Chief Digital and AI Office planned to use their work to build agentic security systems.

However, in January 2026, WSJ reported that Anthropic risked losing its agreement with the Pentagon. The dispute stemmed from the startup’s strict ethics policy. Its rules prohibit using Claude for mass surveillance and fully autonomous lethal operations, limiting its use by agencies such as ICE and the FBI.

Officials’ dissatisfaction reportedly grew amid the integration of xAI’s Grok chatbot into the Pentagon’s network. Defense Secretary Pete Hegseth, commenting on the partnership with xAI, emphasized that the department “will not use models that don’t allow us to fight wars.”

Pressure on developers

Axios, citing sources, wrote that the Pentagon is pressuring four major AI companies to allow the U.S. military to use their technologies for “all lawful purposes.” This reportedly includes weapons development, intelligence gathering, and combat operations.

Anthropic refuses to lift restrictions related to surveillance of U.S. citizens and the creation of fully autonomous weapons. Talks have stalled, and quickly replacing Claude is difficult due to the model’s technological advantage in certain specialized government tasks.

In addition to Anthropic’s chatbot, the Pentagon uses OpenAI’s ChatGPT, Google’s Gemini, and xAI’s Grok for unclassified tasks. All three reportedly agreed to loosen restrictions that apply to ordinary users.

Discussions are now underway about moving LLMs into classified environments and using them “for all lawful purposes.” One of the three companies has reportedly already agreed, while the other two are said to be showing more flexibility than Anthropic.

Militarization of AI

The U.S. is not the only country actively deploying AI in the defense sector.

China

In June 2024, China reportedly introduced an AI “commander” for large-scale military simulations involving all branches of the PLA. The virtual strategist is described as having broad authority, learning quickly, and improving tactics during digital exercises.

In November 2024, media reported that Chinese researchers had adapted Meta’s Llama 13B model to create a tool called ChatBIT. The system was optimized for collecting and analyzing intelligence data and for supporting operational decision-making.

India

New Delhi has also positioned AI as a driver of national security. The government has developed national strategies and programs, created dedicated institutes and bodies for AI deployment, and launched projects applying the technology across multiple sectors.

United Kingdom

London has designated AI as a priority area. In the “Defence AI Strategy” (2022), the UK Ministry of Defence describes AI as a key component of future armed forces. In the “Strategic Defence Review” (2025), the technology is labeled a foundational element of modern warfare.

Whereas AI was once seen as an auxiliary tool in military contexts, the British Armed Forces now plan a transformation into “technologically integrated forces,” in which AI systems are to be used at every level, from staff analytics to the battlefield.