Anthropic has long positioned itself as one of the most safety-focused AI labs on the market, but its leadership has now walked back a central commitment. In 2023, the company introduced its Responsible Scaling Policy (RSP), a voluntary set of rules intended to reduce catastrophic risks from advanced AI systems. The original document implied that the developer would pause AI development if it judged the technology potentially dangerous.
“We decided we wouldn’t help anyone if we simply stopped training AI models. Given the rapid progress in the industry, it seemed unreasonable to take on unilateral commitments while competitors move ahead,” said Anthropic Chief Scientist Jared Kaplan.
The third version of the RSP states that Anthropic will keep advancing its AI as long as it does not hold a significant lead over competitors. In a blog post, the startup said the political environment has shifted toward prioritizing competitiveness and economic growth, and that AI safety discussions have not received meaningful support at the federal level.
The updated policy promises greater transparency, including more detailed publication of model test results. Anthropic says it will keep pace with competitors on system controls, and would slow development only if it became the clear leader in the race while also seeing a significant catastrophic-risk signal. When Anthropic introduced the RSP in 2023, it hoped competitors would follow, but none made an equally explicit pledge to pause AI development.
Anthropic under Pentagon pressure
Anthropic and the U.S. Department of Defense have reportedly clashed over military plans to use AI for domestic surveillance and autonomous weapons development. The startup opposes that approach, but Pentagon officials said they intend to use LLMs “for all lawful scenarios” without restrictions and suggested the contract could be terminated.
Anthropic CEO Dario Amodei met with Defense Secretary Pete Hegseth to discuss the dispute. The Pentagon reportedly issued an ultimatum: the startup must accept the government’s terms by February 27.
If Anthropic refuses, the government could label the company a threat to supply chains, potentially damaging its business with other U.S. government contractors. Another option would be invoking the Defense Production Act, under which the Pentagon could compel the startup to provide its technology.
“This scenario is unprecedented and would almost certainly trigger a wave of litigation if the administration takes adverse action against Anthropic,” said Franklin Turner, a government contracts attorney at McCarter & English.
In response, company representatives said the parties “continue to engage in dialogue in good faith.”
Anthropic holds its line
Reuters reported that Anthropic does not intend to loosen its restrictions on military use. The startup’s stance was supported by Ethereum co-founder Vitalik Buterin, who said: “If they don’t back down and accept the consequences with honor, it will significantly improve my opinion of Anthropic.”
In February, it emerged that the Claude model had been used in an operation to capture Venezuelan President Nicolás Maduro. Reuters reported that during the meeting with Hegseth, Anthropic’s CEO did not express concern about the use of the company’s products in that initiative.
The Economist argues that the Pentagon’s unusually serious threat suggests an unwillingness to give up Claude for defense purposes, noting the startup’s technology may be difficult to replace for certain military tasks.