Dario Amodei outlines the dangers of powerful AI systems in a new essay. His core demand: democracies should use AI only in ways that do not turn them into what they are fighting against.
Anthropic CEO Dario Amodei has published a wide-ranging essay analyzing the risks of advanced AI systems. Titled “The Adolescence of Technology,” it describes what Amodei calls a “coming-of-age test for humanity.” The essay is intended as a companion to his earlier piece “Machines of Loving Grace,” published in October 2024: where that text focused on the positive potential of powerful AI, the new one concentrates on its risks.
His central thesis can be summarized in one sentence: democracies should use AI for national defense in every way except those that would make them resemble their autocratic adversaries.
Four tools democracies should not use
Amodei identifies four technologies that autocracies could use to oppress their citizens: fully autonomous weapon swarms, AI-powered mass surveillance, long-term personalized propaganda, and strategic AI advisors, a kind of “virtual Bismarck.”
For two of these applications, he draws an absolute line: AI-driven domestic mass surveillance and large-scale propaganda targeting a country’s own population are entirely illegitimate. He notes that mass surveillance is already illegal in the United States under the Fourth Amendment, but warns that rapid advances in AI could create situations for which existing legal frameworks are not designed. Amodei therefore advocates new legislation to protect civil liberties, or even a constitutional amendment.
Externally, against autocratic rivals, he considers the same tools legitimate. He explicitly supports democracies using their intelligence services to “disrupt and weaken autocracies from within.” Democratic governments, he argues, could deploy superior AI capabilities to “win the information war” and provide information channels that autocratic regimes cannot technically block.
When it comes to fully autonomous weapons and strategic AI decision-making, Amodei sees a more complex picture, acknowledging that they may have legitimate defensive uses. Here, he urges extreme caution. His main concern is that too few “fingers on the button” could allow a handful of people to operate massive drone armies without broader human involvement or oversight.
Are we becoming the villains?
Critics such as AI researcher Yann LeCun accuse Anthropic of deliberately amplifying worst-case scenarios to stoke fear and push for regulations that would disadvantage open AI models, thereby limiting competition. David Sacks, AI adviser to Donald Trump, has similarly claimed that Anthropic is engaging in fear-mongering to influence regulators.
Amodei rejects these accusations, emphasizing close cooperation with the US government. After recent criticism from the White House, he even publicly praised President Donald Trump’s AI policy, portraying Anthropic’s stance as politically neutral and describing the company as a “policy actor” that presents expert positions to all political camps.
At the same time, Anthropic holds a contract worth up to $200 million with the US Department of Defense to develop so-called frontier AI for national security. Its language model Claude is also deployed in classified networks via partners such as Palantir and the Lawrence Livermore National Laboratory. Palantir, in turn, is used by US Immigration and Customs Enforcement (ICE) to track migrants in the United States.
None of this formally contradicts Amodei’s stated red lines, and his relationship with the Trump administration is ambivalent at the very least. Yet in his effort to protect democracy from external autocratic threats and dangerous AI, his products could end up strengthening authoritarian tendencies at home.