As reported by AI Wire Media, the statement was made by Foreign Affairs magazine on December 29.

“AI already has the potential to deceive key decision-makers and members of the nuclear command hierarchy, leading them to perceive an attack where none exists. In the past, only genuine dialogue and diplomacy prevented misunderstandings between nuclear powers,” the publication said.

The article refers to the 1983 incident in which Soviet officer Stanislav Petrov averted a possible nuclear strike after the Soviet early-warning system falsely reported an incoming American missile launch.

According to the magazine, modern AI technologies are capable of producing deepfakes (fabricated videos, audio recordings, and images) that could mislead the leaders of nuclear-armed states. Deepfake videos featuring Ukrainian President Volodymyr Zelensky and Russian President Vladimir Putin have already circulated publicly.

The publication stresses that in a crisis, such disinformation could provoke an erroneous decision to use nuclear weapons. Although the United States and China have agreed to keep nuclear-strike decisions under human control, the article notes that integrating AI into early-warning systems still poses serious risks, given algorithms' tendency toward hallucinations and errors.