OpenAI CEO Sam Altman has warned of a growing security risk caused by society’s increasingly casual attitude toward powerful AI agents, driven by what he describes as a “YOLO” (“you only live once”) mentality. Speaking during a Q&A session with developers, Altman said:
“My general concern is that these systems become so capable and so convenient, and the failures could be catastrophic […], but the error rates are low enough that we start to slip into this ‘you know what, YOLO, hopefully it’ll be fine’ mindset.”
Altman admitted that he himself quickly began granting AI agents full access to his computer, despite initial skepticism, simply because they usually behave reasonably. He believes many users follow the same pattern. This growing reliance, he warns, risks pushing society into a slow-moving crisis — one in which we trust complex AI systems without having built the necessary security infrastructure.
As AI models become more capable, vulnerabilities and alignment failures could go unnoticed for weeks or months. According to Altman, a comprehensive “big-picture security infrastructure” is still missing — something he even described as “a great startup idea.”
Earlier, an OpenAI developer wrote on X that he now lets AI write nearly all of his code, predicting that companies could soon lose direct oversight of their codebases. That loss of oversight could create severe security risks, even if such problems are eventually resolved.
OpenAI plans slower hiring and better writing models
Altman also announced a significant shift in company strategy: OpenAI plans to slow down hiring for the first time. The company expects to achieve far more with fewer employees and wants to avoid aggressive growth that could later force painful layoffs as AI systems take over more tasks. Critics might argue that this also provides a convenient narrative to contain soaring personnel costs.
Altman further acknowledged that GPT-5 represents a step backward compared to GPT-4.5 in editorial and creative writing, due to a strong focus on reasoning and coding in recent model development. However, he emphasized that the future lies in highly capable general-purpose models.
“Even if you want a model that’s great at coding, it should also be able to write beautifully,” Altman said, stressing that true intelligence must be versatile.