Jerry Tworek, former VP of Research at OpenAI, sees a fundamental flaw in today’s AI models: they cannot learn from failure. “When a model fails, you’re pretty much stuck,” Tworek said on the Unsupervised Learning podcast. There is currently no effective way for a model to update its knowledge after a mistake.
Tworek, who worked on OpenAI’s reasoning models such as o1 and o3, recently left the company to focus on solving this problem. He has since revised his AGI timeline upward. “As long as models get stuck on problems, I wouldn’t call it AGI,” he explained. AI training, in his view, remains a “fragile process,” whereas human learning is robust and self-correcting. “Intelligence always finds a way,” Tworek said.
Recent research from Apple has shown that even reasoning-focused models can suffer a “reasoning collapse” on complex tasks such as the Towers of Hanoi.
Conclusion:
Jerry Tworek’s remarks highlight a fundamental weakness of today’s AI models: despite impressive reasoning abilities, they remain fragile systems that cannot reliably learn from their own failures. As long as models break down when confronted with difficult problems and lack a mechanism to update their knowledge after mistakes, it is premature to describe them as true AGI.