Anyone who believes that AI is in a bubble may feel vindicated by a recent CNBC interview with Nvidia CEO Jensen Huang.
The interview aired after Nvidia's largest customers (Meta, Amazon, and Google) came under pressure in the stock market for announcing plans to pour even more money into AI infrastructure on top of their already massive budgets.
In the interview, Huang repeats a familiar refrain from the AI industry: more compute automatically means more revenue. As a chip supplier, Nvidia directly benefits from this belief being accepted. And there is some truth at the core of the narrative: demand for AI compute is real, models are improving, revenues at major AI providers are indeed rising, and progress in agentic AI is genuinely impressive.
But Huang stretches these observations to the point of distortion—for example, when he claims that the current AI build-out is “the largest infrastructure upgrade in human history,” as if electrification, railroads, road networks, or the global expansion of fossil-fuel infrastructure had never existed.
Huang hallucinates: AI models supposedly no longer do
The exaggeration becomes most obvious at one specific point. Huang states verbatim: “AI is extremely useful and no longer hallucinates.” In other words, language models no longer generate false information. That is simply not true.
One could be charitable and assume that Huang himself misspoke—that he really meant “significantly less,” or “rarely enough that it’s barely noticeable.” That is probably the case.
But this distinction is anything but trivial. Hallucinations are not a bug that can be patched away; they are a byproduct of the probabilistic architectures on which language models are built. This is precisely why many organizations remain cautious and slow to adopt AI: limited reliability, the need for human oversight, and unresolved safety issues.
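To make that concrete, here is a minimal sketch (plain Python, with a toy four-word vocabulary and invented logits, not output from any real model) of the core step every current language model performs when generating text: it turns scores into a probability distribution and samples the next token from it. Nothing in that step consults a source of truth, so a fluent-but-false continuation always keeps a share of the probability mass.

```python
import math
import random

# Toy illustration only: a real model produces logits over roughly 100k tokens
# from learned weights; these numbers are invented for the example.
# Hypothetical prompt: "The capital of Australia is"
logits = {
    "Canberra":  2.1,   # correct continuation
    "Sydney":    1.8,   # fluent but false -- still gets probability mass
    "Melbourne": 1.2,   # also fluent but false
    "banana":   -4.0,   # implausible, near-zero probability
}

def softmax(scores: dict[str, float], temperature: float = 1.0) -> dict[str, float]:
    """Turn raw logits into a probability distribution over tokens."""
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample_next_token(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token; plausible-but-wrong tokens can always be drawn."""
    probs = softmax(scores, temperature)
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    for tok, p in sorted(softmax(logits).items(), key=lambda kv: -kv[1]):
        print(f"{tok:<10} {p:.2%}")
    # Over many samples, the false "Sydney" is generated a substantial
    # fraction of the time: the model optimizes plausibility, not truth.
    draws = [sample_next_token(logits) for _ in range(10_000)]
    print("share of false 'Sydney':", draws.count("Sydney") / len(draws))
```

Even greedy decoding, always picking the top-ranked token, does not escape this: the ranking itself reflects statistical association in the training data, not verified fact.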
Huang’s false claim is therefore not a minor slip. Reliability remains the biggest open challenge of generative AI. Even OpenAI has acknowledged that hallucinations will likely never disappear entirely. If current systems could reliably communicate how confident they are in their outputs, that alone would be transformative—but even that does not work consistently today. Whether people actually care about this, and what the consequences would be, is another question.
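For illustration, here is one common baseline for such a confidence signal, again with invented numbers rather than real model output: score a generated sentence by the geometric mean of the probabilities the model assigned to its own tokens. The sketch shows why this often misleads, since a familiar-sounding falsehood can score higher than a correct but unusually phrased statement.

```python
import math

def sequence_confidence(token_probs: list[float]) -> float:
    """Naive confidence score: geometric mean of per-token probabilities
    (equivalently, exp of the average log-probability)."""
    avg_logprob = sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_logprob)

# Invented per-token probabilities a model might assign to two statements
# it generated; chosen purely to illustrate the failure mode.
true_but_unusual = [0.41, 0.35, 0.52, 0.30]   # correct, awkwardly phrased
false_but_fluent = [0.93, 0.88, 0.95, 0.90]   # wrong, but a very "typical" sentence

print(f"true statement  -> confidence {sequence_confidence(true_but_unusual):.2f}")
print(f"false statement -> confidence {sequence_confidence(false_but_fluent):.2f}")
# The fluent falsehood scores far higher: token probability tracks how
# familiar the wording is to the model, not whether the claim is correct.
```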
If hallucinations were truly solved, the “human in the loop” would largely become unnecessary: legal advice could be automated at scale, AI-generated code could go straight into production, medical diagnoses could be made without physician review, and AI systems could improve themselves without compounding errors. We would be living in a very different world. It is no coincidence that a new wave of AI startups is emerging, searching for entirely new architectures because they no longer expect fundamental improvements from today’s models.
That the CEO of the most important AI chip supplier can claim on CNBC that AI no longer makes mistakes—and face no pushback—may be the clearest sign yet of how far the AI hype has drifted from technical reality.
The belief that ever-greater investment in AI infrastructure will automatically translate into reliable, error-free intelligence remains deeply flawed. While demand for compute is real and progress in agentic AI is undeniable, core limitations—especially hallucinations—persist and cannot simply be engineered away. These weaknesses continue to justify human oversight and slow enterprise adoption. Claims to the contrary blur the line between optimism and misinformation, reinforcing a hype cycle that risks misaligning expectations with reality.