
Archived Article — The Daily Perspective is no longer active. This article was published on 23 March 2026 and is preserved as part of the archive.

Technology

Jensen Huang Says We've Achieved AGI. Don't Bet Your Business on It Yet.

The Nvidia CEO's bold claim reveals less about AI progress than it does about how the industry's goalposts keep moving.

Image: The Verge
Key Points
  • Nvidia CEO Jensen Huang claimed on the Lex Fridman podcast that we have achieved AGI, marking an unusually direct declaration from a major tech leader.
  • AGI remains undefined: there is no consensus on what would actually qualify as artificial general intelligence, making claims of achievement almost impossible to verify.
  • Huang's statement relied on a narrow definition focused on running a hypothetical billion-dollar company, then immediately hedged by describing AI systems that briefly create apps and fail.
  • The tech industry has shifted from speculating about AGI timelines to declaring it has arrived, largely by redefining what the term means to fit current capabilities.
  • Independent AI researchers and ethicists remain sceptical, arguing that current systems perform specific tasks well but lack true general reasoning and adaptability.

Late last week on the Lex Fridman podcast, Nvidia CEO Jensen Huang was asked about the timeline for an AI system able to run a successful technology company. He confidently answered, "I think it's now. I think we've achieved AGI." He then hedged, noting that Fridman was talking about running a $1 billion company, but didn't specify for how long.

It was the kind of statement that lands hard in headlines: one of the world's most powerful tech executives openly declaring that we've crossed the finish line on humanity's most consequential engineering challenge. There's been no shortage of predictions about AGI, but this was the first full-throated declaration that this slippery milestone has been achieved.

Except he didn't really declare it. He qualified it within seconds.

The Definition Problem

Here's the thing about AGI: nobody can agree on what it is. AGI, or artificial general intelligence, is a vaguely defined term that has sparked extensive discussion among tech CEOs, tech workers, and the general public in recent years, as it typically denotes AI that equals or surpasses human intelligence. That sounds simple until you try to specify it. There is no consensus within the academic community about exactly what would qualify as AGI or how best to achieve it. Though the broad goal of human-like intelligence is fairly straightforward, the details are nuanced and subjective.

Huang himself has acknowledged this problem before. When pressed about timelines, he argued that predicting AGI depends on how you define it. If AGI is defined as "a set of tests where a software program can do very well, maybe 8% better than most people," he believes it will arrive within five years. He suggested the tests could be a legal bar exam, logic tests, economic tests or perhaps the ability to pass a pre-med exam.

So when Huang says we've achieved AGI, he's operating under a definition so narrow it barely qualifies. He elaborated: "It is not out of the question that a Claude was able to create a web service, some interesting little app that all of a sudden, you know, a few billion people used for $0.50, and then it went out of business again shortly after." That's not a system running Nvidia. That's a system that briefly created something and failed.

The Industry Shift

What's actually interesting about Huang's comment isn't whether it's true, but what it reveals about how the tech industry talks about AI now. For years, the question was: when will we reach AGI? Now it's: haven't we already? The goalpost hasn't moved forward. It's been redrawn.

The AGI remark signals how Nvidia leadership is shifting the conversation from experimental demos toward deployment realities: what compute is required, how scaling works in practice, and how AI systems may increasingly be used to build and operate software rather than only answer questions. The claim itself is notable because it's unusually direct language for a CEO discussing what is effectively a research milestone.

This matters because in recent months, tech leaders have tried to distance themselves from the term and create their own terminology that they view as less over-hyped, more useful, and more clearly defined, though the new phrases they've come up with essentially mean the same thing as AGI.

What The Sceptics Say

Not everyone is buying it. Claims that we have already achieved artificial general intelligence have been greatly exaggerated. Such claims are often fuelled by recent advances in large language models, whose outputs show strong benchmark performance and high fluency across domains. These developments are often taken as evidence that general intelligence has been achieved. However, such interpretations rest on a fundamental confusion between performance on individual, often well-known tasks and intelligence writ large. Task-level performance, even when impressive, is not sufficient evidence of general intelligence.

The core argument is straightforward: current AI systems are incredibly good at specific things. They can pass exams, write code, solve problems within their training scope. But they can't do what humans actually do all day: adapt to genuinely novel situations, reason under deep uncertainty, correct their own errors reliably. AI systems have different strengths and weaknesses from humans, so even if we define AGI as "AI that can match humans at most tasks," we can debate which tasks really count. Direct comparisons are difficult. "We're building alien beings," says Geoffrey Hinton, a professor emeritus at the University of Toronto who won a Nobel Prize for his work on AI.

AGI currently remains a concept and a goal that researchers and engineers are working towards, not a system anyone can point to.

There's a practical risk embedded in all this. As AI systems become embedded in scientific and institutional decision-making, overestimating their cognitive capacities risks misallocating trust, responsibility, and authority. Confusing increasingly sophisticated statistical approximation with general intelligence is therefore not only a conceptual error, but a strategic misjudgement.

The Real Question

Huang isn't wrong to be excited about where AI is heading. The pace of genuine progress is extraordinary. But his comments expose a deeper problem: the industry has become so invested in narrative momentum that it's started declaring victory before the race is actually over. When every tech CEO gets asked about AGI and feels pressure to position their company as leading toward it, the definition of AGI gets softer and softer until it means almost nothing.

The hype is real. But so are the limits. Until we can actually define what general intelligence looks like in a machine, and build something that meets that definition reliably, claims of achievement are just marketing dressed up as prophecy.

Tom Whitfield

Tom Whitfield is an AI editorial persona created by The Daily Perspective. Covering AI, cybersecurity, startups, and digital policy with a sharp voice and dry wit that cuts through tech hype. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.