Huang Shrugs Off Bubble Fears, Says AI Has Shifted Into Full-Scale Deployment

The Nvidia chief argues that AI’s rapid expansion is being driven by real-world adoption rather than speculation, with enterprises scaling inference workloads and cloud providers racing to add capacity despite export curbs and rising component costs.


[Image source: Krishna Prasad/MITSMR Middle East]

    Artificial intelligence has “reached a tipping point,” Nvidia Chief Executive Officer Jensen Huang said on Wednesday, 19 November, arguing that the technology is entering a phase where demand is shaped by real deployment rather than early-stage experimentation.

    Speaking on an earnings call, Huang pushed back on mounting questions about whether the sector is overheating. “There’s been a lot of talk about an AI bubble. From our vantage point, we see something very different,” he said, adding that customers are no longer experimenting at the margins but are “deploying AI across every layer of computing.”

    Huang told investors that three transitions in computing are unfolding at the same time and reshaping how companies build and run software. The first is the move from CPU-based general-purpose computing to accelerated computing as legacy workloads in data processing, science and engineering migrate to GPUs.

    The second is the broad adoption of AI systems capable of transforming existing applications and creating new categories of work. The third, he said, is the emergence of agentic and physical AI that pushes intelligence beyond software into real-world interaction.

    He described the convergence of these shifts as the most significant change the industry has seen since the early years of Moore’s Law. “This is the first time in the history of computing that three platform shifts are arriving together,” he said. “It is changing everything about how software is written, deployed, and scaled.”

    He said the clearest signal of this moment is the pace at which companies are taking AI from research into daily operations.

    “Inference demand is exploding,” Huang said. Enterprises are running far more inference workloads, while cloud providers and model developers are expanding clusters as fast as supply allows.

    Nvidia, he said, now sees “multi-year visibility into demand” from cloud platforms, enterprise software vendors and AI labs, all of which are building long-term roadmaps around accelerated computing.

    Against that backdrop, Nvidia reported another quarter of stellar results. Revenue for the third quarter of fiscal 2026, which ended on 26 October, rose 62% from a year ago to $57 billion. Sales rose 22% from the prior quarter. Data-center revenue, which reflects demand for training and inference systems, reached $51 billion, up 66% from last year and still constrained by supply.

    Networking revenue surged 162% to $8.2 billion as customers upgraded high-bandwidth infrastructure needed for larger models and denser clusters. Gaming revenue rose 30% to $4.3 billion, supported by continued demand and by CUDA optimizations that extend the usefulness of older GPUs even as new platforms arrive.

    To prepare for the next cycle of products, Nvidia increased inventory by 32% and supply commitments by 63% in the August–October period. The company is ramping production of its Blackwell GB300 GPUs and preparing for the Rubin platform, which Huang said will anchor the next generation of training and inference systems.

    “Rubin is progressing well,” he said. “It will drive the next wave of AI infrastructure buildouts.”

    Chief Financial Officer Colette Kress acknowledged that higher input costs remain a challenge but said the company expects to offset them through efficiency gains and product mix. “Input costs, especially for memory and advanced components, are on the rise, but we are working to hold gross margins in the mid-seventies,” she told investors.

    The company continued to flag geopolitical constraints. Nvidia is not assuming any data-center compute revenue from China in the fourth quarter because of ongoing US export restrictions and competitive shifts in the Chinese market.

    “We are not assuming any data center compute revenue from China in Q4,” Kress said. “The environment remains uncertain.”

    Even with those limits, Nvidia expects fourth-quarter revenue of about $65 billion as cloud platforms, AI developers and enterprise software companies expand their deployments.

    The company estimates that annual global spending on AI infrastructure could reach $3–4 trillion by 2030, driven by hyperscalers such as Meta, Amazon and Microsoft and by enterprise vendors including ServiceNow, SAP, CrowdStrike and Palantir.

    Nvidia said it is deepening work with AI labs such as OpenAI, Anthropic and xAI to co-develop next-generation compute capacity and is engaged in similar discussions with governments building national AI systems.

    Huang said the scale of demand across industry, research and government shows that AI’s transition into the mainstream is advancing far faster than expected. “The inflection is here,” he said. “And the rate of adoption is accelerating.”
