
Nvidia Puts Partnerships and OpenClaw at the Center of Its AI Strategy

The company expects cumulative purchase orders tied to its Blackwell and next-generation Vera Rubin systems to reach $1 trillion by 2027.


    At Nvidia GTC 2026, underway in San Jose, CEO Jensen Huang opened his keynote with a stark update: Nvidia now expects cumulative purchase orders tied to its Blackwell and next-generation Vera Rubin systems to reach $1 trillion by 2027, doubling its earlier $500 billion projection.

    Huang tied the revised outlook directly to surging demand across the AI ecosystem, from startups to large enterprises, while pointing to the persistent constraint of compute supply.

    “If they could just get more capacity, they could generate more tokens; their revenues would go up,” he said.

    The demand surge comes as AI usage evolves from chatbots to more complex, autonomous systems. Huang highlighted the rise of agentic applications, software that can independently execute tasks, a shift that is driving a sharp increase in token generation and inference workloads.

    That demand is already reflected in Nvidia’s growth. The company expects revenue to rise roughly 77% year-over-year this quarter to about $78 billion, extending its streak of double-digit growth.

    Against this backdrop, Nvidia detailed updates to its next-generation systems:

    1. Vera Rubin, set to launch later this year, is designed to deliver 10x better performance per watt than Grace Blackwell, as energy consumption becomes a central challenge in scaling AI infrastructure.
    2. Kyber, a prototype rack architecture, introduces vertically stacked compute trays with 144 GPUs to improve density and reduce latency.
    3. Groq integration: combining speed and efficiency.
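To put the Vera Rubin claim in concrete terms, a 10x gain in performance per watt implies roughly one tenth the energy per generated token at the same power draw. The sketch below is illustrative arithmetic only; the token rates and power figures are hypothetical placeholders, not published numbers for either system.

```python
def energy_per_token_joules(tokens_per_second: float, power_watts: float) -> float:
    """Energy consumed per generated token, in joules."""
    return power_watts / tokens_per_second

# Hypothetical baseline: a rack drawing 100 kW while generating 1M tokens/s.
baseline = energy_per_token_joules(tokens_per_second=1e6, power_watts=100_000)

# A 10x performance-per-watt improvement at the same power budget means
# 10x the tokens per second, i.e. one tenth the energy per token.
improved = energy_per_token_joules(tokens_per_second=1e7, power_watts=100_000)

assert abs(baseline / improved - 10.0) < 1e-9  # 0.1 J/token vs 0.01 J/token
```

This framing is why performance per watt, rather than raw throughput, is the metric Huang emphasized as energy becomes the binding constraint on AI infrastructure.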

    Huang also introduced the Groq 3 Language Processing Unit (LPU), Nvidia’s first chip from Groq, following a $20 billion asset acquisition.

    The chip is designed to complement GPUs rather than compete with them. A full LPX rack with 256 LPUs will sit alongside Rubin systems, with Nvidia claiming significant efficiency gains.

    “We united, unified two processors of extreme differences, one for high throughput, one for low latency,” Huang said. “It still doesn’t change the fact that we need a lot of memory, so we’re just going to add a whole bunch of Groq chips.”
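One way to read the pairing Huang describes is as a scheduling split: latency-sensitive interactive traffic goes to the low-latency LPUs, while large batch workloads go to high-throughput GPUs. The toy router below is purely an assumption about how such a split could work; it is not Nvidia's actual scheduler, and the `Request` type and `route` function are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt_tokens: int
    latency_sensitive: bool  # e.g. interactive chat vs. offline batch job

def route(req: Request) -> str:
    """Hypothetical router: interactive traffic to LPUs, batch to GPUs."""
    return "lpu" if req.latency_sensitive else "gpu"

# Interactive chat request lands on the low-latency processor;
# a large offline job lands on the high-throughput one.
assert route(Request(prompt_tokens=200, latency_sensitive=True)) == "lpu"
assert route(Request(prompt_tokens=50_000, latency_sensitive=False)) == "gpu"
```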

    Midway through the keynote, Huang turned to OpenClaw, an open-source agentic AI project created by Peter Steinberger and now supported by OpenAI.

    To operationalize such systems, Nvidia introduced NemoClaw, a reference stack designed to simplify the deployment of autonomous agents.

    “It finds OpenClaw, it downloads it. It builds you an AI agent,” Huang said.

    Nvidia also expanded its portfolio of open AI models, including:

    • Nemotron for agentic AI
    • Cosmos for physical AI and simulation
    • Isaac GR00T for robotics
    • BioNeMo for life sciences

    According to Kari Briski, vice president of generative AI software at Nvidia, the push reflects a broader shift in AI innovation:

    “Open source AI has become a global force for innovation… enabling developers worldwide to build intelligent agents and power breakthroughs across digital and physical industries.”

    Companies such as ServiceNow, CrowdStrike, and Perplexity AI are already deploying these models.

    In the automotive industry, Nvidia expanded its partnership with Uber, with plans to roll out AI-powered fleets across 28 cities by 2028. Automakers, including Nissan, BYD, and Hyundai, are also building Level 4 autonomous vehicles on Nvidia’s platform.

    Several announcements focused on challenges in scaling AI beyond pilots:

    1. IBM is working with Nvidia to integrate GPU acceleration into data analytics and document processing.
    2. Adobe is expanding its partnership to build AI-driven creative workflows and 3D digital twins.
    3. Amazon is collaborating on in-vehicle AI assistants using edge and cloud computing.

    According to the press release, IBM Chairman and CEO Arvind Krishna framed the broader enterprise challenge: “In the next wave of enterprise AI, the model layer will rely on the data, infrastructure, and orchestration layers.”

    Nvidia also highlighted its push into scientific computing and healthcare, including collaborations with Google DeepMind and new simulation tools for drug discovery.

    Huang closed with a broader framing of AI’s trajectory, positioning it as a general-purpose technology reshaping industries. “AI is giving every industry the ability to redefine what’s possible,” he said.
