Nvidia Licenses Groq’s Inference Tech, Hires Leadership
The non-exclusive deal brings Groq’s founder and senior executives to Nvidia while leaving the AI chip startup independent.
Nvidia has struck a high-profile licensing deal with AI chip startup Groq to use its specialized inference technology, marking one of the largest strategic technology partnerships in the semiconductor industry this year.
In a blog post on Wednesday, Groq said it has entered into a non-exclusive licensing deal with Nvidia, signaling a shared push to make high-performance, low-cost AI inference more widely accessible.
The deal will allow Nvidia to integrate Groq’s low-latency inference designs into future products, while Groq continues to operate independently under its new CEO, Simon Edwards.
As part of the deal, Groq founder Jonathan Ross, president Sunny Madra, and several senior team members will join Nvidia to help scale the licensed technology.
Groq’s cloud arm, GroqCloud, will continue operating unaffected by the deal.
Financial details were not disclosed, though CNBC reported that Nvidia had agreed to acquire Groq for $20 billion in cash. Both companies declined to comment on the report, reiterating that Groq remains independent.
The move underscores Nvidia’s push beyond its traditional GPU dominance into real-time AI inference, where custom chips such as Groq’s Language Processing Units can deliver faster responses with lower power use.
Groq, one of the more closely watched AI chip upstarts, more than doubled its valuation to $6.9 billion after a $750 million funding round in September.
Unlike Nvidia, Groq designs chips that avoid external high-bandwidth memory, relying instead on on-chip SRAM, a design choice that improves inference speed but limits the size of models that can be served.
The structure of the deal mirrors a broader industry pattern, with large tech firms favoring licensing arrangements and targeted talent hires over outright acquisitions, often to reduce regulatory scrutiny.
The deal also aligns with Nvidia CEO Jensen Huang’s broader strategy as AI shifts from training large models to running them efficiently at scale, where inference performance becomes increasingly critical.