India Has Cheap Compute. But the Harder Test Comes Next

 India's AI Mission has lowered the barrier to AI development. But the harder management challenge of converting access into capability requires a fundamentally different playbook


    The global race for artificial intelligence leadership is increasingly being fought over compute infrastructure. Not models. Not talent pipelines. Not even capital. The availability of processing power, where it sits, who controls it, and at what price, is emerging as a primary variable in determining which nations, and which organizations within them, can credibly compete at the AI frontier.

    That shift is now visible in India. Under the IndiaAI Mission, the government has been building a subsidized compute layer meant to give startups, researchers, public institutions, and academic teams access to high-end graphics processing units, or GPUs, without forcing them to rely entirely on foreign cloud infrastructure.

    The aim is not simply to add hardware. It is to widen access to AI capacity on terms that support local model development.

    The scale is beginning to show. The government said more than 38,000 high-end GPUs had been made available at subsidized rates by 17 February, with another 20,000 due to be added in the coming weeks, taking the total to about 58,000.

    Officials have also pointed to a 100,000-GPU goal by the end of 2026, while Information Technology Minister Ashwini Vaishnaw has said India may ultimately need to move toward 200,000 GPUs in the coming years to support foundational models and large-scale enterprise AI at home.

    That makes this more than a procurement milestone. It is an early test of whether India can build the compute base that sovereign AI requires.

    Access model

    India’s case is not built on scale alone. It rests on access. While many countries are concentrating advanced compute in the hands of hyperscalers or a few national champions, India is trying to create a broader public-access layer, with subsidized GPU capacity available at around ₹65 an hour.

    That gives the mission a different logic from many other sovereign AI efforts. The goal is not only to secure chips, but to lower the barrier to experimentation and model building across startups, academia, and public-interest institutions.

    Launched in 2024, the IndiaAI Mission was designed to expand access to computing resources and support AI development in the public interest. The mission is backed by an outlay of more than ₹10,000 crore over five years.

    Vaishnaw has framed that design as ideological as much as operational. At the core of the mission, he said, is a “powerful belief” that technology must be democratized.

    In the lead-up to the AI Impact Summit in February, he argued that AI is driving a new industrial era across agriculture, healthcare, education, manufacturing, governance, and climate action. “India is not watching this revolution from the sidelines,” he said. “The country is actively shaping it.”

    Early demand

    That public-access model is already drawing market interest. In April, nine companies cleared the technical evaluation stage in the fourth IndiaAI Mission tender: Paradigmit Technology Services, Tata Communications, RackBank Datacenters, Netmagic IT Services, E2E Networks, Yotta Data Services, Cyfuture India, Sify Digital Services, and UrsaCompute. The process then moved toward commercial bidding, underscoring how quickly India’s sovereign compute market is taking shape.

    Narendra Sen, Founder and Chief Executive of RackBank Datacenters, said the momentum is real. “We have successfully completed technical evaluations in early April, and the mission now moves into its commercial bidding stage.”

    For startups and model builders, that shift is not abstract. Ankush Sabharwal, founder and chief executive of CoRover.ai, said the infrastructure is already enabling Indian startups, academia, and enterprises to train and deploy sovereign AI models locally.

    In his view, it cuts costs, improves data control, and accelerates India-specific development. 

    “This initiative represents a fundamental shift from compute scarcity to strategic capability,” he said. With more than 500 model proposals emerging and demand projected above 100,000 GPUs, Sabharwal added, “India is moving from consuming AI to building it.”

    That is the optimistic reading, and it is not without substance. Cheap access does change the starting equation for domestic builders. Projects such as Sarvam AI, Gnani.ai, and BharatGen stand to benefit from a local compute pool rather than relying entirely on foreign cloud providers or waiting for scarce global capacity to become available. India’s approach also differs from systems in which advanced infrastructure is effectively gated by a small number of private actors.

    Capability gap

    But access is only the opening advantage. The harder question is whether affordable compute can translate into durable capability. That depends on far more than GPU counts, including power reliability, cooling, network capacity, deployment readiness, research depth, and the ability to retain talent that can build frontier systems rather than merely use them.

    Amit Chadha, Chief Executive and Managing Director of L&T Technology Services Ltd, put that tension clearly.

    “While academia taps the world’s cheapest computing power (costs below $1 per hour per GPU) to democratize AI innovation, experts warn of limits like deployment delays, power instability, and gaps between academic and industry skills that may curb full potential.” 

    Chadha’s point is that the issue is not simply how many GPUs India can line up, but how effectively they can be turned into real AI systems.

    That is where the argument becomes less glamorous and more serious. In industrial AI programs, Chadha said, the bottleneck is not usually compute at the start. It is operationalization. Power and infrastructure reliability, the academia-industry translation gap, and talent specialization remain decisive.

    AI training clusters need stable power, advanced cooling, and high-bandwidth networking. “Even small instabilities can delay large training runs. This becomes critical as clusters scale beyond tens of thousands of GPUs,” he said.

    In that respect, India’s sovereign compute effort starts to look less like a chip story and more like a systems engineering story. 

    Chadha said the government’s push on power generation and infrastructure resilience deserves notice, but he also warned that research strength does not automatically become deployable strength.

    BharatGen, he said, is a good example of India’s capacity to produce strong foundational work. “But the problem is turning research models into deployable systems.” For that, enterprise adoption requires robust data governance, reliable integration with existing workflows, production-grade resilience, security, and compliance.

    Those capabilities often lie outside academia, though Chadha said steps are underway to bridge the gap. Innovation accelerates when research and production ecosystems interact more continuously, he argued.

    Talent bottleneck

    He pointed to efforts such as tech industry lobby Nasscom’s AI Code Sarathi initiative, which aims to train more than 150,000 AI developers across domains.

    India has no shortage of software talent, Chadha said, but large-scale AI systems need something narrower and harder to produce: people who can work across AI research, distributed computing, data engineering, and domain expertise. Access to that hybrid talent is still limited, even if industry and academic programs are now trying to build it.

    Mark Minevich, President and Founding Partner of Going Global Ventures, was blunt about the limits of India’s current advantage.

    “Subsidized access at ₹65 per hour helps startups survive, but survival is not the full definition of sovereignty,” he said. In his assessment, the talent gap may be more damaging than the hardware gap. India continues to produce strong engineers, he argued, but struggles to retain the foundational AI talent required to build frontier systems rather than fine-tune what others create. 

    High-performance computing (HPC), reinforcement learning from human feedback (RLHF), and compiler skills are still developing even at the IITs, he said. “The result is excellence in application platforms with persistent lag on base frontier models.”

    Scale challenge

    That criticism becomes sharper when India is set against other sovereign compute pushes.

    Minevich said India’s currently deployed AI compute sits at roughly 38,000 to 58,000 GPUs, with a target of 100,000 by the end of 2026. 

    “This implies roughly 20,000 GPUs per year. Compare that to France’s 170,000 commitment with a 1.2 million roadmap by 2030, or Saudi Arabia’s NEOM mega-clusters targeting 300,000-plus.”
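    Minevich's per-year figure can be checked with simple arithmetic on the numbers above. A minimal sketch, assuming roughly two years remain between the ~58,000-GPU baseline and the end-2026 target:

```python
# Rough check of the build-out rate implied by the article's figures.
# Assumption (illustrative): about two years from the ~58,000-GPU
# baseline to the end-2026 goal of 100,000.
baseline = 58_000        # GPUs after the next announced tranche
target = 100_000         # end-2026 goal
years = 2                # assumed time remaining

rate = (target - baseline) / years
print(f"~{rate:,.0f} GPUs per year")  # -> ~21,000 GPUs per year
```

A build-out of about 21,000 GPUs a year is consistent with the "roughly 20,000" Minevich cites.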

    In his telling, these are not speculative numbers but signs that other countries are treating compute as sovereign infrastructure and a budget line. France has, in fact, been stepping up support for AI data centers, while French company Sesterce has publicized a 1.2 million GPU roadmap by 2030. Saudi Arabia has also intensified its AI infrastructure drive through NEOM, DataVolt, Humain, and other state-backed projects.

    Minevich argued that India’s position remains fragile because the bottlenecks are exponential rather than linear. Compute scarcity acts as a hard ceiling, he said. 

    H100 and Blackwell clusters are not commercially stocked at scale domestically. Export controls further tighten supply, while researchers in the US and China often get faster access to cutting-edge compute than their Indian counterparts.

    Chadha, however, said, “The comparison needs to be nuanced.” India, he argued, is not trying to compete on raw GPU volume alone. Its model rests instead on frugal compute economics, distributed innovation, and a wider social base of access.

    India’s under-$1-per-GPU-hour pricing makes training affordable for startups and academia and ranks among the lowest cost points for advanced AI usage, he said.
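    As a back-of-the-envelope illustration of what that pricing means for a builder, a sketch follows; the exchange rate and cluster size are assumptions for illustration, not figures from the article:

```python
# Rough cost of a training run at the subsidized rate.
# Assumptions (illustrative): ~83 INR per USD, a 512-GPU cluster
# running continuously for 30 days.
INR_PER_USD = 83.0
rate_inr_per_gpu_hour = 65.0   # subsidized rate cited in the article

rate_usd = rate_inr_per_gpu_hour / INR_PER_USD
print(f"~${rate_usd:.2f} per GPU-hour")  # under $1, as cited

gpus, days = 512, 30
gpu_hours = gpus * days * 24
cost_inr = gpu_hours * rate_inr_per_gpu_hour
print(f"{gpu_hours:,} GPU-hours -> ₹{cost_inr:,.0f} (~${cost_inr / INR_PER_USD:,.0f})")
```

At those assumed numbers, a month-long 512-GPU run comes to roughly ₹2.4 crore, which is the order of cost that puts serious training within reach of funded startups and academic labs.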

    Rather than concentrating resources in one or two giant clusters, India is spreading compute across research institutes, startups, and national platforms. That diversified model reflects a broader AI ecosystem and helps widen access to AI development, he said.

    Even so, the next phase will demand more than procurement rounds and subsidy headlines. Chadha said reforms are needed in grid and energy infrastructure, in academia-industry talent bridges, and in a domain-first strategy that focuses as much on deployment as on compute access.

    “AI clusters are energy-intensive infrastructure projects,” he said. If India wants to sustain its sovereign AI trajectory, it will need dedicated data-center power corridors, renewable energy integration, and advanced cooling technologies, with coordination at a national rather than piecemeal level.

    Sabharwal made a related point from the user side of the ecosystem. “To truly strengthen sovereign compute, we must go beyond capacity to accessibility. This means scaling to 100K+ GPUs, tightly integrating high-quality India-centric datasets, and creating seamless platforms that democratize compute as a utility.”

    In other words, the mission’s success will depend not only on how much capacity it adds, but on how usable that capacity becomes for real builders.

    Funding pressure

    For infrastructure providers, meanwhile, the economics are changing fast. Sen said the shift to Nvidia’s Blackwell generation is making AI infrastructure far more expensive to build and finance.

    “As we transition into this ‘Blackwell era’ of AI, the government’s proactive framework provides a vital foundation for navigating a fundamentally shifted economic landscape.”

    Costs have risen sharply, he said. HBM3e memory used in advanced GPUs is much more expensive than before, while a modern B300 system carries a significantly higher upfront cost than earlier generations.

    That matters for procurement design. Sen argued that longer return-on-investment periods mean providers need greater visibility into financing. In practice, that points to longer contract durations, perhaps two to three years rather than 12 months, so suppliers can service five- to six-year debt cycles while ensuring uninterrupted access for users.
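    Sen's contracting point reduces to a mismatch of horizons: how much of the debt financing the hardware is backed by contracted revenue. A minimal sketch, with all figures assumed for illustration:

```python
# Share of a hardware debt cycle covered by a single tender contract.
# Assumptions (illustrative): a six-year debt horizon, comparing the
# current 12-month tenders with the two- to three-year contracts
# Sen suggests.
def debt_coverage(contract_months: int, debt_years: int = 6) -> float:
    """Fraction of the debt horizon backed by contracted revenue."""
    return (contract_months / 12) / debt_years

for months in (12, 24, 36):
    print(f"{months}-month contract covers "
          f"{debt_coverage(months):.0%} of a 6-year debt cycle")
```

On these assumed terms, a 12-month tender covers only about a sixth of the debt horizon, while a three-year contract covers half, which is the financing visibility Sen is arguing for.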

    He also argued that formally granting AI chip infrastructure “Infrastructure Status” would lower financing costs and better reflect the capital intensity of the sector. The issue is not simply whether India can procure advanced systems, but whether domestic providers can keep building and holding them at home rather than seeing capacity drift toward higher-paying overseas demand.

    “By continuing to adapt these frameworks, the government is not only building a robust sovereign cloud but is also securing India’s leadership in the fifth industrial revolution,” Sen said.

    Minevich, for his part, called for bigger changes. He said India needs broader reforms, including power-grid upgrades for AI clusters, stronger research budgets that can help India compete for top AI talent, shared public infrastructure for reinforcement learning and advanced model training, and better incentives for foundational AI research.

    In his view, the country also needs to invest in energy and AI infrastructure at a scale that matches its ambitions. Without that, he warned, India risks remaining dependent on foreign AI systems while doing most of its work at the application layer.

