How India Could Leapfrog to AI-Native Factories
At the India AI Impact Summit, TCS technology chief Naresh Mehta said robotics should augment labor, not replace it, even as costs and legal risks persist.
India could build “AI-native” factories by embedding robotics and intelligent systems from the ground up, rather than layering them onto legacy infrastructure, Tata Consultancy Services Ltd’s (TCS) manufacturing technology chief said, positioning physical AI as a way to augment labor rather than replace it.
Speaking at a robotics-focused masterclass at the India AI Impact Summit in New Delhi, Naresh Mehta, global chief technology and innovation officer for the manufacturing business group at TCS, said Western firms are “bringing AI to basically do transformation with legacy systems” and optimizing existing infrastructure. India, by contrast, has “a very unique opportunity and a significant advantage now to leapfrog… into AI-native factories and AI-native industrial polygons.”
India’s relative lack of deeply embedded automation could, in this framing, become an advantage rather than a constraint.
Augmenting, Not Replacing
Automation in Europe and the US is frequently justified by labor shortages. India’s equation is different.
“It’s more about amplifying and augmenting the workforce to do more with less,” Mehta said.
He described this as “physical AI,” or digital intelligence translated into industrial action. The applications range from humanoids handling basic sorting tasks to quadrupeds patrolling hazardous environments, drones paired with ground systems, and autonomous mobile robots navigating warehouse floors.
In one processed-food warehouse, safety inspections for oil spills and gas leaks were conducted manually every four hours, with each round lasting more than 90 minutes. A quadruped equipped with lidar sensors and on-board GPUs now performs those checks. The deployment reduced safety incidents by about 90% and operational downtime by roughly 30%, Mehta said, and has since been replicated across multiple global sites.
The Engineering Beneath the Spectacle
The humanoid robot demonstrated on stage relied on three pipelines: vision, a language-model “brain” and a gesture engine. Each spoken response maps to one of roughly 30 pre-built gestures. If an arm movement risks mechanical conflict, the system interrupts the sequence.
“It is not as easy as it appears,” Mehta said.
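The architecture Mehta described — a language-model response mapped to one of a fixed library of pre-built gestures, with the sequence interrupted when an arm movement risks mechanical conflict — might be sketched roughly as follows. All names, the toy keyword mapping, and the conflict check are illustrative assumptions, not TCS’s implementation:

```python
# Illustrative sketch of the three-pipeline humanoid loop described on stage:
# vision, a language-model "brain", and a gesture engine with a safety interrupt.
# Gestures, mapping rules, and the conflict check are hypothetical.

GESTURE_LIBRARY = {                 # stand-in for the ~30 pre-built gestures
    "greeting": ["raise_right_arm", "wave"],
    "explaining": ["open_palms", "nod"],
    "idle": ["rest_arms"],
}

def pick_gesture(response_text: str) -> str:
    """Map a spoken response to one pre-built gesture (toy keyword rules)."""
    text = response_text.lower()
    if "hello" in text or "welcome" in text:
        return "greeting"
    if "because" in text or "means" in text:
        return "explaining"
    return "idle"

def arm_conflict(move: str, occupied: set) -> bool:
    """Stand-in for a mechanical-conflict check on an arm movement."""
    return move in occupied

def play_gesture(name: str, occupied: set) -> list:
    """Execute a gesture's movements, interrupting the sequence on conflict."""
    executed = []
    for move in GESTURE_LIBRARY[name]:
        if arm_conflict(move, occupied):
            break                    # interrupt, as Mehta described
        executed.append(move)
    return executed

# Example: a greeting whose second movement conflicts is cut short.
print(play_gesture("greeting", occupied={"wave"}))  # ['raise_right_arm']
```

Even in this toy form, the design point survives: the language model never drives the joints directly; it selects from a bounded, pre-validated gesture set, and a separate safety layer can veto any step.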
Cost remains a constraint. A fully configured humanoid can cost between $120,000 and $130,000, while a packaged version starts around $45,000 to $50,000.
“You have to basically qualify the use case from a business ROI standpoint,” he said. In his experience, roughly 50% to 60% of physical AI deployments deliver strong returns. The rest remain experimental.
Matching Complexity to Need
Return on investment also shapes decisions across the broader AI stack. Generative systems can simplify workflows. Agentic AI, in Mehta’s phrasing, “bridges the loop between insights to action.” Physical AI then executes those decisions in the physical world.
But complexity must match the problem. “You don’t need a bazooka to kill an ant,” he said, cautioning against deploying sophisticated agentic systems where simpler tools would suffice.
Technical limits persist. Large language models introduce latency, complicating reflex-like responses in dynamic environments. Determining what computation runs at the edge and what remains in the cloud is critical. In highly unpredictable physical scenarios, Mehta acknowledged, “the use of physical AI… is a distant thing.”
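The edge-versus-cloud question Mehta raised is, at bottom, a latency-budget decision: reflex-like responses cannot wait for a language-model round trip. A toy routing rule makes the trade-off concrete (the thresholds and task names are assumptions for illustration, not measured figures):

```python
# Toy edge/cloud routing rule: computations with tight response deadlines must
# run at the edge; slower, deliberative calls can tolerate a cloud LLM round trip.
# All thresholds and task names are illustrative assumptions.

EDGE_BUDGET_MS = 50        # assumed reflex budget for a robot in motion
CLOUD_ROUNDTRIP_MS = 400   # assumed LLM round-trip latency

def route(task: str, deadline_ms: int) -> str:
    """Return where a computation should run, given its response deadline."""
    if deadline_ms < CLOUD_ROUNDTRIP_MS:
        return "edge"      # a cloud round trip would miss the deadline
    return "cloud"         # e.g. task planning, natural-language commands

print(route("obstacle_stop", deadline_ms=30))    # edge
print(route("replan_route", deadline_ms=2000))   # cloud
```

This is why LLM latency complicates dynamic environments: any task whose deadline falls under the round-trip time is forced onto edge hardware, regardless of how much the cloud model might improve the decision.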
Governance may prove more difficult than engineering. If a probabilistic system causes harm, accountability is unclear.
“It’s a very valid question with no definitive answer from anybody in the world,” Mehta said when asked who bears liability in the event of a serious accident.
“We have very clearly defined legality and liability aspects… within those boundaries, definitely, TCS will own. But the whole nature of AI is probabilistic,” he added.

