India Bets on Sector Regulators for AI Oversight
The new framework backs coordination, not a fresh law, and builds an AI safety stack for testing and incident reporting.
India has issued national AI Governance Guidelines that aim to promote safe, inclusive and accountable use of artificial intelligence under a simple guiding test: "Do No Harm."
The Ministry of Electronics and Information Technology (MeitY) released the framework on Wednesday, 5 November, positioning it as principle-based and sector-led.
The package proposes a whole-of-government structure, new coordination bodies, and an India-specific incident system, while relying on existing laws for enforcement.
The government’s blueprint lists seven guiding principles that cover trust, human oversight, responsible innovation, inclusion, clear allocation of responsibility, transparency, and system safety and sustainability.
It also lays out an action plan split into short-term, medium-term and long-term steps. Early steps include an India-specific risk classification framework, a program for voluntary commitments, a national database to record AI incidents, guidance on liability, and regulatory sandboxes to test tools and rules.
The architecture rests on three core institutions. An AI Governance Group will coordinate policy across ministries and regulators; a Technology and Policy Expert Committee will advise that group on strategy and implementation; and an AI Safety Institute will test and evaluate systems, develop draft standards and benchmarks, and provide technical guidance, while sector watchdogs continue to enforce domain rules.
Officials framed the intent as human-centric and innovation-positive. The government's Principal Scientific Adviser Ajay Kumar Sood said the guiding principle that defines the spirit of the framework is simple: "Do No Harm." He also pointed to innovation sandboxes and risk mitigation within a flexible, adaptive system.
MeitY Secretary S. Krishnan said the framework will focus on human-centric development and on mitigating potential harms, and that the government prefers to use existing legislation where possible.
The text signals that India is not rushing to write a new horizontal AI law. The guidelines say sector regulators should lead on oversight, and that amendments to existing laws may follow a gap analysis rather than a stand-alone statute being enacted now.
Coverage of the release noted that the Information Technology Act could be amended to classify AI systems and that a national incident database is on the anvil.
Indian technology industry lobby Nasscom's early read aligns with that direction and stresses coordination over control. In a policy note, it called the package a balanced blueprint that supports agile, evidence-led governance.
The group welcomed the government's position that a separate AI law is not needed at this stage and argued for voluntary measures, graded liability and a non-punitive incident system.
It also flagged practical next steps that include a single reporting interface that can route across privacy, cybersecurity and safety, clearer conformance pathways for voluntary pledges, and a concrete plan for sandboxes and tool building under the AI Safety Institute and the expert committee.
The guidelines arrive after a multiyear process that included a public consultation report in January and a drafting committee chaired by IIT Madras professor Balaraman Ravindran.
The committee's briefings emphasized a whole-of-government approach and positioned the framework as application-led and human-centered.
Several policy tracks now run in parallel. MeitY has recently proposed updates to the IT Rules that would mandate labeling of deepfakes and tighten oversight of synthetic media.
The new guidelines would sit alongside such measures and provide the coordination, testing and incident reporting stack that supports enforcement by sector regulators.
The package also formalizes India's plan to learn from real incidents rather than regulate hypothetical harms. The risk chapter calls for an India-specific assessment framework that reflects evidence of harm in context. It points to voluntary compliance supported by technology and law, and leaves room for additional obligations in specific contexts where the risk is higher.
Officials who presented the document used plain language to set expectations. Abhishek Singh, Additional Secretary at MeitY and CEO of the IndiaAI Mission, said the draft went through wide consultation and that the goal is a safe, trustworthy and responsible ecosystem that keeps AI accessible and affordable.