Operationalizing Deepfake Governance at Scale: India’s Regulatory Test Case
The government’s draft rules promise transparency and traceability, but leave significant gaps in scope, enforcement, and feasibility.
From scams and smear campaigns to political disinformation, synthetic media is no longer a fringe threat. Now, the Indian government is trying to draw a legal line. On 22 October, the Ministry of Electronics and IT (MeitY) published draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
The proposed changes aim to regulate “synthetically generated information” (SGI) and outline the actions that social media platforms, AI tools, and other intermediaries must take to label, trace, and, when necessary, remove such content.
Public comments are open until 6 November.
What the Draft Proposes
The draft introduces SGI under Rule 2(1)(wa) as any content “artificially or algorithmically created, generated, modified or altered using a computer resource” in a way that “appears to be authentic or true.” That definition would bring AI-generated text, video, voice, and imagery into scope, especially where it could deceive viewers.
Under proposed Rule 3(3), platforms that enable users to create or edit SGI would be required to attach a permanent, non-removable label that is prominently visible. For video and image content, the label must cover at least 10% of the visual surface. For audio, it must appear within the first 10% of playback. The rules also mandate a unique identifier that helps trace the source of the content.
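To see what those 10% thresholds imply in practice, here is a minimal sketch of a labelling step for images and audio, written in Python with Pillow. The banner wording, placement, and function names are illustrative assumptions, not anything the draft prescribes.

```python
from PIL import Image, ImageDraw  # pip install Pillow

def label_image(path_in: str, path_out: str, text: str = "SYNTHETICALLY GENERATED") -> None:
    """Overlay a full-width banner whose height is 10% of the image,
    so the label covers at least 10% of the visual surface.
    Illustrative only; the draft does not specify banner text or placement."""
    img = Image.open(path_in).convert("RGB")
    w, h = img.size
    banner_h = max(1, int(0.10 * h))
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, h - banner_h), (w, h)], fill=(0, 0, 0))
    draw.text((10, h - banner_h + banner_h // 4), text, fill=(255, 255, 255))
    img.save(path_out)

def audio_label_window(duration_seconds: float) -> float:
    """Length of the opening segment (the first 10% of playback)
    within which an audible disclosure would have to appear."""
    return 0.10 * duration_seconds
```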
In addition, Rule 4(1A) expands the duties of significant social media intermediaries (SSMIs). These platforms would need to ask users whether the content they are uploading is synthetic, use “reasonable technical measures” to verify those claims, and ensure that any such content is properly marked.
Where the Problems Start
Legal experts agree that the intent is sound, but caution that the language is too broad and the mechanisms are too rigid.
Arun Prabhu, partner at Cyril Amarchand Mangaldas, said the proposed definition “covers most data generated or altered by computers,” including trivial edits like filters or colour correction. He argues that it should be narrowed to focus on synthetic content that is likely to mislead a reasonable person.
Khaitan & Co, in a legal analysis, points out that without clear exemptions, even harmless AI-generated content, such as an AI-enhanced family photo, could fall within scope. That ambiguity could expose platforms to inconsistent enforcement and users to unnecessary censorship.
The 10% labelling requirement has also raised eyebrows. Experts warn it may be impractical across different device sizes, formats, and screen types. On an AR headset or smartwatch, a label covering 10% of the display could overwhelm the content.
Cyril Amarchand Mangaldas’ Prabhu suggests moving to a “prominence standard” that would allow flexibility while ensuring the label is still noticeable.
What Platforms Would Have to Do
If notified in its current form, the draft would significantly raise the compliance burden on social media platforms and generative AI tools. They would have to do the following (a simplified sketch of such an intake flow appears after the list):
– Ask uploaders to declare if their content is synthetic.
– Run proportionate technical checks to verify those declarations.
– Label synthetic content in line with the 10% visibility requirement.
– Ensure that metadata and identifiers linking to the origin are embedded.
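The sketch below models that intake flow in Python: record the uploader's declaration, run a placeholder check, and attach a content hash and a traceability identifier. The function names, the SHA-256 and UUID choices, and the stub detector are assumptions for illustration; the draft does not prescribe any particular mechanism.

```python
import hashlib
import uuid
from dataclasses import dataclass

@dataclass
class Upload:
    content: bytes
    declared_synthetic: bool  # the uploader's own declaration

def detect_synthetic(content: bytes) -> bool:
    """Placeholder for a platform's 'reasonable technical measures'
    (e.g., a classifier or a provenance-metadata check). Always returns
    False here; a real system would plug in its own detector."""
    return False

def process_upload(upload: Upload) -> dict:
    # Treat content as SGI if either the user declares it or the check flags it.
    is_sgi = upload.declared_synthetic or detect_synthetic(upload.content)
    return {
        "content_hash": hashlib.sha256(upload.content).hexdigest(),
        "is_sgi": is_sgi,
        # Unique identifier to support traceability, as the draft envisages.
        "sgi_id": str(uuid.uuid4()) if is_sgi else None,
        "needs_label": is_sgi,  # a downstream step applies the visible label
    }
```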
Failure to comply could result in the loss of safe harbor protections under Section 79 of the IT Act, exposing platforms to civil or criminal liability for user content.
Ankit Sahni, advocate at Ajay Sahni & Associates, recommends that the government issue a detailed Code of Practice defining provenance standards such as watermarking, cryptographic signatures, and metadata, and update them regularly through subordinate legislation.
He also argues for narrow, well-defined exemptions for satire, art, parody, and research, subject to traceability and clear disclosure.
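As a concrete, if toy, example of the machine-readable marking such a Code of Practice might standardize, the sketch below hides a short tag in an image's least-significant bits. The scheme, function name, and red-channel choice are purely illustrative assumptions; production systems would rely on robust watermarks or signed metadata rather than fragile LSB tricks.

```python
from PIL import Image  # pip install Pillow

def embed_lsb_tag(path_in: str, path_out: str, tag: str) -> None:
    """Hide a UTF-8 tag in the least-significant bit of the red channel.
    Toy example only; easily destroyed by re-encoding or cropping."""
    img = Image.open(path_in).convert("RGB")
    bits = "".join(f"{b:08b}" for b in tag.encode("utf-8")) + "0" * 8  # null terminator
    pixels = list(img.getdata())
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    stamped = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | int(bits[i])  # overwrite one bit per pixel
        stamped.append((r, g, b))
    img.putdata(stamped)
    img.save(path_out, format="PNG")  # lossless format so the bits survive
```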
What’s Missing
The current draft leaves gaps in implementation. It talks about provenance but sets out no concrete technical standards. It gives the government power to order takedowns but does not define the threshold of harm or the due-process safeguards for removal, analysts said.
Enforcement across borders is another open question. Section 75 of the IT Act allows extraterritorial application, but the rules don’t yet explain how they’ll apply to foreign platforms accessible in India. Nor is there clarity on whether tools like ChatGPT or Midjourney would count as intermediaries, and if so, how their APIs would be policed.
Sahni also proposes adapting emergency removal provisions from Section 69A of the IT Act. In cases like impersonation deepfakes during elections, he suggests giving nodal officers in law enforcement the power to issue binding removal orders without prior court approval, backed by independent oversight, audit trails, and time-bound review.
What the Draft Gets Right
The draft represents a serious attempt to engage with the risks of synthetic media. It proposes a rebuttable presumption of harm for impersonation deepfakes during polls, which could trigger swift takedown and prompt alerts to the Election Commission.
The draft also encourages platforms to preserve evidence, share it with law enforcement, and coordinate for traceback in cases of financial fraud or non-consensual synthetic pornography. These are long-overdue provisions in a country where such content often spreads faster than it can be reported.
The Bottom Line
India is not alone in trying to police synthetic media. The EU’s AI Act requires clear disclosure when content is AI-generated and when people interact with AI systems. In simple terms, users must be told “this was made by AI” or “you are talking to AI,” and those rules will start applying in stages through 2025–26.
Spain has moved a national bill to enforce those labels with heavy penalties for unlabeled deepfakes, including fines of up to €35 million or 7% of global turnover.
China’s “deep synthesis” regime already mandates labeling or watermarking of AI-altered audio, images, and video, and places duties on service providers, that is, the apps and platforms that let people make or share such content, to build labeling and tracing into their services.
The US is taking a piecemeal path. Several states now require labels on AI in political ads. A federal judge has blocked parts of one California deepfake law, citing free-speech rights, so not every rule will survive in court.
The US Federal Communications Commission has also ruled that AI-generated voices in robocalls are illegal under the Telephone Consumer Protection Act. In practice, election robocalls that use voice clones are now a clear violation.
On tooling, C2PA “Content Credentials” is a common standard for proving where a photo, video or file came from and how it was edited. Think of it as a tamper-evident label attached to a file. It helps, but it is not enough on its own, so it should be combined with other methods like watermarks and secure logs.
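For a sense of what a provenance credential looks like at its simplest, the sketch below hashes an asset, records how it was produced, and signs the claim with an Ed25519 key via the `cryptography` package. This is not the real C2PA manifest format; the field names, assertion labels, and key choice are assumptions for illustration. Verification is the mirror image: recompute the hash and check the signature against the publisher's public key.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey  # pip install cryptography

def make_credential(asset: bytes, generator: str, key: Ed25519PrivateKey) -> dict:
    """Build a simplified, C2PA-inspired provenance claim and sign it.
    Not the actual Content Credentials format."""
    claim = {
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
        "generator": generator,  # e.g., the AI tool that produced the asset
        "assertions": ["synthetically_generated"],
    }
    payload = json.dumps(claim, sort_keys=True).encode("utf-8")
    return {"claim": claim, "signature": key.sign(payload).hex()}

# Usage (illustrative):
#   key = Ed25519PrivateKey.generate()
#   credential = make_credential(b"...image bytes...", "image-model-x", key)
```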
For India, the lesson is to pair clear definitions and narrow exemptions with workable disclosure and provenance tech, plus fast, reviewable takedown powers for election and fraud cases.