India Drafts AI Content Label Rule, Tightens Takedown Powers

New draft rules mandate clear “synthetic content” labels on AI-generated media, and restrict takedown orders to senior officials amid concerns over misuse and due process.



    India has proposed new rules requiring platforms such as Meta, X, and Google to clearly label AI-generated or manipulated posts as “synthetic content”, a major regulatory step to combat the risks posed by deepfakes and generative AI.

    In a draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, released on Wednesday, the Ministry of Electronics and Information Technology (MeitY) defined “synthetically generated information” as any content created, modified, or altered using AI in a way that makes it appear authentic.

    Platforms would be obligated to embed metadata or persistent identifiers in such content, collect declarations from users during uploads, and apply clear visual or audible markers indicating its synthetic nature.

    The draft specifies visibility thresholds: labels must cover at least 10% of the display area in visual content or appear in the first 10% of audio content. The goal, according to MeitY, is to ensure traceability, prevent harm, and maintain transparency without stifling innovation in AI applications.
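The visibility thresholds above reduce to two simple ratio checks. The sketch below is purely illustrative (the function and parameter names are not from the draft); it shows how a platform might test a label against the 10% rules for display area and audio placement.

```python
# Hypothetical compliance check for the draft's visibility thresholds:
# a visual label must cover at least 10% of the display area, and an
# audio disclosure must fall within the first 10% of the clip's duration.
# All names here are illustrative assumptions, not from the draft rules.

def visual_label_compliant(label_w: int, label_h: int,
                           frame_w: int, frame_h: int) -> bool:
    """Label area must be at least 10% of the total frame area."""
    return (label_w * label_h) >= 0.10 * (frame_w * frame_h)

def audio_label_compliant(disclosure_start_s: float,
                          clip_duration_s: float) -> bool:
    """Disclosure must begin within the first 10% of the audio."""
    return disclosure_start_s <= 0.10 * clip_duration_s

# A 200x108 label on a 1920x1080 frame covers ~1% of the area.
print(visual_label_compliant(200, 108, 1920, 1080))   # False
# A 640x360 label covers ~11% of the same frame.
print(visual_label_compliant(640, 360, 1920, 1080))   # True
# A disclosure starting at 5s in a 60s clip sits inside the first 6s.
print(audio_label_compliant(5.0, 60.0))               # True
```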

    The government warned that with the growing availability of generative AI tools, the potential for misuse, ranging from impersonation and misinformation to electoral interference, has increased substantially.

    The proposed rules also mandate that “significant social media intermediaries” (SSMIs) enhance user verification procedures and put in place stronger content moderation practices for synthetic content. Platforms will be expected to ensure that users can distinguish synthetic content from authentic posts across formats.

    In a column published in Mint, Rahul Matthan, partner at law firm Trilegal, argued that the premise behind the draft labelling rule is flawed.

    He noted that nearly all digital content today is influenced by AI, whether through image enhancement on smartphone cameras or automated audio processing, making blanket labelling of AI-generated content both impractical and misleading.

    Instead, he proposed a reverse approach: marking only content that is known to be real.

    “Fakery in the age of GenAI is best fought by marking out what’s true instead of all that’s made up,” Matthan wrote.

    He endorsed a “provenance manifest” system that would attach secure digital certificates to content captured directly by humans without AI assistance, enabling viewers to trace what’s real in an era where synthetic content is nearly indistinguishable from the real thing.
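The core of a provenance-manifest scheme is binding a verifiable signature to the exact bytes of a capture, so that any later modification breaks verification. Real systems (such as C2PA-style manifests) use public-key certificates; the toy sketch below substitutes a shared-secret HMAC purely to illustrate the hash-then-sign-then-verify flow, and every name in it is an assumption for illustration.

```python
import hashlib
import hmac

# Toy illustration of the provenance idea: sign a digest of the captured
# bytes so any subsequent alteration is detectable. Real provenance
# systems use public-key certificates; a shared-secret HMAC stands in
# here only to keep the sketch self-contained.

CAPTURE_KEY = b"device-secret"  # hypothetical per-device signing key

def sign_capture(content: bytes) -> str:
    """Return a provenance tag over the content's SHA-256 digest."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(CAPTURE_KEY, digest, hashlib.sha256).hexdigest()

def verify_capture(content: bytes, tag: str) -> bool:
    """True only if the content is byte-identical to what was signed."""
    return hmac.compare_digest(sign_capture(content), tag)

photo = b"raw sensor bytes"
tag = sign_capture(photo)
print(verify_capture(photo, tag))          # True: untouched capture
print(verify_capture(photo + b"x", tag))   # False: content was altered
```

The design choice Matthan argues for is visible here: verification only ever asserts "this is the original capture"; it says nothing about AI content, which is treated as the unmarked default.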

    Alongside the draft labelling rules, MeitY also notified a major change to the government’s takedown process.

    Starting 15 November, only officers at the level of Joint Secretary or above in government ministries, or Deputy Inspector General (DIG) or above in law enforcement, will be authorized to issue takedown notices to social media platforms.

    The amendment limits powers previously held by junior officials, including Sub-Inspectors, who could send takedown requests under Section 79(3)(b) of the IT Act. The move follows industry complaints and legal challenges over a lack of clarity and consistency in content takedown orders.

    In its filing before the Karnataka High Court earlier this year, X (formerly Twitter) argued that the broad language of the IT Act had allowed a wide range of officials to issue takedown orders, leading to inconsistent enforcement.

    The company also contested the government’s use of the Sahyog portal to channel content removal directives.

    Last month, the Karnataka High Court dismissed X’s petition, ruling that the portal was lawful and directing platforms to comply with Indian regulations.

    In response, the government appears to have narrowed enforcement discretion by limiting takedown authority to senior officials and introducing new requirements for documentation and legal justification.

    Under the revised framework, every takedown order must specify the legal provision invoked, describe the nature of the violation, and provide the exact URLs or identifiers of the content in question.
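The three required elements of a takedown order can be pictured as a simple record with a completeness check. This is a sketch under assumptions, not the government's actual schema; the field and class names are invented for illustration.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the fields the revised framework requires in
# every takedown order: the legal provision invoked, the nature of the
# violation, and the exact URLs or identifiers of the content.
# Field names are assumptions, not from the notified rules.

@dataclass
class TakedownOrder:
    legal_provision: str          # e.g. the IT Act section invoked
    violation_description: str    # nature of the alleged violation
    content_urls: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """An order is valid only if all required fields are non-empty."""
        return bool(self.legal_provision
                    and self.violation_description
                    and self.content_urls)

order = TakedownOrder(
    legal_provision="Section 79(3)(b), IT Act",
    violation_description="impersonation via synthetic media",
    content_urls=["https://example.com/post/123"],
)
print(order.is_complete())   # True: all three elements are present
```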

    A secretary-level oversight board will conduct regular reviews of these orders to ensure necessity and proportionality.

    The reforms come at a time when countries worldwide are moving to regulate the spread of deepfakes and generative AI.

    While the European Union’s AI Act focuses on classification and risk tiers, and China has introduced deepfake regulations, India’s draft goes further in setting clear thresholds for visibility and user declarations.

    MeitY has opened the proposed amendments for public feedback until 6 November. If finalized, they could place India among the first countries to codify mandatory, format-specific labelling of AI-generated content.
