India Tightens Oversight on AI-Generated Content Under IT Rules

May 14, 2026

By Anuradha Gandhi and Rachita Thakur

Introduction

The Ministry of Electronics and Information Technology (MeitY) has released proposed amendments to the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (hereinafter referred to as the “Draft Amendment”), adding a requirement for the clear labeling of synthetically generated content; the Draft Amendment was placed in the public domain via a notice dated April 21, 2026.[1] The deadline to submit comments, originally set for April 29, 2026, was extended to May 7, 2026, giving stakeholders additional time to review and respond. The extension reflects the government’s recognition of the complexity and importance of the proposed changes, particularly those addressing the challenges posed by synthetic and AI-generated content.

[Read more on the Draft Amendment here: https://ssrana.in/articles/government-notifies-information-technology-amendment-rules-2026/ ]

A New Chapter in Global AI Regulation

Enacted on February 25, 2021, the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules 2021”) replaced the earlier IT (Intermediary Guidelines) Rules, 2011 and introduced a significant expansion of intermediary obligations. The IT Rules 2021 brought OTT platforms, digital news publishers, and social media intermediaries under a structured compliance architecture for the first time, creating a tiered framework distinguishing between “intermediaries” and “significant social media intermediaries” (SSMIs) — those with more than 50 lakh registered users in India.[2]

With the Draft Amendment, India has signaled its intent to take a leading role in shaping the future of digital governance.

The Latest Amendment – Triggering Context: AI, Deepfakes and Misinformation Crisis

In continuation of the February 10, 2026 Notification releasing the Draft Amendment, MeitY has notified further changes to the Draft Amendment (hereinafter referred to as the “Notification”):

  1. Amendment to Rule 3(3)(a)(ii) – Continuous Labeling of Synthetic Content
    As part of their due diligence process, intermediaries are required to ensure that synthetically generated information carries an appropriate disclosure.

    The Draft Amendment proposes to strengthen this obligation materially by inserting:

    “continuous and clearly visible display of label for synthetically generated information throughout the duration of the content in visual display”

    1. Continuous display – Unlike a one-time watermark or an opening-screen label/disclaimer, the new provision requires that the label persist throughout the entire duration of the content.
    2. Clearly visible – The label must be clearly visible, i.e. it cannot be buried in a metadata tag.
    3. Throughout the duration – The label must remain visible for the entire period the content is available on the platform, which translates into persistent header/footer requirements once the content is posted online.
    4. Scope of synthetic content – Extends to all types of synthetic content, whether wholly or partly synthetic, including AI-generated videos, voice cloning, face-swap modifications, etc.
  2. A new Rule 3(4) is to be inserted
    “(4) Compliance with Clarifications, Advisories and Directions issued by the Ministry: (a) An intermediary shall comply with and give effect to any clarification, advisory, order, direction, standard operating procedure, code of practice or guideline issued by the Ministry, by order in writing, in relation to the implementation, interpretation or operationalization of the requirements prescribed under this Part;

    (b) every such clarification, advisory, order, direction, standard operating procedure, code of practice or guideline referred to in clause (a) shall— (i) be issued in writing; (ii) clearly specify the statutory provision or legal basis under which it is issued; (iii) specify the scope, applicability and compliance requirements in respect of the intermediary or class of intermediaries to whom it applies; and (iv) be consistent with the provisions of the Act and these rules; (c) compliance with any clarification, advisory, order, direction, standard operating procedure, code of practice or guideline issued under clause (a) shall form part of the due diligence obligations of the intermediary under section 79 of the Act.”

    This new rule creates a mechanism for MeitY to issue binding directions, clarifications, advisories, orders, standard operating procedures, codes of practice, or guidelines, and makes compliance with them part of intermediaries’ statutory due diligence obligations under Section 79 of the Information Technology Act, 2000. The insertion can be perceived as creating a compliance architecture in which non-compliance attracts penalties and forfeits the ‘safe harbour’ protection for third-party content published on the platforms.

What is the ‘Safe Harbour’ provision?

For years, Section 79 of the IT Act has been the layer protecting Internet Service Providers (ISPs) and intermediaries from legal liability, both civil and criminal, for content posted or uploaded to their platforms by third parties.

By virtue of Section 79, intermediaries are exempted from liability arising from any third-party information, data, or communication link made available or hosted by them, provided they observe due diligence and comply with applicable guidelines. The safe harbour protection is not absolute: it is conditional upon the intermediary not having initiated the transmission, selected the receiver, or modified the content.

Thus, in effect, tying the compliance requirement to the ‘safe harbour’ provision creates a mechanism under which an intermediary that fails to comply with MeitY’s directions, notices, advisories, or guidelines is exposed to civil and criminal liability for third-party content.

Why the Amendment?

  1. Transparency and User Awareness – The overriding purpose of the synthetic content labeling obligation is to preserve informational autonomy: the right of users to know whether the content they are consuming is a faithful representation of reality or a synthetic construct. This is not merely an aesthetic preference — in an era where synthetic media can accurately simulate public figures, manipulate political discourse, and engineer social consent, the right to disclosure is foundational to democratic participation.
  2. Combating Deepfakes – The Draft Amendment takes cognizance of the recent deepfake audio, video, and synthetic media going viral on social media platforms, depicting individuals in acts and statements they never made. Such content has been weaponized to spread misinformation, damage reputations, manipulate elections, and commit financial fraud. The Election Commission of India flagged the large-scale spread of misinformation to influence and manipulate election outcomes in the previous elections. The Draft Amendment stems from this compulsion to regulate and to create a platform-level remediation framework to keep the spread of misinformation in check.
    [To read more on this, click here: https://www.barandbench.com/view-point/why-free-and-fair-elections-in-2024-a-challenge ]
  3. Regulatory oversight – Rule 3(4) aims to address a significant gap in India’s digital governance architecture. The mismatch between the current pace of technological advancement and the pace of regulatory oversight through legislative procedures has created a vacuum in the existing framework at a time when the era demands urgent action. The Rule gives weight to interim advisories, guidelines, and directions by enforcing compliance with them.

What will change?

  1. As part of their due diligence, and to retain their safe harbour status, social media intermediaries will now have to deploy AI-content detection models capable of detecting synthetic content and media across their platforms in all formats, including content that is only partially synthetic. Larger platforms have, to an extent, already deployed such models to detect and label AI-generated content.
  2. Additionally, they will need to incorporate persistent on-screen labels, announcements, and headers/footers for text, video, and audio content available on their platforms, which will require relevant app and web modifications by the intermediaries.
  3. Platforms shall also be required to display instructions for users to declare whether the content they upload is synthetically generated. They may also need to verify or audit such declarations.
  4. Intermediaries will need to design infrastructure to comply with directions issued by MeitY under Rule 3(4) from time to time.
  5. Organizations other than intermediaries, such as OTT platforms, advertisers, news publishing houses, and other organizations that rely on synthetically generated content, will now have to display continuous and clear disclaimers throughout the duration of such content.
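At a technical level, the changes listed above imply an upload-time workflow: record the user's declaration, run a platform-side detection check, and attach a persistent on-screen label rather than a metadata-only tag. The Python sketch below illustrates that flow under stated assumptions; every name in it (Upload, declared_synthetic, process_upload, and the stubbed detector) is hypothetical and not drawn from the Draft Amendment.

```python
from dataclasses import dataclass, field

@dataclass
class Upload:
    """A user upload; all field names here are illustrative, not from the Rules."""
    content_id: str
    media_type: str                 # e.g. "video", "audio", "text"
    declared_synthetic: bool        # user's self-declaration at upload time
    labels: list = field(default_factory=list)

def detector_says_synthetic(upload: Upload) -> bool:
    """Placeholder for a platform's AI-content detection model.
    A real deployment would run per-format classifiers here."""
    return False  # stub: assume the detector found nothing

def process_upload(upload: Upload) -> Upload:
    """Sketch of the implied workflow: if either the user declaration or
    the detector flags the content as synthetic, attach a persistent
    on-screen label rather than a metadata-only tag."""
    if upload.declared_synthetic or detector_says_synthetic(upload):
        upload.labels.append({
            "kind": "synthetic-content-disclosure",
            "placement": "persistent-overlay",   # visible, not buried in metadata
            "visible_entire_duration": True,
        })
    return upload

# Usage: a user declares their video as AI-generated, so a label is attached.
clip = process_upload(Upload("vid-001", "video", declared_synthetic=True))
```

A real deployment would replace the stubbed detector with per-format classifiers and render the label in the player or viewer for the content's full duration, in line with the continuous-visibility language of the proposed Rule 3(3)(a)(ii).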

What do businesses and industry associations have to say?

Industry bodies representing major social media companies, the Internet and Mobile Association of India (IAMAI) and the Broadband India Forum (BIF), have submitted their objections to the Ministry of Electronics and Information Technology, arguing that the proposal would effectively convert advisories and similar executive instructions into legally binding obligations without parliamentary backing. They contend that the Amendment could expand intermediary liability beyond what is permitted under the IT Act.

BIF, too, has objected that the proposal would turn “soft law” instruments into enforceable obligations linked to safe harbour protections without the procedural safeguards associated with formal rule-making.[3] Citing the Supreme Court judgment in Shreya Singhal v. Union of India (2015)[4], both organisations argued that:[5]

  1. Intermediary takedown obligations should arise only through court orders or lawful government notifications;
  2. While IAMAI recommended that Rule 3(4) be withdrawn entirely, BIF suggested that only rules formally notified under Section 87 of the IT Act[6] should create binding obligations.
  3. The organisations also opposed amendments expanding the scope of Part III of the Rules governing digital news publishers and OTT Platforms.

Global Framework on Regulating Synthetic Content

  1. European Union’s Labeling Standards
    Article 50 of the European Union’s AI Act, 2024[7] mandates disclosure requirements on providers of generative AI systems pertaining to the marking and detection of AI-generated content and the labelling of deepfakes and certain AI-generated publications. These requirements complement the rules for high-risk AI systems and general-purpose AI systems.[8]
  2. China’s Deep Synthesis Regulation 2022
    The People’s Republic of China’s Regulations on the Administration of Deep Synthesis of Internet Information Services (the “Regulations”), notified in 2022, aim at addressing risks related to deep synthesis, an AI-based technology that enables content synthesis and the creation of virtual digital “humans” online. Such content is often highly realistic, in some cases altering or augmenting facial features, and can confuse the general public. The Regulations impose obligations on providers and users of so-called “deep synthesis technology” (deep learning, machine learning, and other algorithmic processing systems), which uses mixed datasets and algorithms to produce synthetic content, such as deepfakes.

    Obligations include requiring deep synthesis providers to prevent the use of deep synthesis content to produce, copy, publish, or disseminate information prohibited by laws and administrative regulations; to ensure compliance with applicable laws; and to establish mechanisms for user verification and authentication as well as ethical evaluations, among others.[9] Unlike China’s Regulations, India’s proposed framework does not require identity verification of creators of deep synthesis content.

  3. USA’s unconditional Safe Harbour
    Section 230 of the United States’ Communications Decency Act puts forth an unconditional safe harbour for platforms, unlike the conditional regimes of India and the European Union. At present, the USA lacks a single comprehensive federal law that mandates deepfake labelling.

    In May 2025, the U.S. Congress passed the Take It Down Act, the first major federal statute directly targeting non-consensual intimate imagery, including AI-generated “deepfakes”, thereby criminalizing the distribution of intimate images or videos created or manipulated using AI without consent. The statute mandates that platforms implement a ‘notice and takedown’ process, setting a timeline of 48 hours for removing the content and requiring reasonable efforts to eliminate duplicates. Non-compliance can result in imprisonment of up to three years along with fines, with stricter penalties for aggravating factors. Apart from federal law, Alabama, Arizona, California, Colorado, Florida, Hawaii, Idaho, and Illinois are amongst the states that have enacted laws to counter deepfake imagery.[10]

Conclusion

India’s Draft Amendment to the IT Intermediary Guidelines — with its continuous labeling mandate and the new ministerial direction power under Rule 3(4) — represents the most significant regulatory intervention in the country’s digital intermediary framework since the 2021 Rules themselves.

The Amendment must be assessed not as an isolated instrument but as part of a larger regulatory architecture that now includes the Digital Personal Data Protection Act 2023, the proposed broadcasting regulation framework, and the continuing evolution of the IT Act. Together, these instruments are reshaping the compliance environment for digital platforms in India in ways that will require sustained legal, technical, and operational adaptation.

The global comparison reveals that India is neither the most permissive nor the most restrictive jurisdiction in its approach to AI content regulation. It is, however, moving faster and more assertively than the United States, while adopting a more direct and less risk-tiered approach than the European Union.

[1] https://www.meity.gov.in/static/uploads/2026/04/ec197f1206279efb4964965f0dede6c1.pdf

[2] https://prsindia.org/billtrack/the-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021

[3] https://www.hindustantimes.com/india-news/tech-bodies-oppose-it-rules-amendments-101778229161286.html

[4] AIR 2015 SC 1523

[5] https://www.hindustantimes.com/india-news/tech-bodies-oppose-it-rules-amendments-101778229161286.html

[6] Section 87 of the Information Technology Act, 2000 – Power of Central Government to make rules

[7] Article 50 of the EU AI Act – Transparency Obligations for providers and deployers of generative AI Systems

[8] https://digital-strategy.ec.europa.eu/en/policies/code-practice-ai-generated-content

[9] https://www.aoshearman.com/en/insights/ao-shearman-on-data/china-brings-into-force-regulations-on-the-administration-of-deep-synthesis-of-internet-technology

[10] https://www.halock.com/what-legislation-protects-against-deepfakes-and-synthetic-media/

For more information please contact us at : info@ssrana.com