2025 IT Rules Amendment: Regulating Synthetically Generated Information in India’s AI and Privacy Landscape

October 27, 2025

By Vikrant Rana, Anuradha Gandhi and Abhishekta Sharma

Introduction

The Ministry of Electronics and Information Technology (MeitY) has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021[1] (IT Rules), marking a significant new phase in the regulation of online content. The 2025 Amendment mandates the disclosure and labelling of Artificial Intelligence (AI) generated or modified (synthetic) content by all social media users who post such material, and requires that at least 10% of the visual display area, or the initial 10% of an audio clip’s duration, be devoted to such disclaimers. This initiative seeks to enhance transparency, accountability and user awareness in the rapidly evolving digital ecosystem.

Background

Over the past few years, India’s digital landscape has witnessed a rapid rise in generative AI tools capable of producing highly realistic images, voices and narratives, depicting individuals performing acts or making statements they never did. The use of synthetic media is projected to grow at a compound annual growth rate (CAGR) of 49.26% from 2025 to 2035, underscoring the urgency of developing robust regulatory mechanisms to address its potential misuse.[2]

Rising concerns have been raised in both Houses of Parliament regarding the misuse of deepfakes and synthetic content, which can be weaponized to spread misinformation, manipulate public opinion and infringe upon individual privacy. The government has highlighted the urgent need to protect individuals and the public from potential harm caused by AI-generated media.[3]

The issue also came to the forefront in the recent case of Sadhguru Jagadish Vasudev & Anr v. Igor Isakov & Ors[4], wherein a suit was filed against several defendants, including platforms and individuals, alleging violation of Sadhguru’s personality rights through the creation and dissemination of AI-generated deepfakes and misleading content impersonating him.[5]

The Delhi High Court directed the Department of Telecommunications (hereinafter referred to as DoT) and MeitY to issue necessary directions calling upon various service providers/social media platforms to block access to or suspend the websites, social media accounts, channels, etc. of the primary defendants, or such other websites, social media accounts, channels, etc. as may subsequently be notified by the Plaintiffs to be infringing their rights.

What is synthetically generated information?[6]

The amendment introduces a definition of synthetically generated information under Rule 2(1)(wa) of the IT Rules: information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that appears reasonably authentic or true.

The amendment further clarifies that any reference to information, in the context of information being used to commit an unlawful act, including under Rules 3(1)(b), 3(1)(d), 4(2) and 4(4), shall be construed to include synthetically generated information, unless the context otherwise requires.

Key proposed amendments to the Rules

New provisions related to synthetically generated information:

Labelling requirement:

A proposed Rule 3(3) on “due diligence in relation to synthetically generated information” specifies that where an intermediary offers a computer resource which may enable, permit or facilitate the creation, generation, modification or alteration of information as synthetically generated information, it shall ensure that:

  • Every such piece of information is prominently labelled or embedded with a permanent unique metadata or identifier, in such a manner that the label, metadata or identifier is visibly displayed or made audible in a prominent manner on or within that synthetically generated information.
  • The label shall cover at least 10% of the visual display area or, in the case of audio, the initial 10% of the clip’s duration, so as to alert users (a sizing sketch follows this list).
  • Further, such label, metadata or unique identifier shall not be modified, suppressed or removed.
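
To make the proposed 10% threshold concrete, the following is a minimal Python sketch, assuming a platform that sizes an on-screen label against the visual display area and an audio disclaimer against clip duration. The draft rules do not prescribe any formula or tooling; the function names and parameters below are illustrative assumptions only.

```python
# Hypothetical helper illustrating the proposed "10%" labelling thresholds.
# The draft rules do not prescribe any formula or tooling; this is only a
# back-of-the-envelope sketch of how a platform might size a disclaimer.

def min_label_area(width_px: int, height_px: int, coverage: float = 0.10) -> int:
    """Minimum label area in pixels: at least 10% of the visual display area."""
    return int(width_px * height_px * coverage)

def min_audio_disclaimer(duration_s: float, coverage: float = 0.10) -> float:
    """Disclaimer must span the initial 10% of an audio clip's duration."""
    return duration_s * coverage

if __name__ == "__main__":
    # A 1920x1080 frame would need a label covering ~207,360 px
    # (e.g., a full-width banner roughly 108 px tall).
    print(min_label_area(1920, 1080))        # 207360
    # A 60-second audio clip would need a disclaimer over its first 6 seconds.
    print(min_audio_disclaimer(60.0))        # 6.0
```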

Verification Obligation

Under Rule 4 on “Additional due diligence to be observed by significant social media intermediary and online gaming intermediary”, a new sub-rule (1A) has been added which requires a Significant Social Media Intermediary (hereinafter referred to as SSMI), prior to the uploading of any content, to obtain a declaration from the user as to whether such content is synthetically generated information.

It further requires SSMIs to deploy reasonable and appropriate technical measures to verify the accuracy of such declarations, having regard to the nature, format and source of such information.

In case the content is found to be synthetically generated, a clear label or notice shall be displayed indicating the same.
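
The draft does not specify what the “reasonable and appropriate technical measures” must be. Purely as an illustration, the sketch below (in Python, with hypothetical names such as Upload and looks_synthetic, which are not drawn from the rules) shows one possible decision flow in which a clear label is applied if either the user’s declaration or the platform’s own check flags the content as synthetic.

```python
# Hypothetical sketch of how an SSMI might combine a user's declaration with
# its own technical check before applying a "synthetically generated" label.
# Rule 4(1A) does not prescribe any particular mechanism; the names below
# (Upload, looks_synthetic) are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool
    has_ai_metadata: bool   # e.g. a provenance/watermark flag detected by the platform

def looks_synthetic(upload: Upload) -> bool:
    """Stand-in for the platform's 'reasonable and appropriate technical measures'."""
    return upload.has_ai_metadata

def label_decision(upload: Upload) -> bool:
    """Label the content if the user declares it synthetic OR the platform's check flags it."""
    return upload.user_declared_synthetic or looks_synthetic(upload)

if __name__ == "__main__":
    undeclared = Upload("vid-001", user_declared_synthetic=False, has_ai_metadata=True)
    print(label_decision(undeclared))   # True -> display a clear label or notice
```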

Safe Harbour

A proviso to Rule 3(1)(b) has been added clarifying that an intermediary acting in good faith to remove or disable access to any information, including synthetically generated information, as part of reasonable efforts or on the basis of a grievance received, shall be protected by the principle of safe harbour under Section 79 of the Information Technology Act, 2000.

Amendment to Rule 3(1)(d) of the IT Rules

Under Rule 3(1)(d), intermediaries are required to remove unlawful information upon receiving actual knowledge, either through a court order or a notification from the Appropriate Government. In this regard, MeitY has clarified that from November 15, 2025, only senior government officers of joint secretary rank and above, and law enforcement officers of deputy inspector general of police rank and above, will be able to issue directions and notifications to digital intermediaries, including social media platforms and search engines, for the removal of unlawful content.[7] The intimation must clearly specify the legal basis and statutory provision, the nature of the unlawful act, and the specific URL/identifier or other electronic location of the information, data or communication link (“content”) to be removed.

Impact of the Proposed Amendment

The proposed amendments are a positive move towards safeguarding against deepfakes, synthetic content and misinformation. However, they also carry broader implications: they are likely to increase the operational and compliance burden on social media companies.

On the other hand, the amendment will enhance user protection by ensuring greater transparency and authenticity of online content, thereby strengthening public trust in digital platforms.

Global stances on synthetically generated information and AI labelling

EU AI Act

The EU AI Act imposes transparency requirements on generative AI: content that is generated or modified with the help of AI, such as images, audio or video files, must be clearly labelled as AI-generated so that users are aware when they come across such content.[8] It further requires providers of AI models to design their systems so that synthetic audio, video, text and image content is marked in a machine-readable format and is detectable as artificially generated or manipulated.[9]
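
The Act itself does not mandate a particular marking technology (industry provenance standards such as C2PA are one approach). As a purely illustrative sketch, the Python snippet below uses the Pillow imaging library to embed and read back a machine-readable “ai_generated” tag in a PNG text chunk; the key names are assumptions for illustration, not any prescribed standard.

```python
# Illustrative only: one very simple way to attach a machine-readable marker
# to an image. Neither the EU AI Act nor the Indian draft rules prescribe this
# mechanism; the "ai_generated" and "generator" keys are assumed names.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Embed the marker when saving a (here, blank) generated image.
img = Image.new("RGB", (64, 64), color="white")
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model")
img.save("synthetic.png", pnginfo=meta)

# A downstream platform could read the marker back from the PNG text chunks.
reopened = Image.open("synthetic.png")
print(reopened.text.get("ai_generated"))    # -> "true"
```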

China

China’s new regulation, the “Measures for Identifying Artificial Intelligence-Generated Synthetic Content”, came into force on September 01, 2025, with the aim of promoting responsible AI development and safeguarding the public interest. The regulations require online platforms hosting or disseminating synthetically generated information to apply clear labels identifying synthetic content and to add warnings for suspected or user-declared synthetic content. Users must also proactively disclose whether their content includes AI-generated elements.

Further, platforms distributing AI-enabled apps are required to verify whether those apps use synthesis technologies and to confirm that they include the necessary identification tools.[10]

Conclusion

The 2025 proposed amendment to the IT Rules marks a significant step toward addressing legal ambiguity by explicitly bringing synthetically generated information within its regulatory scope. By emphasizing transparency, traceability and accountability, the amendment strengthens the governance framework for digital intermediaries. However, certain gaps remain: for instance, introducing tiered compliance obligations based on the scale and nature of content could help reduce the burden on smaller intermediaries. Moreover, developing detailed technical standards for watermarking and content verification, aligned with global best practices, would enhance implementation consistency. Further, adopting content-specific labelling, similar to China’s new regulations, could refine the balance between innovation, user protection and regulatory oversight.

 

[1] https://www.meity.gov.in/static/uploads/2025/10/89503cda634da5cc2de011d288638b76.pdf

[2] https://www.marketresearchfuture.com/reports/india-synthetic-data-generation-market-63032

[3] https://www.meity.gov.in/static/uploads/2025/10/8e40cdd134cd92dd783a37556428c370.pdf

[4] https://delhihighcourt.nic.in/app/showlogo/59075600_1748698275_591_5782025.pdf/2025

[5] https://delhihighcourt.nic.in/app/showlogo/1760792237_084924e224ef2d22_589_5782025.pdf/2025

[6] https://www.meity.gov.in/static/uploads/2025/10/9de47fb06522b9e40a61e4731bc7de51.pdf

[7] https://www.pib.gov.in/PressReleasePage.aspx?PRID=2181719

[8] https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[9] https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/757583/EPRS_BRI(2023)757583_EN.pdf

[10] https://cadeproject.org/updates/china-enforces-new-ai-content-identification-rules-starting-today/

For more information please contact us at: info@ssrana.com