AI and Deepfakes: Navigating the digital revolution and its dark side

November 27, 2025
AI and Deepfakes

By Vikrant Rana, Anuradha Gandhi and Abhishekta Sharma

Introduction

Artificial Intelligence (AI) has advanced rapidly, driving remarkable innovation and significantly influencing the economy. AI technologies are expected to boost India’s GDP by USD 6.6 trillion by 2035,[1] but the same advances have also given rise to technologies such as deepfakes.

Despite the many benefits AI offers, it has also emerged as one of the most dangerous weapons in the cybercriminal arsenal. A 2025 Phishing Threat Trends Report noted that AI tools were involved in approximately 82.6% of all phishing emails, placing AI behind nearly 8 out of every 10 phishing campaigns.[2] Among its most controversial creations stands the deepfake. The volume of deepfake material escalated from approximately 500,000 items in 2023 to 8 million in 2025,[3] an expansion rate outpacing most other digital security challenges.[4]

(For reference, read our article: https://ssrana.in/articles/deepfake-technology-navigating-realm-synthetic-media/)

The legal, ethical and social ramifications of AI-generated content have become harder to assess as the deepfake crisis deepens. Even as malicious use for fraud, impersonation and non-consensual imagery rises dramatically, human accuracy in detecting deepfakes remains relatively low.

What is a Deepfake?

The term “deepfake” merges “deep learning” with “fake”, reflecting its technological basis in deep learning methods.[5] Deepfakes are hyper-realistic audio and video content generated through advanced AI techniques such as Generative Adversarial Networks (GANs), which produce synthetic data that looks real by training algorithms on extensive data sets of a particular individual’s images or recordings, yielding convincing imitations of that individual.[6]
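To illustrate the adversarial mechanism described above, the following is a minimal, illustrative sketch in Python using PyTorch. It trains a tiny generator to mimic a toy one-dimensional “real” data distribution while a discriminator learns to tell real samples from generated ones; the data, network sizes and hyperparameters are assumptions chosen for readability and are not drawn from any actual deepfake system.

```python
# Minimal GAN sketch (illustrative only): the same adversarial idea that,
# at much larger scale, underlies face, voice and video deepfakes.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n: int) -> torch.Tensor:
    # Toy "real data": samples from N(4, 1.25), standing in for genuine images/recordings.
    return torch.randn(n, 1) * 1.25 + 4.0

# Generator maps random noise to fake samples; discriminator scores real vs fake.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # 1) Train the discriminator: real samples are labelled 1, generated samples 0.
    real = real_batch(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator: try to make the discriminator label fakes as real.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster around the "real" mean of 4.
print("Mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```

Real deepfake pipelines apply this same generator-versus-discriminator competition to high-dimensional face, voice and video data, which is why the outputs can become convincing enough to deceive human observers.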

Responding to the urgent need to address concerns around fabricated media, The Deepfake Prevention and Criminalization Bill, 2023 was introduced as an independent proposal in the Indian Parliament. Though still awaiting Parliamentary consideration, the Bill defines deepfakes with considerable specificity:

“digitally manipulated or fabricated digital content, including but not limited to images, videos or audio recordings, created through the use of advanced digital technologies such as artificial intelligence, machine learning, or other advanced technologies, with the intent to convincingly and deceptively depict subjects or issues or represent individuals engaging in actions, making statements, or being in circumstances that did not occur or exist in reality.”[7]

Types of Deepfakes

Deepfakes take numerous forms, such as:[8]

  • Face-swapping deepfakes replace one individual’s face with another’s in recorded material such as a video or image.
  • Lip-syncing and audio-swap deepfakes alter facial movements to match synthetic speech, creating the illusion that an individual made remarks they never uttered.
  • Voice cloning generates simulated speech independent of any visual element, and is frequently used in financial scams to mimic company leaders directing money transfers.

Deepfake-related harms and AI exploitation

  1. Identity theft and fraud:
    In National Stock Exchange of India Ltd. vs Meta Platforms, Inc. & Ors.,[9] a deepfake video falsely depicting the MD and CEO of the National Stock Exchange (NSE) endorsing stock-picking services was streamed on various social media intermediaries. The Court held that such content jeopardises genuine investors and causes them financial loss. In another instance, a well-known celebrity lodged a complaint about a deepfake video of him discussing financial well-being in connection with electoral promises made by a political party during its election campaign.[10] (Ref: https://ssrana.in/articles/meity-issues-advisory-to-social-media-companies-to-take-down-fake-misleading-and-deepfake-videos-of-nse-md-and-ceo/)
  2. Privacy violation: AI deepfakes pose significant privacy risks by enabling the creation of realistic but fake videos, images or audio of individuals without their consent, leading to unauthorized use of personal identity, emotional harm and reputational damage. Material that was once relatively manageable when shared electronically can now be repurposed to harm individuals without their authorization or even their knowledge.
    Several Bollywood actors, such as Rashmika Mandanna and Katrina Kaif, have been victims of deepfakes in which their faces were artificially superimposed onto explicit content. (Ref: https://ssrana.in/articles/deepfakes-and-breach-personal-data/)
  3. Social, economic and political harm: Deepfakes pose a substantial threat to constitutional democratic functioning, and digital security risks intensify as deepfakes become tools for exploiting public trust. They are used to create social unrest, manipulate public opinion and influence elections. Recently, Irish presidential candidate Catherine Connolly filed a complaint over a malicious AI deepfake video in which she appeared to announce the withdrawal of her candidacy, an attempt to mislead voters and undermine democracy.[11] (Ref: https://ssrana.in/articles/pil-eci-response-deepfakes/)

India’s Legal Response

India presently lacks specialized legislation focused on tackling deepfakes. Nevertheless, recognizing these challenges and the growing misuse of synthetically generated information, including deepfakes, misinformation and other unlawful content capable of harming users, violating privacy or threatening national integrity, the Ministry of Electronics and Information Technology (MeitY) has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021)[12], mandating that digital platforms remove prohibited content following notification and grievance-handling procedures. The Ministry has further proposed changes to the regulations specifically addressing mandatory disclosure and labelling of Artificial Intelligence (AI) generated synthetic or modified content. The Rules require intermediaries to exercise due diligence and to prevent the hosting or transmission of unlawful content by themselves or their users.

(For reference, read our article: https://ssrana.in/articles/2025-it-rules-amendment-regulating-synthetically-generated-information-in-indias-ai-and-privacy-landscape/)

Other relevant provisions under the Information Technology Act, 2000[13] (hereinafter referred to as the IT Act, 2000) include:

  • Section 66C addresses identity theft, prescribing imprisonment of up to three years and a fine of up to one lakh rupees.
  • Section 66D addresses cheating by personation using a computer resource, prescribing imprisonment of up to three years and a fine of up to one lakh rupees.
  • Section 66E penalizes violations of privacy, such as capturing, publishing or transmitting images of a person’s private area without consent.
  • Sections 67 and 67A establish regulation and punitive measures for the publication or transmission of obscene or sexually explicit content in electronic form.[14]
  • Section 69A empowers the government to issue directions for blocking public access to any information through any computer resource.

Digital Personal Data Protection Act, 2023 (DPDP Act)

The DPDP Act mandates the ethical handling of personal data and requires data fiduciaries (any person who, alone or in conjunction with others, determines the purpose and means of processing personal data) to put in place reasonable technical and organisational safeguards and to obtain consent from data principals for the processing of their personal data. The Act further penalizes breaches of these obligations, with penalties of up to INR 250 crores.

Bharatiya Nyaya Sanhita, 2023 (BNS)

The new criminal law framework takes cybercrime into account. Under Section 353 of the BNS, spreading misinformation or disinformation that causes public mischief is punishable with imprisonment of up to three years, a fine, or both. Section 111 addresses organized crime, including cybercrime, under which offences involving deepfake content can also be prosecuted.

Constitution of India

Article 21 of the Indian Constitution[15] guarantees the right to life and personal liberty. This has been judicially interpreted to include the right to privacy and dignity, forming a constitutional foundation against the unauthorized use of a person’s likeness in deepfakes. Further, the freedom of speech guaranteed under Article 19(1)(a) is subject to reasonable restrictions, allowing the government to limit speech on grounds such as public order, decency, morality and individual dignity, thereby balancing free expression against the harm caused by deepfakes.[16]

What is Safe Harbour?

A safe harbour clause typically outlines actions or conduct that protect a company from being held legally responsible for outcomes arising from actions taken or statements made in good faith.[17]

Under Indian law, the safe harbor provisions are specifically designed for intermediaries, as outlined in Section 79 of the Information Technology Act, 2000 (IT Act, 2000) and its corresponding Rules. These provisions were introduced to protect intermediaries from liability for acts committed by third parties, provided that the intermediary has observed the required due diligence.

An intermediary[18] with respect to any particular electronic records, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-market places and cyber cafes.

Scope of Safe Harbour: Protection to Intermediaries

Following the 2008 amendment to the IT Act, it was clarified that an intermediary’s eligibility to claim safe harbour protection depends primarily on two factors:

  1. Whether they had actual knowledge of any unlawful activity
  2. Whether they fulfilled their due diligence obligations as prescribed by the law.

Judicial Precedents

Shreya Singhal vs Union of India [19]

The scope and interpretation of the safe harbour provision was examined in this landmark judgement, which significantly transformed India’s legal landscape for online freedoms and fundamentally shaped how Sections 69A and 79 of the Information Technology Act, 2000 are applied. The case challenged the validity of Section 69A and the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009[20] on the ground that they conferred vague and broad powers on the government to block online content, thereby hindering freedom of speech.

The Court held that blocking can take place only by a reasoned order, after compliance with several procedural safeguards including a hearing for the originator and the intermediary, and that a blocking order can be passed only by the Designated Officer, either after complying with the 2009 Rules or in order to give effect to an order passed by a competent court.

Further, the Court considered Section 79, which provides safe harbour protection to internet intermediaries for third-party content, and Section 79(3)(b), which states that the protection shall not apply if:

“(b) upon receiving actual knowledge, or on being notified by the appropriate Government or its agency that any information, data or communication link residing in or connected to a computer resource controlled by the intermediary is being used to commit the unlawful act, the intermediary fails to expeditiously remove or disable access to that material on that resource without vitiating the evidence in any manner.”

The Court read Section 79(3)(b) down narrowly, limiting the circumstances in which an intermediary loses its liability safeguard. It held that intermediaries are required to remove content only upon receiving a court order or a government notification that complies with the constitutional restrictions on freedom of speech. The ruling emphasized that any content takedown must follow due process and cannot be arbitrary.

The Court also observed that under Section 79, an intermediary must, in addition to publishing rules and regulations, a privacy policy and a user agreement for access to or use of its service, inform users of the due diligence requirements, including the content restrictions under Rule 3(2).

Kunal Kamra vs Union of India[21]

The case challenged the constitutionality of the amendment to Rule 3(1)(b) of the IT Rules, 2021, which empowered a government Fact Check Unit, ultimately the Press Information Bureau (PIB) of the Ministry of Information and Broadcasting, to identify content as fake, false or misleading. Kunal Kamra and others challenged the constitutionality of the provision, arguing that it violates the freedom of speech under Article 19(1)(a) of the Indian Constitution by conferring excessive power on the government. The petitioners further contended that the provision took away the safe harbour provided under Section 79 of the IT Act, 2000.

The Bombay High Court delivered a split verdict, and the matter was placed before a third judge, who refused to grant an interim stay on the amended rule.

The petitioner approached the Supreme Court (SC), appealing the decision of the single judge. Before the matter was listed before the SC, the government notified the PIB as the Fact Check Unit.

The Supreme Court subsequently stayed the notification of the Fact Check Unit. (To read more, refer to our article: https://ssrana.in/articles/supreme-court-stays-centres-notification-fact-check-unit-it-rules/)

India’s Reporting Mechanisms for Deepfakes

National Cyber Crime Reporting Portal: the portal allows reporting of two kinds of cybercrime incidents: (i) crimes related to women/children and (ii) other cybercrimes. Crimes related to women/children can be reported anonymously.[22]

Indian Cybercrime Coordination Centre (I4C): it acts as the nodal point for curbing cybercrime, facilitating the easy filing of complaints and identifying trends and patterns. It also enables agencies to issue notices for the removal of, or disabling of access to, unlawful content, including deepfakes.[23]

Sahyog Portal: it automates the process of sending notices to intermediaries by the Appropriate Government or its agency under the IT Act, 2000, to facilitate the removal or disabling of access to any information, data or communication link being used to commit an unlawful act.[24]

CERT-In: the Indian Computer Emergency Response Team (CERT-In) regularly issues guidelines on AI-related threats and countermeasures, including deepfakes. In November 2024, CERT-In published an advisory on deepfake threats and the measures to be followed to stay protected against them.

Grievance Appellate Committees (GACs): these committees deal with appeals from users aggrieved by decisions of the Grievance Officers of social media and other intermediaries on complaints of users or victims regarding violations of the IT Rules and other matters pertaining to the computer resources made available by the intermediaries.[25]

AI, Deepfakes and Personality Rights

AI deepfake technology also presents significant challenges to personality rights, which protect an individual’s name, likeness, voice and overall persona from unauthorized use. The creation and dissemination of AI-generated deepfakes often occur without the individual’s consent, infringing upon their privacy. Indian courts have recognized and addressed such infringements and the misuse of AI-generated content:

  1. Arijit Singh vs Codible Ventures LLP[26]– the Bombay High Court ruled in favour of Arijit Singh against the misuse of AI technology to synthesize his voice and likeness without authorization. The Court held that making available AI tools that enable the conversion of any voice into that of a celebrity without his/her permission constitutes a violation of the celebrity’s personality rights.
  2. Ankur Warikoo vs John Doe and Ors[27]– the Delhi High Court granted interim relief against AI-generated deepfake videos circulating financially fraudulent content using the petitioner’s image and voice. The Court issued a John Doe order.[28]
  3. Akshay Kumar’s case[29]– the Bombay High Court recently ordered the removal of deepfake content infringing Akshay Kumar’s personality rights. One of the websites was generating speech in his tone and style from any text input. The Court noted that AI deepfake videos distort personality and public perception and cause potential societal harm.

Major AI deepfake regulatory frameworks around the world

Key highlights by country/jurisdiction:

  1. Denmark – Deepfake Law: The Government has amended its copyright law to ensure that every person “has the right to their own body, facial features and voice”. Performers and ordinary individuals alike enjoy protection, extending 50 years after their death, against unauthorized AI reproductions of their work or likeness.[30]
  2. US – Take It Down Act: The Act makes it illegal to “knowingly publish” or threaten to publish intimate images without a person’s consent, including AI-created “deepfakes”.[31]
  3. China – AI content labelling regulations: These measures refine labelling obligations and introduce mandatory technical requirements. Under the law, platforms must apply both explicit and implicit labelling systems: explicit labels must be clearly visible to users, allowing them to recognize synthetic content, while implicit identifiers must be embedded within the content itself to enable automated detection and compliance monitoring.[32] (A minimal illustration of this two-layer labelling idea appears after this list.)
  4. EU – AI Act and Code of Practice on marking and labelling AI-generated content:[33] Under Article 50[34] of the EU AI Act and the recitals thereunder, providers and deployers of AI systems that generate synthetic audio, video or text are required to ensure that such outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated.
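By way of illustration only, the sketch below shows one simple way a platform could combine an explicit label with an embedded, machine-readable marker on a generated image: a visible caption serves as the explicit label and a PNG metadata field as the implicit identifier. The field names and this metadata-based approach are assumptions made for this article, not the technical standards actually prescribed by the Chinese measures or the EU AI Act, which contemplate more robust techniques such as watermarking and provenance metadata.

```python
# Illustrative sketch only: a two-layer "explicit + implicit" label on a generated image.
# Assumes Pillow is installed and that the input and output files are PNGs.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_synthetic(in_path: str, out_path: str, tool_name: str) -> None:
    img = Image.open(in_path).convert("RGB")

    # Explicit label: a notice visible to human viewers, drawn onto the image.
    ImageDraw.Draw(img).text((10, 10), "AI-generated content", fill="white")

    # Implicit identifier: machine-readable metadata embedded in the PNG file
    # (hypothetical field names, chosen for this example).
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generation_tool", tool_name)
    img.save(out_path, pnginfo=meta)

def is_labelled_synthetic(path: str) -> bool:
    # Automated compliance check: read the embedded marker back from the file.
    metadata = getattr(Image.open(path), "text", {})
    return metadata.get("ai_generated") == "true"
```

Under such a scheme, a compliance tool could run is_labelled_synthetic() over uploaded files to flag unlabelled synthetic media, while the visible caption would serve the user-facing disclosure requirement.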

Conclusion

Deepfakes represent a new type of threat that infringes upon individual rights, ranging from the personality rights of celebrities to the privacy and bodily autonomy of ordinary citizens. Landmark cases such as Shreya Singhal and Kunal Kamra highlight the critical importance of addressing this issue. The Indian government is striving to keep pace with rapidly evolving technologies by introducing various legal measures; the draft amendments to the Intermediary Guidelines and the DPDP Act are significant steps in this direction.

Related Post:

https://ssrana.in/articles/deepfakes-financial-fraud/

https://ssrana.in/articles/pil-filed-by-a-journalist-to-curb-deepfake-menace/

https://ssrana.in/articles/techethics-deepfakes-morality-and-values/

https://ssrana.in/articles/nobody-is-safe-deepfake/

[1] https://niti.gov.in/sites/default/files/2025-09/AI-for-Viksit-Bharat-the-opportunity-for-accelerated-economic-growth.pdf

[2] https://www.newindianexpress.com/states/karnataka/2025/Jun/26/ai-driving-force-behind-828-per-cent-of-phishing-emails-in-karnataka

[3] https://deepstrike.io/blog/deepfake-statistics-2025

[4] https://www.cert-in.org.in/s2cMainServlet?pageid=PUBVLNOTES02&VLCODE=CIAD-2024-0060

[5] https://www.sciencedirect.com/science/article/pii/S0957417424011266

[6] https://assets.kpmg.com/content/dam/kpmgsites/in/pdf/2024/12/deepfake-how-real-is-it.pdf

[7] https://sansad.in/getFile/BillsTexts/RSBillTexts/Asintroduced/2e214202543549PM.pdf?source=legislation

[8] https://bombayhighcourt.nic.in/generatenewauth.php?bhcpar=cGF0aD0uL3dyaXRlcmVhZGRhdGEvZGF0YS9vcmlnaW5hbC8yMDI0LyZmbmFtZT1GMjkwNzAwMjE0NTYyMDI0XzEucGRmJnNtZmxhZz1OJnJqdWRkYXRlPSZ1cGxvYWRkdD0xOS8wNy8yMDI0JnNwYXNzcGhyYXNlPTIwMDcyNDIwMjQyNCZuY2l0YXRpb249JnNtY2l0YXRpb249JmRpZ2NlcnRmbGc9WSZpbnRlcmZhY2U9Tw==

[9] https://bombayhighcourt.nic.in/generatenewauth.php?bhcpar=cGF0aD0uL3dyaXRlcmVhZGRhdGEvZGF0YS9vcmlnaW5hbC8yMDI0LyZmbmFtZT1GMjkwNzAwMjE0NTYyMDI0XzEucGRmJnNtZmxhZz1OJnJqdWRkYXRlPSZ1cGxvYWRkdD0xOS8wNy8yMDI0JnNwYXNzcGhyYXNlPTIwMDcyNDIwMjQyNCZuY2l0YXRpb249JnNtY2l0YXRpb249JmRpZ2NlcnRmbGc9WSZpbnRlcmZhY2U9Tw==

[10] https://www.hindustantimes.com/cities/mumbai-news/fir-registered-over-aamir-khan-s-deepfake-video-promoting-congress-101713642207781.html

[11] https://www.bbc.com/news/articles/czxkn504lqpo

[12] https://www.meity.gov.in/static/uploads/2025/10/8e40cdd134cd92dd783a37556428c370.pdf

[13] https://www.indiacode.nic.in/bitstream/123456789/13116/1/it_act_2000_updated.pdf

[14] https://www.pib.gov.in/PressReleasePage.aspx?PRID=2154268

[15] https://indiankanoon.org/doc/1199182/

[16] https://www.scconline.com/blog/post/2025/11/08/deepfake-regulation-rights/

[17] https://dictionary.cambridge.org/dictionary/english/safe-harbour

[18] Section 2(w) of IT Act, 2000

[19] https://indiankanoon.org/doc/110813550/

[20] https://www.meity.gov.in/static/uploads/2024/10/91f628cb778f94e76df356bc3fd3ac60.pdf

[21] https://globalfreedomofexpression.columbia.edu/cases/kunal-kamra-v-union-of-india/

[22] https://cybercrime.gov.in/Webform/FAQ.aspx

[23] https://i4c.mha.gov.in/

[24] https://sahyog.mha.gov.in/

[25] https://gac.gov.in/CMSData/CMSContent?qs=h/yhm1mKjnDu/Wy+5eb5/g==

[26] https://www.livelaw.in/pdf_upload/arijit-singh-vs-codible-ventures-llp-552701.pdf

[27] 2025 SCC OnLine Del 3727, https://www.scconline.com/blog/post/2025/05/29/delhi-high-court-ankur-warikoo-john-doe-injunction-deepfake-ai-misuse-legal-news/

[28] A type of legal order that allows a person or entity to take legal action against an unknown party or parties.

[29] https://www.thehindu.com/news/cities/mumbai/actor-akshay-kumar-seeks-bombay-high-courts-protection-against-deepfake-misuse/article70168087.ece

[30] https://www.bechbruun.com/en/news/news/the-danish-copyright-act-new-ban-on-deepfakes-and-protection-of-artistic-performances

[31] https://www.thehindu.com/news/international/what-is-take-it-down-act-donald-trump-to-combat-revenge-porn/article69596373.ece

[32] https://www.chinalawtranslate.com/en/ai-labeling/#gsc.tab=0

[33] https://digital-strategy.ec.europa.eu/en/policies/code-practice-ai-generated-content

[34] https://artificialintelligenceact.eu/article/50/

For more information please contact us at: info@ssrana.com