By Anuradha Gandhi and Isha Sharma
Introduction
The proliferation of deepfakes has become an escalating concern across the world. According to a recent survey by McAfee, over 75% of Indians have encountered some form of deepfake content in the past 12 months, with a staggering 38% falling victim to deepfake scams.[1]
With the recent Lok Sabha elections and sporting events such as the Indian Premier League (IPL), the actual number of people exposed to deepfakes is now believed to be much higher, given that many Indians are unable to decipher what is real and what is fake owing to the sophistication of artificial intelligence (AI) technologies.
The survey was conducted in early 2024 to assess the impact of AI and the rise of deepfakes on consumers’ daily lives. It found that nearly 1 in 4 Indians (22 per cent) said they had recently come across videos that they later discovered to be fake.
Further, the data revealed that nearly 8 out of 10 (80 per cent) people are more concerned about deepfakes than they were a year ago. More than half (64 per cent) of respondents say AI has made it harder to spot online scams, while about 30 per cent feel confident they could distinguish a real voicemail or voice note from one generated with AI.
Moreover, there has been a concerning surge in cases of deepfake scams impersonating both ordinary users and prominent public figures across various domains, including businesses, politics, entertainment and sports.
Beyond being a prevalent vector of misinformation, the rapid spread of deepfakes carries its own economic burden. A study conducted by the University of Baltimore and the cybersecurity firm CHEQ found that fake news cost the global economy $78 billion in 2020.[3]
PIL filed before the Hon’ble Delhi High Court
According to a report dated May 08, 2024, senior journalist Mr. Rajat Sharma initiated legal action against the unregulated proliferation of deepfake technology in India by filing a Public Interest Litigation (PIL)[4] before the Hon’ble Delhi High Court.[5]
The PIL came in the wake of Mr. Sharma himself being targeted by a deepfake.[6] The journalist’s likeness was used in a fake video circulated on social media, promoting suspicious medical advice on diabetes and weight-loss treatment.[7]
The PIL filed by Mr. Sharma highlights the urgent need for regulatory measures to address the growing threat posed by deepfake technology to various aspects of society.
“All these threats are compounded when a deepfake is made of an influential person such as politician, sportsman, actor or any other public figure capable of influencing public opinion,” the petitioner said, and added, “The threat increases more in the case of a person who is visible on television daily and on whose statements the public has come to place faith in.”[8]
In the plea, Mr. Sharma emphasized the detrimental impact of deepfakes on individuals and their fundamental rights guaranteed under the Constitution of India, including the right to freedom of expression, the right to privacy and the right to a fair trial.
Mr. Sharma urges the court to direct the Ministry of Electronics and Information Technology (MeitY) to take immediate action by blocking public access to mobile applications, software, platforms and websites facilitating the creation of deepfake content, and further urges the government to establish regulatory frameworks to define and classify deepfakes.
Contentions in the PIL
- In the plea, Mr. Sharma advocates not only for the blocking of platforms facilitating deepfake creation but also for the establishment of a dedicated government Nodal Officer to promptly address complaints pertaining to deepfakes.
- The PIL calls for stringent measures to ensure the swift removal of deepfake content and transparency regarding its AI-generated nature.
- The PIL seeks to urge the government to mandate that apps and platforms enabling the production of deepfakes clearly disclose the AI-generated nature of such content, using methods such as watermarks or other effective means of identification.
The High Court’s Response to the PIL
The court took cognizance of the pressing issue highlighted by Mr. Sharma.
The High Court said it was a “major problem” and sought to know from the central government if it was willing to act on the issue.
“Political parties are complaining about this as well,” the High Court said.
The division bench, comprising the Hon’ble Acting Chief Justice Manmohan and Hon’ble Ms. Justice Manmeet Pritam Singh Arora, issued notice to the Union Government through MeitY, seeking its response on the matter within four weeks, and listed the matter for the next hearing on July 09, 2024.[9]
Lacuna in the DPDPA
The failure to regulate deepfakes and provide redressal to their victims not only jeopardizes citizens’ rights and safety but also undermines the integrity of democratic institutions and societal trust. Mr. Sharma claims that despite the Centre’s stated intent to formulate regulations for dealing with deepfakes and synthetic content, no concrete steps have been taken thus far, underscoring the urgency of the issue.
The plea further drew attention to the limitations of current data protection law, citing the Digital Personal Data Protection Act, 2023, which excludes publicly available data from its purview.
“India’s data protection legislation, the Digital Personal Data Protection Act, 2023, does not protect publicly available data. According to Section 3(c)(ii) of the Act, it does not apply to personal data that users have intentionally made publicly available. For instance, if a blogger shares personal information on social media, this data processing falls outside the data protection law’s jurisdiction,[10]” the plea said.
The Lawyer’s Voice Case: Another Deepfake PIL
This is not the only petition filed to curb the menace of deepfakes. A deepfake video featuring the Hon’ble Home Minister making controversial statements gained significant traction across social media platforms during the recent elections. Taking that into consideration, a PIL was filed by the organization Lawyers Voice seeking directions to the Election Commission of India (ECI) and the Union of India to formulate and implement the necessary guidelines on the pervasive use of deepfake technologies in political campaigns.[11]
Though no specific guidelines have been issued to the ECI, the court suggested measures such as taking action against accounts repeatedly posting fake videos and exploring dynamic injunctions to disable retweets of such content.
For further details on the measures initiated by the ECI in response to the spread of misinformation, refer to our article titled “PIL and ECI Response on Deepfakes”.
Where does Indian law stand when it comes to deepfakes?
Through the present PIL, the petitioner, Mr. Sharma, contends that existing legislation is insufficient to effectively address the emerging threat of deepfakes, highlighting the lack of a dedicated mechanism to combat them in India. While there is no specific legislation on the matter, deepfakes can be tackled through a multitude of provisions available under existing laws, including the Indian Penal Code (IPC) and the Information Technology Act (IT Act).
- Under the IPC, deepfakes can be prosecuted under the provisions on cheating by personation (Section 419), cheating (Section 420), forgery for the purpose of cheating (Section 468) and defamation (Section 499).
- Under the IT Act, Sections 66C, 66E and 67 can be applied, which deal with identity theft, violation of privacy, and the publishing or transmission of obscene material, respectively.
- Further, on November 07, 2023, the Union Government issued an advisory to all social media platforms, directing them to identify misinformation and deepfakes and remove such content from their platforms within 36 hours of it being reported.[12][13]
Legal stance on Deepfakes in other countries
On April 16, 2024, the UK Government announced a new offence criminalizing the creation of sexually explicit deepfakes, to be introduced through an amendment to the Criminal Justice Bill. As a result, those who create such images of people without their consent will face a criminal record and an unlimited fine; where the image is further distributed or circulated, offenders could face imprisonment as well.[14]
The EU, meanwhile, has taken a proactive approach to deepfake regulation: the EU AI Act has been formally adopted by the European Parliament, a landmark piece of legislation that not only regulates AI but is also expected to serve as a template for multiple jurisdictions.[15]
In January 2023, China adopted expansive rules mandating that manipulated materials have the subject’s consent and bear digital signatures or watermarks.[16][17]
Conclusion
As the legal proceedings unfold, it is imperative for the government to heed the court’s concerns and take decisive action to regulate deepfake technology effectively. This entails not only enacting legislation to prohibit the creation, distribution, and dissemination of deepfakes for malicious purposes but also implementing mechanisms to provide recourse to victims of deepfake-related offenses.
The Delhi High Court’s intervention in response to the petitioner’s PIL highlights the gravity of the threat posed by deepfake technology and the urgent need for regulatory interventions to safeguard against its misuse.
Ahana Bag, Junior Associate at S.S. Rana & Co., has assisted in the research of this article.
[5] Ibid
[7] Ibid
[8] http://www.uniindia.net/~/journalist-rajat-sharma-moves-delhi-hc-on-govt-s-no-control-on-deepfake-technology/India/news/3195262.html
[9] https://www.cnbctv18.com/technology/delhi-hc-pil-from-journalist-rajat-sharma-over-deepfake-videos-19408682.htm
[10] Ibid
[11] https://ssrana.in/articles/pil-and-eci-response-on-deepfakes/
[12] https://ssrana.in/articles/remedies-and-deepfakes-prevention-protection-and-redressal/#_ftn10
[13] https://ssrana.in/articles/remedies-and-deepfakes-prevention-protection-and-redressal/
[14] https://ssrana.in/articles/uk-government-criminalizing-sexually-explicit-deepfakes/
[15] https://ssrana.in/articles/eu-parliament-final-nod-landmark-artificial-intelligence-law/
[16] https://www.nytimes.com/2023/01/22/business/media/deepfake-regulation-difficulty.html
[17] https://iapp.org/news/b/chinas-deepfake-regulation-takes-effect-jan-10