An analysis of the use of deepfakes as a tool of malevolence to target women: with a special reference to the Taylor Swift incident

  • Posted on February 12, 2024

By Anuradha Gandhi and Isha Sharma

Introduction:

Taylor Alison Swift, known to the world as Taylor Swift, is an American singer-songwriter. She is primarily known for her pop music and is also renowned for her ability to capitalize upon her talent. Bloomberg estimates her net worth at a billion USD[1], and she is often described as one of the richest musicians on earth. Her influence over the world of music is immense, and she was recently named Time magazine’s Person of the Year for 2023.

On January 24, 2024, several lascivious deepfake images of Taylor Swift surfaced on multiple social media websites, causing widespread outrage. These images showed the singer in compromising situations, and reports suggested that one image on X remained active for about 17 hours, gathering 37 million views before the platform removed it.

Microsoft CEO Satya Nadella condemned these images as “alarming and terrible,” emphasizing the urgency of addressing the issue. In an interview, Nadella said:

I would say two things: one is, again, I go back to what I think is our responsibility, which is all of the guardrails that we need to place around the technology so that there’s more safe content being produced. And there’s a lot to be done, and a lot being done there. But it is about global, societal convergence on certain norms. And we can do that, especially when you have law and law enforcement and tech platforms that can come together. I think we can govern a lot more than we give ourselves credit for.[2]

While these social media platforms condemned the act, it is pertinent to ask what dangers such miscreants pose to women. This article seeks to explore that question while also suggesting actions against such deepfakes.

AI deepfakes, a gendered issue of pressing concern:

The display of deepfakes of a woman with as much influence as Taylor Swift on multiple social media platforms raises the question of how ordinary women can protect themselves from AI-generated deepfakes of themselves. To understand this, we must first understand the concept of deepfakes.

A “deepfake” (a portmanteau of “deep learning” and “fake”) is an artificially generated output from a computer program that uses Artificial Intelligence, or AI, typically to superimpose one person’s likeness onto another’s. The use of deepfakes has been on the rise: as per one study, there were over 95,000 deepfake videos available as of 2023, a growth of over 550% compared to 2019. In addition, 98% of deepfake videos are pornographic in nature, 99% of the individuals targeted in deepfake pornography are women, and 94% of those featured work in the entertainment industry. The easy availability of deepfake pornography has also ensured its popularity around the world, with over 48% of men surveyed in the US stating that they had viewed deepfake pornography at least once[3].
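Taking the quoted figures at face value, the reported 550% growth implies a 2019 baseline of roughly 14,600 videos; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check on the figures quoted above: ~95,000 deepfake
# videos in 2023 after 550% growth since 2019 implies a 2019 baseline of
# 95,000 / (1 + 5.5), i.e. roughly 14,600 videos.
videos_2023 = 95_000
growth = 5.50  # 550% growth over the 2019 figure

videos_2019 = videos_2023 / (1 + growth)
print(round(videos_2019))  # → 14615
```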

Beyond availability, the anonymity and relative immunity the internet affords embolden miscreants to create these AI-based deepfakes. One in three deepfake tools allows users to create pornography, and as per reports, it takes less than 25 minutes and costs nothing to create a 60-second deepfake video of a person.

Impact of deepfake pornography on women: a societal analysis:

In early March 2023, a streamer who went by the name “Atrioc” was caught red-handed viewing deepfake pornography of fellow Twitch streamers and colleagues. In an apology video, he said that he was “morbidly curious” and that the easy accessibility of the AI tool he used was largely to blame[4].

The use of deepfakes to silence women[5]:

Using the faces and bodies of women in the aforementioned manner not only degrades the women concerned but also reinforces a toxic culture of misogyny. Take, for instance, the online deepfake pornography tool “DeepNude”. This application uses Generative Adversarial Networks, or GANs, to generate natural-looking depictions of the nude bodies of its victims. Since its inception, over 104,000 fake nudes of female victims have been created by the app, with absolutely no consent from the victims whatsoever. Most shockingly, many of these victims were children, making the application a platform for images that fueled pedophilia[6].
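For readers curious about the mechanism, the adversarial training behind GANs can be illustrated with a toy one-dimensional sketch: a “generator” learns to mimic a target distribution while a “discriminator” learns to tell real samples from generated ones. This is an illustrative simplification only; real deepfake tools use deep convolutional networks in place of the two-parameter models below.

```python
import math
import random

# Toy one-dimensional sketch of a Generative Adversarial Network (GAN).
# The generator maps noise z to a sample a*z + b; the discriminator scores
# a sample as "real" with probability sigmoid(w*x + c). The two models are
# trained against each other, which is the core idea behind GAN-based tools.

random.seed(0)

def sigmoid(x):
    # Numerically stable logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    ex = math.exp(x)
    return ex / (1.0 + ex)

a, b = 1.0, 0.0                 # generator parameters
w, c = 0.0, 0.0                 # discriminator parameters
lr = 0.01                       # learning rate for both players
real_mean, real_std = 4.0, 0.5  # the "real" data distribution

for step in range(5000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    x_real = random.gauss(real_mean, real_std)
    x_fake = a * random.gauss(0.0, 1.0) + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1.0 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1.0 - d_real) - d_fake)

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    grad = (1.0 - sigmoid(w * x_fake + c)) * w  # gradient w.r.t. x_fake
    a += lr * grad * z
    b += lr * grad

print(f"generator offset b = {b:.2f}; real data mean = {real_mean}")
```

As the generator’s output drifts toward the real distribution, the discriminator’s advantage shrinks; the same dynamic, scaled up to images, is what makes GAN output look natural.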

Because of their ability to incite fear and chaos, deepfakes are now being used to blackmail women and subvert their public image. Their availability and ease of use mean that deepfakes are increasingly deployed to harass and inhibit women who hold influence, like Taylor Swift, or women who reject advances, as in numerous cases of revenge pornography.

Rana Ayyub, an investigative journalist in India, recently became a victim of deepfake technology[7]. Morphed images of her were circulated on the internet, and when asked about them, she stated that she was in disbelief.

Deepfakes have also been used to silence women by spreading fake content. In 2019, a deepfake video of Nancy Pelosi was shared widely on social media[8]. In this video, Pelosi appeared drunk and slumped over; her voice was also edited to make it sound as if she were slurring her speech, thereby undermining her stature as a strong political leader.

The social impacts of deepfakes include an increase in threats to the victims concerned, as well as psychological distress ranging from psycho-affective harm and mental health issues to suicidal ideation and damaged self-perception. These issues were starkly highlighted in the case of Noelle Martin, who was forced to live with the consequences of a person creating deepfake pornography of her during her time in law school.

To learn more about deepfakes, please click on the links attached below:

https://ssrana.in/articles/deepfakes-and-breach-personal-data/

https://www.livelaw.in/law-firms/law-firm-articles-/deepfakes-personal-data-artificial-intelligence-machine-learning-ministry-of-electronics-and-information-technology-information-technology-act-242916

https://ssrana.in/articles/meitys-advisory-unveiled-to-tackle-deepfake-menace/

https://ssrana.in/articles/nobody-is-safe-deepfake/

https://ssrana.in/articles/manipulation-ai-lack-of-regulations/

Regulations to combat Deepfakes:

Many countries do not have specific laws to combat deepfakes. However, the following countries do have some regulations in place for the same:

1) USA:

  • The USA pushed for a federal AI Bill of Rights on October 23, 2023. Article 7 of this framework specifically emphasizes the protection of civil rights in the context of AI technology and recognizes AI-based discrimination, including discrimination on the basis of sex, as a problem of concern.
  • Article 2 of this Bill of Rights lays down 8 basic principles to guide the growth of AI, one of which is advancing equity through its use.
  • The laws relating to AI vary from state to state in the USA: some states, like California, have enacted AI-related laws, while many others are still in the process of doing so.

2) China[9]:

  • On November 25, 2022, the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology (MIIT), and the Ministry of Public Security (MPS) jointly issued the Provisions on the Administration of Deep Synthesis of Internet-based Information Services.
  • These provisions specifically identify deep synthesis service providers and users, specify permissible uses of the technology, and mandate the protection of privacy and personal information, with minors singled out for particular protection.

3) Korea[10]:

  • Korea banned the dissemination of deepfakes in 2020 via special legislation. People who create such deepfakes can now be punished with imprisonment of up to 5 years or a fine of up to 43,000 USD.
  • Korea has also become one of the first countries to convict a person for creating AI deepfakes. The unnamed individual, about 40 years of age, received a two-and-a-half-year sentence from the Busan District Court for generating over 360 sexually explicit images of children via AI[11].

4) India:

  • The current laws invoked to combat deepfakes include section 66D of the IT Act (which prohibits cheating by personation via a computer resource), section 66C (identity theft), and section 67 (under which the publisher of a deepfake could be held guilty of publishing obscene material). Section 3 of the Digital Personal Data Protection (DPDP) Act, 2023 further provides that personal data cannot be processed without the consent of the individual. Section 66A of the IT Act (which penalized sending information known to be false via a computer device to intimidate or cause annoyance) is also often cited, though it was struck down by the Supreme Court in Shreya Singhal v. Union of India (2015).
  • In addition, the IT Rules, 2021 specifically prohibit the dissemination of deepfakes via rule 3(b)(ii), which requires intermediaries not to allow the publishing of defamatory, pornographic or obscene material concerning a person. The rules also require intermediaries to maintain an effective grievance redressal mechanism to address user complaints.
  • India recently released an advisory on deepfakes on December 26, 2023[12]. The advisory calls on all intermediaries to prohibit the dissemination of deepfakes on their platforms as per the IT Amendment Rules, 2023, which build on the IT Rules, 2021, and to make clear in their privacy policies that the dissemination of deepfakes violates the IT Rules.

The rules of this advisory are yet to be notified. To learn more about the advisory on deepfakes, please click on the link attached below: https://ssrana.in/articles/meitys-advisory-unveiled-to-tackle-deepfake-menace/

Industry Initiatives:

The focus has now shifted to how deepfakes can be detected and what can realistically be done by platforms (especially social media), which are the most popular and effective channels for disseminating deepfakes. Of late, quite a few leading tech companies have released statements about their efforts in this direction. To learn more about the current state of deepfake regulation across media, with reference to data on patents for deepfake technology as well as for deepfake detection and regulation technology, kindly refer to our article: https://www.barandbench.com/law-firms/view-point/how-to-not-get-away-with-deepfakes-patents-lead-the-way

Various social media platforms and companies such as Alphabet, Meta and Microsoft have planned measures to deal with deepfakes. Take, for instance, Meta’s announcement of February 6, 2024[13]: Meta stated that it would start labelling deepfake or AI-generated images posted on Facebook, Instagram and Threads as “Imagined with AI”. Google likewise hopes to curb the spread of deepfakes via its detection algorithms, and many companies are following suit[14].

Conclusion:

With AI getting better at pattern recognition and image processing, it is only a matter of time before AI-generated content becomes indistinguishable from real footage. Many governments have been reluctant to regulate or curtail this technology, believing that doing so would inhibit the development of AI. However, with such serious issues at stake, it can be concluded that such a stance is not only unwise but also dangerous[15].

The misuse of deepfakes could force governments to stop companies from developing the technology further, which would only stifle innovation in the long run.

In a bid to address the rising concerns surrounding deepfake technology, social media giants have recently suggested a nuanced approach to government regulation. Rather than enforcing a sweeping ban on all deepfake content across the internet, these companies advocate targeted measures against content released with criminal or malicious intent, thereby aiming to strike a balance between preserving innovation and safeguarding against the potential harms posed by deepfakes. BSA, a Washington DC-headquartered software industry group, said that business-to-business and enterprise software services may not pose the same risk, and that the government should consider authenticity solutions rather than a blanket ban. In a letter, the group stated that not all intermediaries have the same capabilities or risk profiles to justify a blanket ban, and that the government should approach the problem with mitigation strategies[16].

Akshay Krishna P, Assessment Intern at S.S. Rana & Co. has assisted in the research of this Article.

[1] https://edition.cnn.com/taylor-swift-billionaire/index.html#:~:text=The%20outlet%2C%20which%20runs%20the,just%20her%20music%20and%20performance.

[2] https://timesofindia.indiatimes.com/gadgets-news/what-microsoft-ceo-satya-nadella-has-to-say-on-taylor-swifts-explicit-ai-images/articleshow/107192174.cms

[3] https://www.homesecurityheroes.com/state-of-deepfakes/#key-findings

[4] https://www.dexerto.com/twitch/atrioc-returns-to-twitch-six-weeks-after-deepfake-controversy-working-on-dmcatakedowns2086445/#:~:text=Atrioc%20was%20embroiled%20in%20a,after%20seeing%20ads%20promoting%20it

[5] https://ojs.scholarsportal.info/ontariotechu/index.php/dll/article/view/218/144

[6] https://ojs.scholarsportal.info/ontariotechu/index.php/dll/article/view/218/144

[7] https://ojs.scholarsportal.info/ontariotechu/index.php/dll/article/view/218/144

[8] https://ojs.scholarsportal.info/ontariotechu/index.php/dll/article/view/218/144

[9] https://www.china-briefing.com/news/china-to-regulate-deep-synthesis-deep-fake-technology-starting-january-2023/

[10] https://www.responsible.ai/post/a-look-at-global-deepfake-regulation-approaches

[11] https://edition.cnn.com/2023/09/27/asia/south-korea-child-abuse-ai-sentenced-intl-hnk/index.html

[12] https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1990542#:~:text=On%2017th%20November%2C%20the%20Prime,and%20amended%20in%20April%202023

[13] https://economictimes.indiatimes.com/tech/technology/meta-to-start-labelling-ai-generated-deepfake-images-hopes-move-will-pressure-industry-to-follow-suit/articleshow/107462481.cms?from=mdr

[14] https://timesofindia.indiatimes.com/gadgets-news/deepfakes-in-india-google-explains-how-it-plans-to-fight-fake-ai-generated-content/articleshow/105589233.cms

[15] https://www.homesecurityheroes.com/state-of-deepfakes/#key-findings

[16] https://economictimes.indiatimes.com/tech/technology/firms-say-instead-of-blanket-ban-axe-deepfakes-with-ill-intent/articleshow/107407547.cms?from=mdr