SSRana Newsletter 2023 Issue 13

December 7, 2023
Artificial Intelligence and Machine Learning

AI’s Got Talent! IP Protection in the times of Deep Fakes and Machine Learning


By Nihit Nagpal and Akif Abidi

“Imitation is the sincerest form of flattery that mediocrity can pay to greatness”. -Oscar Wilde

The transformative developments in Artificial Intelligence and Machine Learning technology have opened yet another avenue that policy thinkers and IP advocacy groups must reconcile with the existing IP regime.

The rise of Generative AI has taken the media space by storm, with various A-list actors, announcers, broadcast journalists, dancers, DJs, news writers, news editors, program hosts, puppeteers, recording artists, singers, stunt performers, voiceover artists and other media professionals looking for ways to protect their likeness, voice and face, and to bring in restrictions on the use of artificial intelligence.

SAG AFTRA Protests, Deepfakes and Personality Rights

In the third quarter of 2023, the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) had been on strike for over 140 days. The members were protesting the loss of pay when AI was used to create scripts and the actors working on such scripts were paid differently. As per the truce with the Writers’ Guild of America, it was decided that ‘Hollywood can use generative AI to create scripts or stories, but human writers asked to work on them will be paid as if they worked on any other gig. Writers can’t be compelled to use AI, but can choose to do so by agreement.’

The SAG-AFTRA protests have immense significance: they denote resistance to the changing paradigms of generative AI, which have affected the livelihood of SAG-AFTRA members.

Another raging development is the rise of deepfake technology. Deepfakes are artificial digital media created using superimposed images of a person, manipulating the facial appearance to give the media that person’s likeness, and lending it so authentic a digital look that it becomes nearly impossible to tell which media is real and which is synthetic.

The widespread usage of deepfake content has created problems such as attacks on personal rights and violations of intellectual property rights and personal data protection. Deepfakes have also been used in cases of revenge porn, politically manipulated media, interference in elections, and the like.

Owing to the aggressive proliferation of deepfakes in the media, various celebrities have decided to copyright their face, voice and likeness to protect against the unauthorized commercialisation of their Intellectual Property.

Whether it is Bruce Willis being deepfaked into speaking with a Russian accent, Robin Williams being posthumously recreated on social media, or Nicki Minaj and Tom Holland being featured in a deepfake clip titled ‘Deep Fake Neighbour Wars’, celebrities are battling deepfakes of themselves, leading to a rising need for the protection of their personality rights. There are further concerns that studios such as Netflix, Universal, Amazon and Apple are proposing to use AI to replace humans with digital scans.

Keira Knightley, while discussing the SAG-AFTRA protests, also stated that she would pursue copyrighting her face, as she is concerned about the rise of artificial intelligence. Deepfake technology has threatened to use the voices and likenesses of celebrities in creating social media adverts, fake songs and, in many instances, pornography.

Recently, a deepfake was created in which Bollywood actor Ayushmann Khurrana’s image interacted with shoppers. This out-of-the-box marketing strategy is expected to soon be replicated by other brands; hence, the need to understand and regulate deepfake technology is imperative.

Deep Troubles of Deepfake Technology

Deepfake technology has been spreading fast and falling into nefarious hands. These threats come in the form of fraudulent transactions, false representation of high-profile figures and, in the worst-case scenario, extortion.

Recently, in July 2023, the Indian police had to deal with their first reported case of impersonation using deepfake technology. An elderly person was duped of his bank savings when he received an AI-generated deepfake video call resembling a colleague, who requested a large sum of money. On verification, it was later found that the video call was a deepfake and that the purported colleague had no knowledge of it.

Various sporadic cases of phishing, defamation and disinformation campaigns have been observed, causing considerable problems.

The issues around deepfake technology again surfaced when Bollywood actress Rashmika Mandanna’s deepfake video began doing the rounds on various social media platforms. In the absence of any specific legislation against deepfake technology, an FIR was registered by the Delhi Police’s Special Cell under Sections 465 (forgery) and 469 (forgery for the purpose of harming reputation) of the Indian Penal Code and Sections 66C and 66E of the Information Technology Act.

This issue spurred the Ministry of Electronics and Information Technology, which notified another advisory reiterating that all social media intermediaries must ensure that the users of their platforms do not host any content which impersonates another person, and that any such content uploaded must be taken down within 24 hours of receipt of a complaint against it, as per Rule 3(1)(b)(vii) and Rule 3(2)(b) of the Intermediary Guidelines and Digital Media Ethics Code Rules, 2021.

Unsurprisingly, within days of the Rashmika Mandanna incident, deepfake videos featuring other actors, such as Katrina Kaif and Kajol, have surfaced. The Ministry of Electronics and Information Technology has in response swung into action and sent an advisory to social media platforms warning that the safe harbour clause shall remain ineffective should there be any lapses on the intermediaries’ side in removing the deepfakes doing the rounds online.

Copyright of Face, Voice, Likeness

One creative way to counter artificial intelligence’s appropriation of the likenesses and moving images of celebrities is through the registration of personality rights.

With the development of jurisprudence regarding the protection granted to personality rights, courts globally have been proactive in granting exclusive commercial rights over the personality of a celebrity.

Recently, in Anil Kapoor vs. Simply Life India and Ors.[1], the Hon’ble Delhi High Court took an expansive approach and restrained the defendants from “utilising his name, image, voice, likeliness or personality to make any merchandise, ringtones, or in any manner misuse the plaintiff’s name, voice and other elements by using technological tools such as artificial intelligence, face morphing, GIFs, either for monetary gains or for creating any videos for commercial purpose so as to result in violation of plaintiff’s rights.”

The Hon’ble Delhi High Court directed that adequate charges be paid to Anil Kapoor for the usage of his name, voice and likeness. The Court, however, maintained a distinction: satirical writing and genuine criticism shall be protected and would not be held to violate personality rights. The key distinguishing factor is the scope and extent of commercialisation of personality rights without any charges being paid to the celebrity.

Way Ahead

With the overabundance of content churned out by Generative AI, there is an overwhelming need to holistically understand how to utilize the potential of Generative AI in the right direction.

The recognition of exclusive commercial/marketing rights over one’s face, voice, likeness, etc. would enable that person to restrain any organization or individual from using the personality traits of that person for commercial benefits.

This exclusive marketing right would also correspond to the legal right of the individual to assign or license the said right to any company in exchange for compensation.

In an era marked by rapid technological advancement, the protection of intellectual property rights faces new and complex challenges. Machine learning, artificial intelligence and the advent of deepfakes have re-written the rulebook for creators, innovators and businesses worldwide. As we navigate these uncharted waters, it is clear that the future of IP protection demands adaptability and forward-thinking solutions.

While the road ahead may be fraught with obstacles, it is also ripe with opportunities for innovation and collaboration. By embracing change, updating our legal frameworks, harnessing technology, and promoting ethical awareness, we can ensure that intellectual property rights remain a cornerstone of creativity, invention, and progress in the digital age.

[1] CS(COMM) 652 / 2023


SEBI’s Regulatory Initiative: Navigating Landscape of Finfluencers for Investor Protection


By Rupin Chopra and Shantam Sharma


In the vast expanse of our social media feeds, where memes and cat videos compete for attention, a new breed has emerged—financial influencers, affectionately known as ‘finfluencers.’ These are the folks who claim to have cracked the code to financial success while we mere mortals are still struggling to decipher our monthly utility bills. Almost all of us, while endlessly scrolling through our social media feed, have heard phrases like, “How do you know all this? Because I follow Finance with ABC/XYZ!” It’s as if financial wisdom is hidden in the depths of Instagram stories and Twitter threads.

However, the surge in popularity of finfluencers has brought to light various concerns, prompting regulatory bodies like the Securities and Exchange Board of India (SEBI)[1] to take proactive measures. This article explores SEBI’s recently released consultation paper[2], aiming to critically analyze the implications for investor protection and market integrity.

The Finfluencer Landscape

Finfluencers are individuals who, through digital platforms such as YouTube, Instagram, Telegram, or other mediums, provide information and advice on financial topics. The rapid growth of retail investors[3] in the stock market, especially during the uncertainties of the Covid-19 pandemic, fueled the ascent of finfluencers in 2020. With followers ranging from thousands to millions, these influencers have become influential players in shaping financial decisions.

Concerns Surrounding Unregistered Finfluencers

SEBI’s attention has been drawn to the activities of unregistered finfluencers who operate without the necessary regulatory oversight. While some finfluencers genuinely aim to educate their followers, many operate as unauthorized Investment Advisers (IAs) or Research Analysts (RAs). The lack of regulatory scrutiny raises questions about the qualifications, expertise, and potential conflicts of interest these influencers may have.

For instance, consider a scenario where an unregistered finfluencer endorses a particular financial product without disclosing their association with it. This lack of transparency poses a risk to investors who may unknowingly act on biased or misleading advice.

Disruption of the Revenue Model

SEBI’s proposed measures are designed to disrupt the revenue model for unregistered finfluencers, particularly those who receive compensation for promoting financial products or services. This compensation may come in various forms, including referral fees, non-cash benefits, or profit-sharing arrangements with the underlying product or service.

To illustrate, an unregistered finfluencer might receive a referral fee for each user who engages with a promoted product or service. The undisclosed nature of such arrangements raises concerns about the objectivity of the financial advice provided, potentially leading investors to make decisions influenced by hidden incentives.

Guidelines and Registration for Finfluencers

SEBI’s regulatory framework suggests that finfluencers should register with the regulatory body and adhere to specific guidelines. This includes displaying their registration number, contact details, and investor grievance redressal helpline on their social media platforms. Additionally, finfluencers are expected to make appropriate disclosures and disclaimers in their posts.

For instance, a registered finfluencer might include a disclaimer highlighting their commitment to SEBI’s code of conduct and regulatory guidelines. This move aims to enhance transparency and accountability in the content shared by finfluencers, setting a standard for responsible financial advice.

Addressing Conflicts of Interest

Conflicts of interest have emerged as a central concern in SEBI’s regulatory framework. The consultation paper proposes restrictions on SEBI-registered intermediaries sharing confidential client information with unregistered entities, including finfluencers.

Consider a scenario where a registered intermediary collaborates with an unregistered finfluencer without adequately safeguarding client information. This lack of protection may compromise investor interests, emphasizing the need for stringent measures to ensure the confidentiality of client data.

Creating a Closed Ecosystem for Fee Collection

SEBI’s proposal includes the creation of a closed ecosystem for fee collection by SEBI-registered intermediaries. This ecosystem aims to ensure that investors’ payments reach only registered IAs and RAs, preventing unregistered entities, including finfluencers, from accessing the closed ecosystem.

This safeguard allows investors to identify and avoid potential risks associated with unregulated finfluencers. By restricting fee collection to registered intermediaries, SEBI aims to create a more secure and reliable environment for investors seeking financial guidance.

Industry Responses and Expert Opinions

Industry experts and market participants have responded positively to SEBI’s move to regulate finfluencers. A spokesperson[4] from a leading wealth management firm emphasized the significance of the regulatory initiative in ensuring that investors receive accurate and unbiased information.

For instance, the capital markets regulator’s proposal to ban unregistered finfluencers from partnering with mutual funds and stockbrokers for promotional activities has been viewed as a step toward preserving authenticity and reducing fraudulent practices.

A representative from an online wealth management platform[5] highlighted the potential conflict of interest within the finfluencer community. They emphasized the importance of clear disclosure regarding the financial arrangements between finfluencers and platforms they promote.

Another industry expert acknowledged the impact of finfluencers on their followers’ financial decisions. The proposed regulatory framework by SEBI is seen as a crucial step in holding finfluencers accountable and responsible for the advice they provide.


In conclusion, SEBI’s proactive regulatory measures regarding finfluencers underscore the commitment to investor protection and market integrity. As the financial landscape continues to evolve, regulatory frameworks must adapt to address the challenges posed by the dynamic nature of digital financial advice dissemination.

SEBI’s proposed framework, with its focus on registration, disclosure, and restrictions on associations, represents a significant step toward creating a transparent and accountable environment for investors. By disrupting the revenue model for unregistered finfluencers and ensuring stringent guidelines, SEBI aims to mitigate the risks associated with biased or misleading financial advice.

As the consultation process unfolds, the financial industry awaits a robust regulatory framework that not only safeguards investor interests but also fosters a culture of responsible and ethical financial content dissemination in the digital age.

[1] Available at:
[2] Ibid (1)
[3] Available at:
[4] Available at:
[5] Ibid (4)


India sets the deepfake crackdown in motion


By Anuradha Gandhi and Rachita Thakur


In its attempt to curb the menace of deepfakes, the Union Government has set the deepfake crackdown in motion. The Union Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, said on November 24, 2023 that the Ministry of Electronics and Information Technology (MeitY) and the Centre will nominate a ‘Rule Seven Officer’ and will expect 100% compliance from all the platforms.

In continuation, the MeitY has further announced that it will give social media intermediaries just seven days to align their terms of service and other policies with Indian laws and regulations to address issues relating to the hosting of deepfakes on their platforms.

Meeting with Social Media Intermediaries

The decision comes a day after the Union IT Minister, Ashwini Vaishnaw, held a meeting with social media and technology companies, an industry body and academics on November 23, 2023. It must thus be noted that the government has initiated the drafting of regulations and will notify the same within a few weeks, as informed by the IT Minister.

The proposed legislation will rest on four pillars, namely detection, prevention, reporting and awareness, along with a more proactive and time-sensitive reporting mechanism to mitigate damage. The social media intermediaries have also been instructed to submit their plans for dealing with deepfakes and to give suggestions for the regulation. The next meeting is likely to be held in the first week of December 2023.

However, on November 24, 2023, citing the aforementioned discussion, the Union Minister of State, Rajeev Chandrasekhar, spoke about the way to crack down on deepfakes, expecting 100% compliance from the social media platforms as far as deepfakes are concerned. While addressing the media, the Union Minister of State informed that the intermediaries have agreed that the current regime under the Information Technology Act and the rules thereunder provides for adequate compliance requirements on their part to deal with the issue of deepfakes, even though certain additional regulations are required given that the IT Act is 23 years old.

Rule Seven Officer

As reported, the Rule Seven Officer will be a person who will create a platform making it easy for citizens to bring to the attention of the Government of India allegations or reports of violation of law by the platforms. The Officer will be responsible for taking in information about digital platforms and responding accordingly.1

Assistance for filing of FIRs

The government will aid and assist citizens in filing First Information Reports (FIRs) against social media platforms for violating the Information Technology Rules (IT Rules) due to objectionable content like deepfakes.

[To read more about deepfakes, kindly refer to our article on the subject.]



Supreme Court mandates implementation of Hybrid hearings- India


By Bindra Rana and Rima Majumdar

“The Time and the World do not stand still, for change is the law of life and those who look only to the past or the present are certain to miss the future.”– John F. Kennedy, 1963

The aforesaid quote is all the more valid and relevant post COVID-19, which pushed industries across the world, including the legal fraternity, to adapt to a remote working setup. Recently, the Hon’ble Supreme Court of India took note of the shifting dynamics of the Indian Judicial System vis-à-vis the status of the hybrid mode of hearings, in the case of Sarvesh Mathur v. The Registrar General of Punjab and Haryana High Court.1

Hybrid Hearings: A mandate

The present Writ Petition was filed in the Hon’ble Supreme Court of India by an individual, Mr. Sarvesh Mathur, who raised a concern about the discontinuation of virtual hearings by the Hon’ble Punjab and Haryana High Court.

It was averred by the Petitioner that, for the duration of the COVID-19 pandemic, virtual hearings were widely adopted as a safety measure so that hearings could be conducted from anywhere within the country. The same proved to be a step in the right direction. However, the Petitioner brought to the Hon’ble Court’s notice that the Hon’ble Punjab and Haryana High Court had ceased to offer the facility of virtual hearings, as the pandemic had come to an end.

Considering the matter from a broader perspective instead of limiting it to the Hon’ble Punjab and Haryana High Court, vide its order dated September 15, 2023, the Apex Court first issued notice to the Registrars General of all the High Courts, the National Company Law Appellate Tribunal, the National Consumer Disputes Redressal Commission, and the National Green Tribunal, and directed them to file Affidavits detailing:

  • How many video conferencing hearings have taken place in the last three months; and
  • Whether any courts are declining to permit video conferencing hearings.

The Solicitor General was also requested to assist the court with data on hybrid hearings in the tribunals under various ministries of the Union Government on the next date of hearing.

Re: the High Courts

Pursuant to these directions, on October 06, 2023, the High Courts of Allahabad, Bombay, Calcutta, Chhattisgarh, Gujarat, Guwahati, Himachal Pradesh, Jharkhand, Karnataka, Kerala, Madhya Pradesh, Madras, Meghalaya, Orissa, Patna, Punjab & Haryana, Rajasthan, Sikkim, Andhra Pradesh, Telangana, Uttarakhand, Jammu & Kashmir and Ladakh filed their Affidavits.

Vide these Affidavits and the submissions of the Standing Counsels for the High Courts, it was observed that, while some High Courts were providing hearings through the hybrid mode, others were not. The inconsistencies in this regard were identified as under:

  • The absence of a uniform Standard Operating Procedure (SOP) clarifying the manner in which access to the electronic mode of hearing can be obtained.
  • Applications for a VC link have to be submitted in advance, sometimes even 3 days before the hearing.
  • The existing SOPs are also arbitrary. Some High Courts have VC Rules under which hybrid hearings are allowed only for advocates/parties-in-person who are 65 years of age or above, thereby unfairly disadvantaging younger lawyers.
  • Most High Courts do not provide Wi-Fi or internet connectivity to members of the Bar and litigants within the precincts of the High Court.
  • Links for video conferencing hearings are not provided in the cause-list.
  • Many High Courts do not have an online filing system, which would complement hybrid hearings.
  • Several High Courts, despite having VC facilities, do not provide hybrid hearings.

Re: the Tribunals

The Standing Counsel for NGT submitted that both the NGT Principal Bench and the Regional Benches provide hybrid hearings. The Additional Solicitor General appearing on behalf of the NCDRC also submitted that the tribunal is holding hybrid hearings.

The Standing Counsel for the NCLAT submitted that infrastructural requirements have to be upgraded to provide hybrid hearings, and that funds for the same have been sought from the Union Government. Along with directing the Finance Ministry and the President of the NCLAT to hold meetings to expedite the disbursement of funds, the Court also directed that the NCLAT shall hold hybrid hearings within a period of 4 weeks from the date of the present order.

However, prima facie, a simple internet search for VC links at the NCDRC showed that there are no links for hybrid hearings on its website.

Directions of the Hon’ble Court

In view of the aforesaid, the bench appointed two amici curiae, who were requested to collate all the information provided in the Affidavits of the High Courts and Tribunals into a tabulated chart so that further effective orders can be passed by the Court. The amici were also directed to contact the Registrars General of all the High Courts to obtain necessary information regarding the steps taken by them to facilitate e-filing.

In the interim, the bench passed the following directions:

  • No more denial of VC Links: Within two weeks of this order (October 06, 2023 order) being passed, no High Court shall deny access to hybrid mode of hearing to any member of the bar who is desirous of obtaining the same. For this, all State Governments were directed to provide necessary funds to the High Courts to put into place the facilities necessary for that purpose.
  • Sufficient internet facilities: All the High Courts were instructed to ensure that adequate internet facilities, including Wi-Fi with sufficient bandwidth, are available free of charge to all advocates and litigants appearing before them.
  • Visibility in Cause Lists: VC links must be provided in the daily cause list of each Court, eliminating the need for a separate application for the same. Any age restriction or other arbitrary requirement for accessing VC Link is also to be removed.
  • Standard Operating Procedure (SOP): All the High Courts are required to establish a SOP to allow litigants access to hybrid hearings. For this, Justice Rajiv Shakdher, Hon’ble Judge of the High Court of Delhi has been requested to prepare a model SOP, in conjunction with the Amicus.

The case is now tentatively listed on December 11, 2023.

Author’s note

These directions come as a welcome change in the hybrid hearing spectrum of Indian Courts. It is a known fact that litigants as well as advocates may have cases pending all across the country. Providing hybrid modes of hearing to them makes the legal process easier, and it was therefore of utmost importance that the Supreme Court intervene in this manner.

A uniform SOP for conducting hybrid hearings along with the necessary technological facilities at the concerned Courts, including District Courts, will be highly beneficial for the legal profession as well as the litigants.

Aditya Sharma Vats, Associate at S.S. Rana has assisted in the research of this Article.

1. W.P. (Crl) 351/2023


For more information please contact us at :