The AI Conundrum: Protecting Intellectual Property in the Age of Generative Technology

November 29, 2024
Generative Technology

By Arghya Samaddar and Mandeep Singh

Introduction

Generative Artificial Intelligence (GenAI), a subset of artificial intelligence, is designed to create new content, such as text, images, music, or even code, by learning from existing data and generating original outputs. Unlike traditional AI, which typically focuses on analyzing data and making predictions or decisions based on patterns, Generative AI goes a step further by producing novel content that mimics the style and structure of the input data. Traditional AI, on the other hand, excels in tasks that require precision, optimization, and data-driven insights, such as fraud detection, recommendation systems, and predictive analytics. In essence, while both forms of AI harness the power of machine learning, Generative AI stands out for its ability to invent and innovate, blurring the lines between machine-generated and human-created content.

GenAI, a mysterious creator of digital wonders, now sits at the intersection of creativity and legal debate. In this digital age, where algorithms perform intricate dances, we face the challenge of navigating the complex legal landscape of Generative AI and Intellectual Property Rights. The voices of famous celebrities and iconic characters, once unique to human expression, are now replicated by lines of code, blurring the boundaries between human and machine. These digital echoes, however, also challenge the established norms of intellectual property rights.

Recent cases have highlighted the complexity of ownership and infringement issues involving AI-generated works. For example, in Sarah Andersen v. Stability AI Ltd. before the United States District Court for the Northern District of California, a class-action lawsuit was filed against several AI companies alleging copyright infringement, among other claims (discussed in the following paragraphs). As we peer into this digital looking glass, we grapple with profound questions: Who claims ownership over these algorithmic creations? How do we discern infringement in this realm of intangible artistry? And in this delicate dance between innovation and preservation, how do we honour both the creators and the collective imagination?

Navigating the Legal Labyrinth: GenAI and Intellectual Property Rights

Generative AI, a captivating fusion of artificial intelligence and creativity, has ushered in a new era of innovation. By harnessing algorithms, GenAI can create mesmerizing artwork and compose melodious symphonies that strike a chord with the heart, with seemingly limitless further possibilities. Generative AI extends beyond the predictive capability of traditional AI models, which only provide projected results. In recent years, this remarkable technology has ascended rapidly, acquiring the ability to generate new data in the form of images, music, text, or software code. Given the many ways in which Generative AI can be leveraged, seemingly without limit, people delve into its myriad applications, often overlooking the importance of responsible use to ensure positive outcomes. Generative AI can inadvertently produce biased or inaccurate content, emphasizing the necessity for human validation and ethical guidelines. One must acknowledge these risks and proactively address any misuse of data when considering the creative potential of Generative AI across various domains of work and art.

At the junction of Generative Artificial Intelligence and Intellectual Property (IP) Rights lies a closely connected sphere concerning the protection of those rights. This convergence offers numerous possibilities while also posing significant challenges, such as:

  1. Ownership Ambiguities: The growth of Generative AI blurs the line between art made by humans and AI-generated art. Courts worldwide are grappling with copyright issues related to AI-generated content, and the legal implications of ownership remain uncertain.
  2. Copyright Grey Areas: Conventional copyright laws, originally designed to safeguard human-created works, now encounter difficulties posed by the rise of AI-generated works. Further, the intersection of AI-generated works and copyright infringement remains a grey area, especially as the datasets shaping AI-generated creations draw on a wide range of protected human creations.
  3. Unauthorized Content: GenAI tools are trained on wide-ranging datasets that are extremely diverse in the information they contain. These datasets serve as a rich source from which the models learn patterns of creation, but the resulting outputs can closely approximate the input data. Where that input includes copyrighted works used without proper authorization, the developers of these tools and the users who rely on them may face copyright infringement claims from the original creators and data contributors. Recently, in the ANI vs. OpenAI case, the Delhi High Court addressed similar issues, highlighting the complexities of copyright in AI-generated content.
  4. Data Security Concerns: GenAI content originates from datasets, which can involve personal or confidential information. This often raises concerns about privacy, data security and the potential leak of sensitive information.

Personality rights, also referred to as the right of publicity, pertain to an individual’s ability to control the commercial utilization of their identity, including their name, image, likeness, and voice. The advent of GenAI has significantly impacted these rights, as AI technologies can create highly realistic synthetic content that replicates or modifies an individual’s persona without their consent, raising complex legal and ethical questions about the protection and enforcement of personality rights in the digital age. Because personality rights allow individuals to control the commercial use of their identities, they form a crucial aspect of intellectual property rights: celebrities and public figures, whose personas hold substantial economic value, can secure their brand and receive appropriate compensation for its use. Generative AI, however, poses a significant threat to these rights by enabling the unauthorized creation, use, and distribution of content that mimics a celebrity’s or public figure’s identity without their consent. The rise of deepfakes and other synthetic media generated by AI can falsely associate individuals with actions or statements they never made, potentially damaging their reputation and causing emotional distress.

The rapid and ever-evolving world of Generative AI is raising complex legal questions, which has led to a rise in high-profile cases involving AI and Intellectual Property. This article explores the legal landscape for protecting Intellectual Property Rights in India in the era of Generative AI, comparing it to the global scenario. We examine several well-known recent cases, including the Scarlett Johansson-OpenAI ‘Sky’ dispute and the class-action lawsuits involving AI corporations. Finally, we explore the present legal landscape in India to navigate the legal labyrinth and address the challenges to harnessing the true potential of Generative AI while ensuring protection against its potential drawbacks.

From Code to Court: How Courts are Addressing Generative AI and IP Issues

  • In 2022, the case of Andersen v. Stability AI saw three artists file a class-action lawsuit against companies in the digital art and AI industries. These companies had allegedly used the artists’ copyrighted works without permission to train AI models. The plaintiffs accused the companies of direct and vicarious copyright infringement, violating publicity rights, and engaging in unfair competition practices. This landmark lawsuit is pivotal for both AI development and creators’ rights, as its outcome will establish important precedents for how AI can utilize copyrighted materials in our increasingly digital age.
  • In a recent case, China’s first on GenAI output infringement, involving the character Ultraman, the Guangzhou Internet Court found an AI company liable for copyright infringement for producing images that closely resembled the Japanese superhero Ultraman. The court determined that the AI-generated images were substantially similar to the original character, setting a significant legal precedent. This case highlights the critical necessity of securing proper licenses and permissions before using copyrighted materials to train AI models.
  • In May 2024, the Hon’ble Delhi High Court sought the Centre’s response to a Public Interest Litigation (PIL) filed by veteran journalist Rajat Sharma. The petition addresses the absence of regulations concerning deep-fake technologies and seeks directives to restrict public access to applications and software that facilitate the creation of such content. Additionally, the plea underscores the inadequacies of the current data protection laws in India and emphasizes the necessity for comprehensive regulatory frameworks.
  • In November 2024, the Delhi High Court issued summons to OpenAI in response to a copyright infringement plea filed by news agency ANI. ANI alleged that ChatGPT incorrectly credited political news to the agency, potentially leading to the dissemination of false information and causing public unrest. The court noted that the case is complex and requires further deliberation, appointing an amicus curiae to assist in the matter. This case is significant as it addresses the use of copyrighted content by AI models, the potential implications for misinformation and public trust, and the need for a specific legal framework in this regard.

India’s Generative AI and IP Legal Framework

Global investment in AI is projected to reach an impressive $422.37 billion by 2028. Recent developments have brought significant attention to generative AI, a subset of artificial intelligence (AI). Artificial intelligence significantly influences our daily lives, from offering tailored suggestions on shopping and streaming platforms to solving complex problems in fields such as education, healthcare, and agriculture. However, GenAI also presents a unique challenge by enabling realistic face swaps and the creation of fake content, which can contribute to the spread of misinformation. It therefore becomes all the more important to authenticate digital content and to protect intellectual property rights, and nations are realizing the need to mitigate the risks associated with AI as the technology advances.

As reliance on AI grows, individuals are increasingly turning to GenAI for design and creative endeavours, whether seeking inspiration or aiming for quick and cost-effective outcomes. However, relying heavily on existing content as a dataset can result in close imitation of previously created material, normalizing copying that breaches copyright. Like many other countries, India unfortunately lacks adequate provisions and legislation to prevent such IP rights violations. In a recent case, Bollywood actor Jackie Shroff approached the Delhi High Court to protect his personality and publicity rights against unauthorized use. The plea emphasized that “Until there is regulation or law on the use and limitations of technology [generative AI, specifically], any unlawful appropriation of one’s IP needs to be eschewed under the tort of unfair competition and misappropriation for which there is an abundance of precedent”. In India, GenAI content has also been identified as a threat to democracy, a concern that was amplified in November 2023 after a series of deep-fake videos featuring actors and prominent public figures, including Prime Minister Narendra Modi. These incidents highlighted the urgent need for public awareness and regulations to restrict the spread of deep fakes through social media.

Comparative Analysis of Generative AI Regulations: India vs. Global Jurisdictions

While AI regulations are still nascent, every country faces the challenge of framing IPR laws tailored to artificial intelligence. In 2021, at a momentous juncture, the European Commission proposed the EU Artificial Intelligence Act (AI Act), the world’s first comprehensive set of regulations for AI, which aims to balance innovation with the safeguarding of intellectual property rights. This comprehensive framework follows a risk-based approach, classifying AI systems according to the potential risks they pose to users and securing fundamental rights, i.e. privacy, non-discrimination, and human dignity, by ensuring that such systems are transparent, accountable, and do not perpetuate biases. The United States of America, by contrast, has adopted a laissez-faire approach to AI regulation, with the aim of promoting innovation while maintaining global competitiveness. Rather than implementing a comprehensive AI-specific legal framework, the U.S. relies on existing laws and sector-specific regulations to address AI-related issues. Meanwhile, China has taken a proactive approach, with the government playing a central role in AI development and regulation. China emphasizes state control and the integration of AI into various sectors, with a focus on achieving technological supremacy.

To recapitulate, different jurisdictions adopt varying frameworks or approaches to AI regulation, reflecting a complex interplay of economic priorities, political systems, social and cultural factors, and strategic considerations, as highlighted below:

  • Economically, countries with robust tech industries, like the United States, often prioritize innovation and competitiveness, intending to maintain their global leadership in AI. This is evident from the flexibility of the laws related to AI and from the regulatory actions carried out by the Federal Trade Commission (FTC) against companies that surreptitiously harvest data for AI models, as well as its investigations into potential antitrust violations.

In contrast, the European Union, which prioritizes consumer protection and ethical standards, implements more stringent regulations emphasizing excellence, trust, transparency and accountability, requiring AI systems to inform users when they are interacting with AI and ensuring that AI-generated content is clearly labeled. This ensures that AI development aligns with its values.

  • Politically, the nature of a country’s system can influence its regulatory approach. For instance, China’s centralized government allows for a top-down, state-controlled approach, focusing on the strategic development and integration of AI into various sectors. This is evident from the introduction of the Interim Measures for the Management of Generative Artificial Intelligence Services, which, jointly approved by major government agencies, regulate generative AI services to ensure they align with national strategic goals.

Meanwhile, the EU’s multi-layered governance structure necessitates a collaborative and consensus-driven approach, resulting in comprehensive frameworks like the AI Act (which entered into force on August 1, 2024), prioritizing transparency, accountability, and the protection of fundamental rights.

  • Societal attitudes towards technology and privacy can shape regulatory approaches. In the EU, there is a strong emphasis on protecting individual rights and ensuring ethical AI, leading to regulations that prioritize transparency, accountability, and non-discrimination. In contrast, countries with different cultural attitudes towards privacy and state intervention may adopt less stringent regulations; in countries like China, government goals and priorities take precedence over individual rights.
  • Historical experiences with technology and regulation can also play a role. For example, the EU’s history of strong data protection laws, such as the General Data Protection Regulation (GDPR), influences its approach to AI regulation, emphasizing the protection of personal data and privacy.
  • Concerns about national security and the strategic importance of AI can also drive regulatory approaches. Countries like China view AI as a critical component of their national security strategy, leading to a more controlled and strategic approach to AI development and regulation.

Conversely, India is still developing its approach to AI regulation, which remains in a state of evolution. In 2018, the NITI Aayog released the National Strategy for Artificial Intelligence, which accentuated the importance of utilizing AI to foster inclusive development and societal benefit. Because India’s approach is limited to guidelines, it lacks the enforceable regulatory mechanisms seen in other jurisdictions; it is far more flexible, allowing for rapid development and deployment of AI technologies, but may also pose challenges in ensuring accountability and protecting fundamental rights. Recent actions by the Ministry of Electronics and IT (MeitY) demonstrate India’s commitment to introducing laws and regulations related to AI. As part of this effort, in March 2024, MeitY released a fresh advisory requiring developers of GenAI models to self-regulate. The key takeaways of this advisory are provided herein below:

  1. Due Diligence: Intermediaries and platforms must ensure that their AI models do not host, display, or share any unlawful content. They are required to adhere to the Information Technology Act of 2000 and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules of 2021.
  2. Bias and Discrimination: AI models must be free from any bias or discrimination and should not compromise the integrity of the electoral process.
  3. Labeling and Consent: Under-tested or unreliable AI models must be appropriately labeled to inform users about their potential unreliability. Consent popups or equivalent mechanisms should be used to explicitly inform users about the possible fallibility of the AI-generated output.
  4. Metadata for Misinformation: Any synthetic creation, generation, or modification of information that could be used as misinformation or deep-fake must be labeled with permanent unique metadata or identifiers.
  5. Government Approval: AI models undergoing testing or deemed unreliable must obtain explicit prior approval from the government before deployment in India.
  6. User Agreements: Intermediaries must inform users about the consequences of dealing with unlawful information, including disabling access, removal of such information, suspension or termination of access, and punishment under applicable laws.

The advisory reflects the government’s philosophy of balancing the promotion of innovation with the mitigation of risks likely to arise from AI technologies, as opposed to acting as a watchdog and issuing laws that may not be suited to the growth of this technology. While the efforts of MeitY are a positive step forward, there is still much progress to be made, especially when it comes to the practical implementation of checks and balances, such as preventing ‘unlawful’ content and avoiding bias.

At present, India does not have a dedicated regulatory authority or any laws or regulations specifically addressing AI. Instead, the enforcement and penalties related to the creation, dissemination, and use of AI are managed under existing non-AI legislation. These include the Information Technology Act, 2000, the Digital Personal Data Protection Act, 2023, and various Intellectual Property laws, depending on the nature of the case.

Key Actions for Generative AI and Intellectual Property Rights in India

India is actively exploring legal adaptations to address the challenges posed by Generative AI, particularly in the realms of Intellectual Property (IP) rights and personal rights. To ensure the responsible development and regulation of Generative AI in India, several actions are essential.

  • Establish a Regulatory Framework: Despite the challenges previously discussed, the Government of India must develop a comprehensive regulatory framework to ensure the responsible use of AI technologies. This framework should encompass clear guidelines on the ethical application of AI, ensure transparency in AI operations, and establish accountability for their outcomes, in alignment with existing data privacy and intellectual property regulations.
  • Updating the Existing Laws: Existing legislation such as the Indian Copyright Act, 1957 does not explicitly address AI-generated works, leaving ambiguity in respect of the ownership of GenAI works. Updating the laws to provide provisions and guidelines concerning the authorship and ownership of AI-generated content would help guarantee that the rights of creators and developers are protected. Further, a ‘Significant Human Input’ test could be introduced to evaluate human involvement in the creation process and to ensure that there is substantial human contribution before copyright protection is extended to works created using Generative AI systems. The Significant Human Input test is inspired by the recent standard for protecting AI-generated works developed in the USA, albeit with modifications to better suit Indian copyright law. The test has two components: a) determining whether the AI-generated product is “original”; and b) determining whether the extent of human involvement in the process is significant.
  • Foster Collaboration: This is not a task for the Government alone; the situation requires partnership among government, industry, and academia to drive innovation while ensuring that AI development aligns with societal values. Such collaborative efforts would encourage best practices for AI development and deployment, ensuring that AI technologies are beneficial and fair. Collaboration with international bodies and other countries would also help harmonize efforts related to IPR laws and personal rights in the context of AI. This can help in creating a consistent global framework and addressing cross-border issues, should any arise in the future.
  • Promotion of Awareness amongst the Public, Stakeholders and Workforce: Awareness programs and training sessions should be conducted for the public, including creators, developers and legal professionals, on the implications of AI for various fields of work. This would help spread awareness about the benefits and risks of AI, effectively address concerns related to AI use, and clarify the responsibilities of stakeholders in the evolving landscape of AI. With the growing investment in the AI industry, it is also essential to invest in education programs that teach AI-related skills and provide upskilling opportunities for current workers, thereby ensuring that the workforce remains competitive and capable of leveraging AI technologies.

In a nutshell, any proposed legislation or framework regarding GenAI should at the least, attempt to cover the below aspects:

  1. Ownership;
  2. Morality; and
  3. Remuneration.

CONCLUSION

In summation, despite India’s praiseworthy navigation of the intricacies of GenAI and Intellectual Property Rights, a substantial path still lies ahead. The comparative analysis and juxtaposition with global standards reveal both the strides made and the deficiencies in India’s legal framework governing the AI domain. As the AI landscape continues its relentless evolution, India must remain attuned to international developments and adapt its regulations accordingly. The ongoing endeavours and prospective amendments in India’s legal framework will be pivotal in ensuring robust protection for intellectual property in the era of generative technology. This evolving narrative promises to be an absorbing odyssey, one that will undoubtedly keep stakeholders engaged and eagerly anticipating the next chapter in India’s legal evolution.

For more information please contact us at: info@ssrana.com