The Legal Conundrum surrounding Generative AI: An EU and Indian Perspective

April 12, 2024
Legal Conundrum surrounding Generative AI

By Anuradha Gandhi and Rachita Thakur

In a lawsuit filed on the last day of February, Elon Musk sued OpenAI and its CEO Sam Altman for having abandoned the startup’s founding mission. Musk levelled a hard-edged accusation that Altman had set the founding agreement aflame when he released GPT-4, a powerful language model and, as per Musk, essentially a Microsoft product. According to the tech billionaire, the three founders of OpenAI, who included himself and Altman, had agreed to build an open-source, not-for-profit company that harnessed the power of Artificial General Intelligence (AGI) in a way that would benefit humanity, and Altman had breached that agreement by putting commercial interests in the way.[1]

“OpenAI has been transformed into a closed-source de facto subsidiary of the largest technology company, Microsoft,” read the lawsuit filed in the Superior Court in San Francisco, California.[2] OpenAI has responded by describing the lawsuit as resting on “convoluted-often incoherent-factual premises.”[3] It has also cited Musk’s own business interests in questioning his claim to the moral high ground. While Musk’s intentions may be up for debate, the questions he poses about the dangers of the unbridled commercial use of generative AI are entirely pertinent.

Generative AI: When Machines Create

Generative AI, in its simplest terms, is a form of Artificial Intelligence that creates or generates new content.[4] This form of machine learning is a genuinely new technology that has radically transformed the way content creation of any kind is perceived by the public at large, and it has brought with it a slew of legal landmines. The primary question of law rests on the manner in which data is sourced and used in these models, and whether their output should come under an intellectual property rights scanner at all.

Generative AI tools are trained on massive data sets, such as the corpus of texts on which the Large Language Model (LLM) behind ChatGPT is trained. These data sets vary and form the input from which the generative AI produces its output. Ideally, such a model is trained on open-source data; however, it often dips into copyrighted work, as alleged in a lawsuit by a group of visual artists.

The lawsuit read: “All AI Image Products operate in substantially the same way and store and incorporate countless copyrighted images as Training Images. Defendants, by and through the use of their AI Image Products, benefit commercially and profit richly from the use of copyrighted images. The harm to artists is not hypothetical—works generated by AI Image Products “in the style” of a particular artist are already sold on the internet, siphoning commissions from the artists themselves.”[5]

Another similar lawsuit, brought by coders against GitHub, OpenAI and Microsoft, hinged primarily on GitHub’s Copilot, which converts English into computer code in several programming languages. Open-source code is easily attributable, and attribution is also a condition for being able to use the software; while Copilot was trained on this open code, the AI model governing it did not account for the other legal requirements to be complied with. The suit therefore alleged that the companies had breached software licensing terms.[6]

The other prong of the legal problem is data protection coupled with personality rights. Personality rights, or the right of publicity, find no mention in statutory law but have evolved through judgments. Misuse of AI in this domain usually presents itself in the form of deepfakes, where celebrities and performing artists are dubbed and morphed content is created in their likeness, robbing them of the economic associative value that their persona brings and often veering into defamation.

Predictive bias is the last addition to this legal quagmire. Machine learning builds itself on datasets, and datasets are inherently full of human biases. The growth of AI decision-making in sensitive, human-centric areas such as hiring, criminal justice and law enforcement has raised questions about inherent biases and fairness. In an ideal scenario, AI can help cleanse decision-making of human biases by reducing the subjective interpretation of data. However, there is also a real possibility of these biases being baked in and solidified in the machine learning framework. COMPAS, an AI tool used to predict recidivism in Florida, wrongly labelled African American defendants as high risk at almost twice the rate at which it mislabelled white defendants.[7]

EU AI Act: A Risk-Based Approach

The global standard for AI regulation has been set by the European Union with the formalization of the AI Act, and other international efforts have followed suit. These include the Hiroshima Process[8] by the G7 and the Bletchley Declaration[9] signed after the AI Safety Summit. These developments show an international awareness of the risks of AI. However, the European Union continues to hold the bargaining chip as the first to come up with concrete legislation, potentially triggering the same ‘Brussels effect’ that was seen with the GDPR. The effect harnesses the economic power of the European Union to set a global blueprint for such legislation.[10]

Such a blueprint is the risk-based approach elucidated in the AI Act. The Act says that in order to introduce an effective and proportionate set of rules, a clearly defined risk-based approach must be followed, tailored to the scope of risk an AI system can potentially generate. It is therefore pertinent to prohibit certain unacceptable AI practices while simultaneously laying down guidelines for high-risk AI and transparency obligations for specific AI systems.

Generative AI, the Act says, is trained on copious amounts of text and data, and text and data mining techniques may be used extensively in the retrieval of such content, which may be protected by copyright or other related rights, unless specific exceptions or limitations apply. Directive (EU) 2019/790 had introduced exceptions and limitations that allow the reproduction and extraction of works for the specific purpose of text and data mining under certain conditions; under these rules, right holders may choose to reserve their rights and prevent data mining unless it is done for scientific research. Where the right to opt out has been reserved, the Act states, providers of general-purpose AI need to obtain authorization from the right holders if they wish to carry out data mining of such works.[11]

It should also be taken into account that certain industries which are more human-centric than others sit at a higher risk of undetected AI biases. These industries include finance, health, the criminal justice system, and refugee and migration management; while they may rely on AI for ease of working, biases may easily creep into these systems, thereby prejudicing a group of people. When implementing AI, careful oversight may be required in these industries, coupled with an analysis of the possible risks.

Conclusion: The Indian Angle

In the ever-evolving technology landscape, AI and Generative AI have emerged as forces of innovation, redefining the manner in which individuals interact and create. The Indian market is no different. As per Statista data from 2023, the Indian AI market has grown to a staggering 4.1 billion dollars, a testament to the growing reliance on AI products.[12]

On December 26, 2023, the Ministry of Electronics and Information Technology issued a second advisory to social media platforms, specifically concerning the propagation of misinformation powered by AI deepfakes. The advisory directed the platforms to ensure compliance with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.[13]

In a bid to further bolster the AI landscape, the Indian government is said to be gearing up to amend the Information Technology (IT) Act of 2000.[14] Recently, the Ministry of Electronics and Information Technology revised its March 1 advisory to social media companies regarding the regulation of generative AI, doing away with the requirement of seeking government approval before launching AI products. The new advisory directed the platforms to ensure that AI-generated content is properly “labelled”, especially if the content can potentially be misused.[15]

Further, it is of utmost importance to consider the manner in which data is mined, for automated tools may well pick up copyrighted or trademarked information. In a Delhi High Court case, OLX obtained a restraining order against a company to prevent it from scraping any data from OLX’s website.[16] It is to be noted, however, that data scraping was deemed illegal in that instance because the company was using the scraped data on its own website, bringing it under the ambit of copyright law; the same data put to private use would not have attracted the same liability. This makes the ethical nature of the data used for training models questionable.

These developments point towards a growing awareness of the powers and prejudices of generative AI both within and beyond the borders of the country. Musk’s lawsuit merely scratches the surface of the far deeper conundrum of the legal and moral ramifications of generative AI used without the bindings of a legislative framework. The AI Act is a step in the direction of reasonable precaution, and it sets a blueprint for the world to follow, for better or for worse.

Ahana Bag, Junior Associate at S.S. Rana & Co. has assisted in the research of this Article.
















[16] OLX BV and Ors. v. Padawan Ltd., Delhi High Court, order dated December 15, 2016
