EU Parliament Gives Final Nod to Landmark Artificial Intelligence Law

May 7, 2024

By Vikrant Rana and Anuradha Gandhi

Artificial Intelligence is defined as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.”[1] A law to govern this technology had been in the making since April 2021, and on March 13, 2024, the Artificial Intelligence Act, or the “AI Act”, was finally adopted by the European Parliament.

It is widely considered to be the world’s first comprehensive legislation governing Artificial Intelligence. The regulation stresses the principles of transparency, innovation, a risk-based approach, human oversight, and accountability. The impact of the Act is expected to extend far beyond its formal ambit, given its extraterritorial effect and fines of up to 7% of global annual turnover or EUR 35 million, whichever is higher.

Extraterritorial Jurisdiction and a GDPR-like Governance System

The AI Act applies to AI systems, which have been defined in the Act as, “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Similar to the General Data Protection Regulation (GDPR), the Act aspires to become a global standard for AI, as the GDPR has for personal data protection. The AI Act has an extraterritorial reach akin to the GDPR’s: it covers not only establishments in, or operating in, the EU, but any establishment placing an Artificial Intelligence system on the EU market for any use other than personal use. It also extends to importers, distributors, providers and deployers of Artificial Intelligence systems where “the output produced by the Artificial Intelligence system is used” in the EU.[2] These entities are elucidated in the following paragraphs.

Phased Legislative Rollout

The Act enters into force 20 days after its publication in the Official Journal, which is expected in May or June of 2024.[3] Most of its provisions will become applicable two years after entry into force, while the provisions governing generative Artificial Intelligence will apply after one year and the provisions on prohibited AI systems after six months.
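The staggered application dates above amount to simple date arithmetic from the entry-into-force date. A minimal Python sketch, assuming a hypothetical entry-into-force date (the actual date depends on when publication in the Official Journal occurs):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the date `months` months after `d` (same day of month)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Hypothetical entry-into-force date, assumed for illustration only;
# the real date was not yet fixed when this article was written.
entry_into_force = date(2024, 8, 1)

milestones = {
    "prohibited-practice provisions": add_months(entry_into_force, 6),
    "general-purpose AI provisions": add_months(entry_into_force, 12),
    "most remaining provisions": add_months(entry_into_force, 24),
}
for name, when in milestones.items():
    print(f"{name}: applicable from {when.isoformat()}")
```

The offsets (6, 12 and 24 months) follow the phased rollout described above; only the anchor date is an assumption.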

While these time periods may appear generous, affected actors, from governments to multinational companies, will have to undertake significant redesigns of their products, work that may need to begin as soon as possible. Non-AI companies, too, will have to navigate the legal requirements carefully and assess their own risk levels to keep their compliance in check.[4]

Entities and Classification Under the EU Artificial Intelligence Act

Risk-based AI system categorization

The risk-based system categorizes AI systems on a scale from low to high societal risk, each tier with its own conformity requirements:

  1. Minimal Risk systems with voluntary codes of conduct. These AI systems pose minimal risk to fundamental rights, safety, or societal values and are subject to very few regulatory requirements under the EU AI Act. Examples could include basic AI applications used for entertainment purposes, simple algorithms for data analytics, or certain types of personal assistants.
  2. Limited Risk under Article 5. This category encompasses AI systems that don’t fall into the unacceptable or high-risk categories but still carry certain risks. While these systems are subject to fewer requirements than high-risk systems, they must still comply with specific transparency and accuracy obligations. Examples of limited-risk AI systems might include chatbots, AI-driven video games, and certain types of recommendation systems.
  3. Article 6 states that a high-risk AI system is a system that poses a significant threat to fundamental rights, health or safety, and comes with conformity assessments and transparency requirements. For such systems, human oversight is of vital importance, and Article 14 mandates that oversight measures be built into the design of the AI system. Examples of high-risk AI systems include those used in critical infrastructure sectors like healthcare, transportation, and energy, as well as those used in law enforcement, employment, and migration decisions.
  4. Unacceptable Risk falls under the prohibited category in Article 5. This category includes AI systems that pose an unacceptable risk to fundamental rights, safety, or societal values. Examples might include AI systems used for social scoring by governments, certain types of biometric surveillance, or AI systems designed to manipulate individuals’ behavior in a way that undermines their autonomy.
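The four-tier scheme above can be sketched as a simple lookup from risk tier to the obligations each tier carries. The example systems and their tier assignments below are illustrative assumptions drawn from the examples in the list, not legal classifications:

```python
# Risk tiers under the Act, from most to least restricted.
UNACCEPTABLE, HIGH, LIMITED, MINIMAL = "unacceptable", "high", "limited", "minimal"

# Illustrative systems mapped to tiers (assumptions for demonstration).
EXAMPLES = {
    "government social scoring": UNACCEPTABLE,
    "AI for hiring decisions": HIGH,
    "customer-service chatbot": LIMITED,
    "spam filter": MINIMAL,
}

# Obligations attached to each tier, paraphrasing the list above.
OBLIGATIONS = {
    UNACCEPTABLE: "prohibited (Article 5)",
    HIGH: "conformity assessment, human oversight, transparency (Articles 6, 14)",
    LIMITED: "transparency and accuracy obligations",
    MINIMAL: "voluntary codes of conduct",
}

def obligations_for(system: str) -> str:
    """Look up an example system's tier and the obligations it triggers."""
    tier = EXAMPLES.get(system, MINIMAL)  # unknown systems default to minimal
    return f"{system}: {tier} risk -> {OBLIGATIONS[tier]}"

print(obligations_for("government social scoring"))
```

In practice the classification is a legal determination made per system and per use context, not a static lookup; the table merely summarizes the tiers.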

The Act further classifies AI systems based on their impact:
1) General-Purpose AI systems, i.e., systems that have a wide variety of general uses. Depending on the data used and the potential risks associated with them, these systems could fall into any of the risk categories: high-risk, limited-risk, or minimal-risk. A general-purpose AI system used for natural language processing (NLP), for instance, might be considered high-risk if it is deployed in sectors like healthcare or criminal justice, where its outputs could significantly impact individuals’ rights and safety.

2) General-Purpose AI systems with systemic risk, i.e., systems trained on data requiring cumulative computation of greater than 10^25 FLOPS. By their sheer computational scale, these systems could have significant societal impacts, and they therefore require additional transparency and documentation.[5]
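The 10^25 FLOPS threshold is a straightforward numeric test. A minimal sketch, using rough public compute estimates for illustration only:

```python
# Cumulative training-compute threshold above which a general-purpose AI
# model is presumed to carry systemic risk under the Act.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def has_systemic_risk(training_flops: float) -> bool:
    """True if cumulative training compute exceeds the Act's threshold."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# The article's footnote cites GPT-3 at roughly 10^23 FLOPS,
# two orders of magnitude below the threshold.
print(has_systemic_risk(1e23))  # GPT-3-scale model: False
print(has_systemic_risk(3e25))  # hypothetical larger model: True
```

Real training-compute figures are estimates published by model developers or third parties; the threshold comparison itself is the only part the Act fixes.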

Deployment Duality

The dual governance of the Act covers both providers, i.e., developers, and deployers, along with any entity where the output generated by the AI system is used in the EU.

Providers (developers) can be companies that develop AI systems to be placed on the market under their own name or trademark, as well as importers and distributors of AI systems in the European Union, irrespective of whether they are established within the EU. Deployers can be natural or legal persons, including companies, that use AI in the course of their professional activities. The Act is drafted from the EU consumer’s perspective, regulating the sale and use of AI systems within the EU, which in turn imposes obligations on the global value chain regardless of geographical location.

Providers’ and Deployers’ Responsibilities

Compliance with Requirements: Adherence to transparency, accountability, and safety standards.

Under Article 16, providers of high-risk AI systems must indicate high-risk AI systems on the packaging or accompanying documentation.

Article 26 of the Act mandates that deployers assign human oversight to natural persons who have the necessary training and competence. The deployer must also exercise control over the input data to ensure that it is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system, so as to mitigate bias and discrimination.

Deployers are also required to monitor the operation of high-risk AI systems and inform the provider, importer or distributor of any serious incident. For example, before deploying a high-risk AI system, deployers who are employers must inform the affected employees that they will be subject to the use of a high-risk AI system. The deployer is also tasked with carrying out a risk impact assessment.

An interesting aspect of this law is that if natural persons are subject to decisions made by a high-risk AI system, they must be informed of this, unless an exception is carved out under law. Both providers and deployers of AI systems share responsibility for promoting the responsible development, deployment, and use of AI technologies in compliance with the EU AI Act. Collaboration between providers and deployers is crucial to effectively manage risks and ensure the ethical and safe use of AI systems.

Prohibitions and Exceptions Under the Act

Prohibitions under the Act

Article 5 of the AI Act prohibits the use of AI systems that use “deceptive or manipulative techniques” to influence users. The Act further prohibits AI monitoring of social behavior and the allotting of “social scores” on that basis. This ban targets AI systems such as the social credit system in China[6], wherein certain behaviors are rewarded by the government and others punished via AI. The Article also prohibits AI systems that are illegal or contrary to ethical norms, such as those facilitating criminal activities, infringing on individuals’ rights, or discriminating against protected groups. For example, AI that expands facial-recognition databases by scraping data from CCTV footage or other sources is prohibited, as is AI that classifies persons on the basis of biometric data, unless done by law enforcement under specific circumstances such as rescuing kidnapping victims or preventing a terrorist attack. AI systems that pose significant risks to public safety or the environment, such as autonomous weapons systems or AI systems used in critical infrastructure without adequate safeguards, are prohibited under the EU AI Act. Misrepresentation of AI systems in a manner that could deceive or mislead users about their capabilities, functionality, or intended purpose is also prohibited.

Exceptions under the Act

The Act would cover all AI goods and services, the training data used, the AI models, the hardware, and the development as well as deployment of AI systems. However, the exceptions carved out under the Act provide that the Regulation does not apply to:

  • AI systems released under free and open source licenses (except High-risk AI systems or general-purpose models of reserved copyright),
  • AI systems used in relation to research for legitimate purposes,
  • AI used for purely personal non-professional activities,
  • AI developed and used for military or national security purposes, and,
  • AI systems used in the framework of international agreements for law enforcement and judicial cooperation with the Union.

Law enforcement authorities can continue their use of AI; in urgent situations, they can deploy a high-risk AI system that has not passed the conformity assessment procedure. However, real-time remote biometric identification in public spaces is permitted only for absolutely necessary law enforcement purposes, such as the prevention of a foreseeable threat. In such cases, the Act gives leeway to biometric identification, which is otherwise prohibited.

Authorities and Regulatory Bodies

Article 28 of the EU AI Act mandates the creation of a notifying authority (a national authority of each member state), which authorizes conformity assessment bodies and assists in the implementation of the Act. This shall be carried out by a national accreditation body within the meaning of, and in accordance with, Regulation (EC) No 765/2008. Article 29 of the Act specifically addresses conformity assessment bodies, which provide third-party conformity assessment certificates to high-risk AI systems.[7]

Article 70 of the EU AI Act provides that each member state shall establish a national competent authority, comprising at least one notifying authority as mentioned under Article 28 and one market surveillance authority. These national competent authorities are responsible for implementing the Regulation in each member state. The EU AI Office is the highest authority under the Act and oversees its implementation across the 27 EU member states.[8]

Impact on IPR, Innovation and Businesses

Impact on IPR and Fundamental Rights

The Act also requires compliance with existing copyright laws and respect for the reservation of rights expressed by right holders. Further, Recital 28a provides that the extent of the adverse impact caused by AI systems on the fundamental rights protected by the Charter, which include intellectual property rights, is of utmost importance when categorizing a system as high-risk.

Under Recitals 57d and 83, AI vendors and deployers must also evaluate which of their AI-related IPR and trade secrets are subject to disclosure under the transparency obligations and, consequently, put in place protection and confidentiality-breach mitigation measures.

Recital 167 of the Act provides that all parties involved in the application of this Regulation should carry out their tasks and activities in a manner that protects intellectual property rights, confidential business information and trade secrets, the effective implementation of this Regulation, public and national security interests, the integrity of criminal and administrative proceedings, and the integrity of classified information.

Impact on promoting Innovation

The EU AI Act also provides for AI regulatory sandboxes: controlled frameworks established to develop, train and validate AI systems under real-world conditions. Article 57 of the Act mandates the establishment of at least one AI regulatory sandbox in each member state, to ensure the proper testing of AI systems before they are released to the market. The sandbox is expected to facilitate dialogue between stakeholders, promote best practices in AI development and deployment, and help regulators stay informed about emerging technologies and their potential impact on society. Overall, it aims to strike a balance between encouraging innovation and safeguarding the rights and interests of individuals within the EU.

Impact on Businesses

The EU AI Act is expected to affect businesses across the EU; some of the major effects are outlined below:

  1. Compliance Cost: Impact on the financial sector[9] – Banks use AI systems to assess creditworthiness and to calculate financial risks. These AI systems would be considered high-risk AI systems under the new Act, forcing the banks using them to undergo conformity assessment. In the medical sector[10], high-risk AI systems, most notably in pacemakers and other such medical devices, will face increased scrutiny. General-purpose AI systems trained with a cumulative computation greater than 10^25 FLOPS will require additional compliance, as they will be classified as GPAIs with systemic risk. Chatbots, too, could require additional compliance measures to operate.
  2. Classification of AI users: The Act imposes obligations on all parties involved in placing an AI system on the EU market, and these additional compliance requirements will affect the AI market in the EU.
  3. Market Access: Compliance with the EU AI Act may become a prerequisite for accessing the EU market. Businesses that fail to meet the regulatory standards may face barriers to entry or market restrictions, affecting their ability to compete effectively within the EU.
  4. Consumer Trust: Adhering to the principles of transparency, accountability, and ethical use of AI can enhance consumer trust and confidence in AI-powered products and services. Businesses that demonstrate a commitment to responsible AI development and deployment may gain a competitive advantage in the marketplace.
  5. Collaboration and Standards: The EU AI Act may stimulate collaboration among businesses, research institutions, and regulatory bodies to develop industry standards and best practices for AI governance. Participating in such initiatives can help businesses stay ahead of regulatory developments and shape the future direction of AI policy within the EU.

Penalties

One major impact will be new and increased penalties.

Article 99 lays out the penalties. Non-compliance with the prohibition of the Artificial Intelligence practices referred to in Article 5 (prohibited AI practices) shall be subject to administrative fines of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

Further, non-compliance of an Artificial Intelligence system with any of the provisions related to operators or notified bodies, other than those laid down in Article 5, shall be subject to administrative fines of up to EUR 15,000,000 or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

The supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request shall be subject to administrative fines of up to 7,500,000 EUR or, if the offender is an undertaking, up to 1 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
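Each of the fine tiers above takes the higher of a fixed cap and a percentage of worldwide annual turnover. The calculation can be sketched as follows (tier labels are shorthand for the provisions cited above):

```python
# Fine tiers under Article 99: (fixed cap in EUR, share of worldwide
# annual turnover); the applicable fine is the higher of the two.
FINE_TIERS_EUR = {
    "prohibited practices (Art. 5)": (35_000_000, 0.07),
    "other operator/notified-body obligations": (15_000_000, 0.03),
    "incorrect or misleading information": (7_500_000, 0.01),
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Maximum administrative fine for an undertaking, per Article 99."""
    fixed_cap, turnover_share = FINE_TIERS_EUR[violation]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# An undertaking with EUR 1 billion turnover: 7% (EUR 70M) exceeds
# the EUR 35M cap, so the turnover-based figure applies.
print(max_fine("prohibited practices (Art. 5)", 1_000_000_000))  # 70000000.0
```

For smaller undertakings the fixed cap dominates; for large ones the turnover percentage does, which is what gives the Act its bite against global companies.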

Impact on India

A major impact may fall on deployers of Artificial Intelligence systems from India, mostly Indian businesses operating in the European Union. The implementation of the EU AI Act will increase costs and compliance burdens for Indian companies in the EU; in addition, many Indian AI startups and companies will have to meet the conformity standards under the EU AI Act if they ever wish to sell or use their AI technology in the EU, which will impact them greatly.[11] This would also be an extension of the Brussels Effect[12], which refers to the EU’s power to unilaterally set standards across the world through its ability to influence businesses and companies to conform to its standards. A clear example of the Brussels Effect is the General Data Protection Regulation (GDPR), which has become a global standard for data protection requirements.

The AI Act could have a similar effect, becoming the standard that AI stakeholders follow for global market access in jurisdictions beyond the EU. Therefore, as India grapples with the current realities of AI, the EU AI Act may become the standard legislation that Indian legislators refer to, or take inspiration from, to regulate AI in a manner befitting a rapidly evolving democracy.

The principles of human rights in The EU AI Act

The EU AI Act complies with both the UNESCO guidelines for AI[13] and the UNGA principles[14] for the use of AI. The UNESCO guidelines, adopted in November 2021 by all 193 member states, were intended to ensure that AI is developed in a manner that protects human rights, and they treat countering AI bias as a central problem. The EU AI Act prioritizes a human-rights-centric approach to AI and specifically mandates that providers of high-risk and general-purpose AI systems counter the creation of a negative AI feedback loop, wherein the AI system perpetuates bias by recycling its inputs as outputs.

The UNGA AI principles, adopted on March 21, 2024,[15] reiterate the importance of the 17 Sustainable Development Goals established by the UN and the use of AI to further them. The EU AI Act incorporates this idea by giving priority access to the AI regulatory sandbox to AI systems that could be used to serve the public.

Article 2(7) of the AI Act states that the Act does not affect the established GDPR privacy and security rules, without prejudice to Article 10(5) and Article 59 of the Act. Article 10(5) relates to combatting AI bias and allows providers of AI to process personal data, by way of derogation from the GDPR, to combat AI bias. Article 59 allows personal data to be processed without consent only in an AI regulatory sandbox, to develop AI systems for public benefit.

Further, AI vendors and operators must prepare for regular and highly complex conformance testing, and must accept the perpetual risk that a deployed or operational AI system may fail its next conformance test.

The EU AI Act serves as a landmark legislation that would not only ensure the regulation of AI but also serve as a template for multiple jurisdictions. The rise of AI is inevitable and it brings forth its own unique challenges. However, with proper laws, AI can be used to ensure a bright, vibrant and diverse future and these legislations were certainly made with these ideals in mind.

Ahana Bag (Associate) and Akshay Krishna P (former Intern) at S.S. Rana & Co. have assisted in the research of this Article.

[1] https://www.britannica.com/technology/artificial-intelligence

[2] https://www.wiley.law/alert-EU-Adopts-the-AI-Act-The-Worlds-First-Comprehensive-AI-Regulation

[3] Ibid.

[4] Supra note 1.

[5] FLOPS, or Floating-Point Operations per Second, is the primary measure of the speed of a calculation. For example, GPT-3 is trained on data requiring cumulative computation of 10^23 FLOPS, thereby making it a general-purpose AI system.

[6] technologyreview.com/2022/11/22/1063605/china-announced-a-new-social-credit-law-what-does-it-mean/

[7] https://artificialintelligenceact.eu/article/28/

[8] https://artificialintelligenceact.eu/article/70/

[9] https://www.eiopa.europa.eu/publications/ai-act-and-its-impacts-european-financial-sector_en

[10] https://www.medicept.com/2024/03/06/eu-ai-act-and-its-impact-on-the-medical-device-industry/#:~:text=The%20AI%20Act%20complements%20the,promoting%20a%20level%20playing%20field

[11] https://www.business-standard.com/world-news/european-union-s-ai-act-sets-clear-regulatory-framework-say-experts-124031401237_1.html

[12] https://internetfreedom.in/the-impending-eu-ai-act-and-its-potential-effect-on-indias-ai-policy/

[13] https://www.unesco.org/en/artificial-intelligence/recommendation-ethics

[14] https://news.un.org/en/story/2024/03/1147831

[15] https://news.un.org/en/story/2024/03/1147831
