By Vikrant Rana and Anuradha Gandhi
Introduction
The accuracy of Facial Recognition Technology (hereinafter referred to as “FRT”) has been debated ever since its inception. This debate resurfaced when, after reviewing employee attendance data, the CEO of a well-known cab company criticized low attendance rates. In response, an employee challenged the accuracy of the attendance data, attributing the errors to flaws in the facial recognition system[1]. Such inaccurate results can lead to penal consequences for data subjects, raising serious concerns about the reliability and fairness of the technology in critical decision-making.
These concerns are also illustrated by one of the most significant cases in this area: the Clearview AI case[2].
In this case, Clearview used web crawlers to scan websites for images containing faces, including social media, professional sites, blogs, and publicly accessible videos. It targeted all publicly available images without requiring user logins. From each photograph, a biometric template was created, forming a unique digital fingerprint based on facial features. These templates and associated metadata were stored in a centralized database, searchable by digital fingerprint. The company provided an online platform that acted as a facial recognition search engine, allowing users to identify individuals by uploading a face image. From this image, the tool generated a corresponding digital fingerprint and searched the database for photographs linked to similar fingerprints. The search results included matching images, URLs, and contextual information such as social media profiles, enabling detailed profiling of individuals. The CNIL (National Commission for Information Technology and Civil Liberties) found the company’s practices to be intrusive, particularly the collection and processing of sensitive biometric data without consent. It imposed a €20 million fine on Clearview AI and issued an injunction requiring the company to comply with its General Data Protection Regulation obligations.
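The pipeline described in the case, reducing each face to a "digital fingerprint" and matching query images against a central index, can be sketched in a few lines. This is an illustrative assumption-laden sketch, not Clearview's actual implementation: the random vectors stand in for a real face-embedding model, and the URLs and similarity threshold are invented for the example.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query, database, threshold=0.9):
    """Return (url, score) pairs whose stored embedding matches the query."""
    matches = []
    for url, embedding in database.items():
        score = cosine_similarity(query, embedding)
        if score >= threshold:
            matches.append((url, score))
    return sorted(matches, key=lambda m: m[1], reverse=True)

rng = np.random.default_rng(0)
face_a = rng.normal(size=128)  # stand-in embedding of a scraped photo
database = {
    "https://example.com/profile/alice": face_a,   # hypothetical URLs
    "https://example.com/blog/bob": rng.normal(size=128),
}
# A second photo of the same person yields a nearby embedding.
query = face_a + rng.normal(scale=0.05, size=128)
print(search(query, database))
```

The privacy issue discussed below follows directly from this design: once an embedding and its source URL are indexed, any uploaded photo of that person can be linked back to their online presence.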
FRT raises serious privacy concerns, as demonstrated in the above case. The indiscriminate collection of biometric data without consent or clear purpose limitation violates privacy rights. Retaining sensitive data without a defined retention period breaches data protection principles, and the lack of transparency, where data subjects were not informed about the data collection practices or how their personal data was being used, aggravates the issue. Moreover, data subjects did not have rights such as access or erasure of their data, further infringing on their privacy. These practices create liability for companies that fail to comply with data protection laws, highlighting the need for accountability and fair compensation for affected individuals.
What is Facial Recognition Technology, and what are its issues?
Facial Recognition Technology is a biometric technology that uses algorithms to analyze and map facial features from images or videos to identify or verify a person’s identity. For instance, unlocking a smartphone with facial recognition is a common application of this technology. The device uses its camera to capture the user’s face, maps its unique features, and compares them to a stored template to verify the user’s identity. The main concerns with using such technology are:
- Lack of Human Intervention:
Facial recognition technology relies heavily on artificial intelligence for decision-making. This raises questions about compliance with Article 14 of the European Union’s Artificial Intelligence Act, which mandates human oversight in high-risk AI applications. The absence of human intervention could result in unchecked errors, biased outcomes, or unfair consequences.
- Bias and Errors:
Studies have shown that facial recognition systems can have significant biases and errors, particularly when used without human oversight. For instance, in the Nijeer Parks case[3], Parks was wrongfully arrested due to police misuse of unreliable facial recognition technology. Moreover, a study by MIT and Stanford University revealed that three commercially released facial-analysis programs showed significant error rates when determining gender based on skin type. While error rates for light-skinned men remained below 1%, error rates for darker-skinned women surged dramatically, exceeding 20% in some cases and 34% in others[4].
- Withdrawal:
Unlike passwords or other forms of authentication, biometric data such as facial features cannot be changed if compromised, as seen in the BioStar 2 breach[5], in which hackers gained access to facial recognition and fingerprint data. This raises questions about whether individuals can effectively withdraw their data from facial recognition systems once it has been captured, stored, or shared.
- Infringement on Privacy:
Deployment of facial recognition technology often occurs without individual consent, such as through street surveillance. For instance, in China, facial recognition technology is extensively used for mass surveillance, including monitoring public spaces, tracking citizens’ movements, and identifying individuals at protests or gatherings[6].
- Biometric Cloning:
In a recent report, the Ministry of Home Affairs highlighted that “cybercriminals are cloning the biometric data of Aadhaar users uploaded on states’ registry websites that host sale deeds and agreements with the intention of carrying out unauthorized withdrawals through Aadhaar Enabled Payment System”[7].
- Third-Party Involvement:
The integration of facial recognition technology by companies often involves collaboration with third-party vendors, necessitating the sharing of sensitive biometric data, which may lead to unauthorized access, misuse, or breaches of personal information. For instance, in 2019, photos of US border crossers, including faces and vehicle plate numbers, were compromised in a malware attack after a subcontractor working with US Customs and Border Protection (CBP) unlawfully transferred sensitive images from CBP systems to its own network[8].
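The bias-and-errors concern above comes down to a single design choice: the 1:1 verification step (as in smartphone unlock) accepts or rejects based on whether a similarity score clears a fixed threshold, and that threshold trades false accepts against false rejects. The sketch below is a hypothetical illustration, not any vendor's actual algorithm; the embedding vectors, noise level, and threshold are assumptions:

```python
import numpy as np

def verify(captured, template, threshold=0.95):
    """Return True if a freshly captured face embedding matches the enrolled template."""
    sim = np.dot(captured, template) / (
        np.linalg.norm(captured) * np.linalg.norm(template)
    )
    return bool(sim >= threshold)

rng = np.random.default_rng(1)
template = rng.normal(size=128)                        # enrolled at device setup
genuine = template + rng.normal(scale=0.03, size=128)  # same user, new capture
impostor = rng.normal(size=128)                        # a different face

print(verify(genuine, template))   # genuine capture clears the threshold
print(verify(impostor, template))  # unrelated face does not
```

If the embedding model produces noisier vectors for some demographic groups, their genuine scores sit closer to the threshold, which is one mechanism behind the skewed error rates reported in the MIT study cited above.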
Aadhaar and Facial Recognition Technology
The concurrent use of Facial Recognition Technology and government-issued identification cards, such as Aadhaar or PAN cards, by companies for verification purposes raises significant privacy concerns. A notable instance is the Digi Yatra initiative in India, which integrates Aadhaar-based verification with facial recognition for air travel[9].
Moreover, the Supreme Court has observed that, “If biometric authentication is attached to every transaction entered into by a person, it could lead to aggregation of metadata of citizens, and can be used for many purposes, including surveillance, thus necessitating the need for data protection[10].”
In light of these concerns, it is imperative for companies to clearly articulate the purpose of collecting Aadhaar information when FRT is already employed. This transparency ensures compliance with data protection regulations and upholds individuals’ privacy rights.
Repercussions
In India, data protection for body corporates is governed by Section 43A of the Information Technology Act, 2000 (“IT Act”) and the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (“IT Rules”). Section 43A holds entities handling sensitive personal data liable to compensate individuals for losses caused by negligence in implementing reasonable security practices. Rule 3 of the IT Rules[11] identifies sensitive personal data, including biometric information, as requiring heightened protection.
Under Section 30 of the Aadhaar Act[12], biometric data is explicitly deemed sensitive personal data and is afforded similar protection. Agencies contracted by the Unique Identification Authority of India (UIDAI), such as Registrars and Enrolling Agencies, fall under Section 43A and can be held liable for breaches if they fail to ensure the security of Aadhaar holders’ data.
According to recent statistics[13], India has 170 FRT systems, 20 of which are operational, while the rest are in various stages of implementation. Most FRT systems are found in Maharashtra, Delhi, and Telangana, with shares of 11.2%, 8.8%, and 7.1% respectively. Of these 170 FRT systems, the defence sector has the largest share (20.6%), followed by the education sector (13.5%), transport (12.4%), and public infrastructure (10%). Firms using these technologies must comply with data protection laws, as any negligence in handling such sensitive data could result in liability for breaches, with companies being held accountable for failing to secure biometric information or for misusing it.
Possible Solutions from an Organizational Standpoint
- Transparency and Consent:
NITI Aayog guidelines[14] advocate a consent-based policy for the use of facial recognition technology, requiring explicit approval from individuals before their biometric data is collected. Alternative means must be provided at various stages. For instance, CISF personnel can physically verify passengers’ travel IDs at airports, or organizations can use ID cards for verification. Clear signage and notifications should also inform individuals about the use of facial recognition technology and its purpose.
- Rights of Data Subjects:
Organizations should ensure that data subjects have clear rights regarding their biometric data, including the right to access and the right to request erasure of their data. Additionally, organizations should implement retention and deletion policies, as in the Digi Yatra example, where facial biometrics are deleted from airport databases 24 hours after a passenger’s flight departure[15]. Employees’ biometric data can be retained securely for daily attendance but should be permanently deleted when an employee leaves the organization. This ensures transparency and protects individuals’ privacy by giving them control over their data.
- Regular Cybersecurity Audits:
Frequent cybersecurity audits are essential to ensure system reliability, usability, and security while adapting to evolving digital threats. For instance, organizations can adopt standards such as ISO 27001 to safeguard biometric databases. Independent algorithmic audits by accredited auditors should also be conducted before deployment and at regular intervals to address biases, inaccuracies, and compliance issues.
- Purpose Limitation and Data Minimization:
Organizations must clearly define and communicate the specific purposes for collecting biometric data and ensure it is used only for those purposes. Data should be limited to what is necessary for the intended purpose and not retained or processed beyond what is required, minimizing the risk of misuse and ensuring that only essential data is collected.
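The retention and deletion rules recommended above can be sketched as a simple purge routine. This is a hypothetical illustration only: the record fields, the 24-hour airport window (modeled on the Digi Yatra example), and the delete-on-exit rule for employees are assumptions for the sake of the example.

```python
from datetime import datetime, timedelta, timezone

RETENTION_AFTER_DEPARTURE = timedelta(hours=24)  # Digi Yatra-style window

def purge_expired(records, now=None):
    """Drop biometric records whose retention period has lapsed; return what remains."""
    now = now or datetime.now(timezone.utc)
    kept = []
    for rec in records:
        if rec.get("employee_left"):
            continue  # delete on exit, regardless of record age
        departure = rec.get("flight_departure")
        if departure and now - departure > RETENTION_AFTER_DEPARTURE:
            continue  # past the 24-hour post-departure window
        kept.append(rec)
    return kept

now = datetime(2025, 1, 2, 12, 0, tzinfo=timezone.utc)
records = [
    {"id": 1, "flight_departure": now - timedelta(hours=30)},  # expired
    {"id": 2, "flight_departure": now - timedelta(hours=2)},   # retained
    {"id": 3, "employee_left": True},                          # delete on exit
]
print([r["id"] for r in purge_expired(records, now)])  # [2]
```

Running such a purge on a schedule, and logging each deletion, gives the organization an auditable trail that retention limits are actually enforced rather than merely stated in policy.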
Conclusion
Facial recognition technology can be a valuable tool, but its use demands caution and strict oversight. As highlighted by the ICO in the Serco Leisure case[16], alternatives like ID cards or fobs should be prioritized where feasible to prevent the misuse of sensitive biometric data. Implementing such measures, alongside robust regulations and human oversight, ensures that this technology is used responsibly, balancing innovation with the protection of individual rights.
Rishabh Gupta, Assessment Intern at S.S. Rana & Co. has assisted in the research of this article.
[2] CNIL decision concerning Clearview AI, available at: https://www.legifrance.gouv.fr/cnil/id/CNILTEXT000046444859?isSuggest=true
[3] Parks v. McCormac, available at: https://www.aclu.org/cases/parks-v-mccormac
[4] MIT News, study on gender and skin-type bias in commercial AI systems, available at: https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212
[5] vpnMentor report on the BioStar 2 leak, available at: https://www.vpnmentor.com/blog/report-biostar2-leak/
[11] The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011, available at: https://www.meity.gov.in/sites/upload_files/dit/files/GSR313E_10511(1).pdf
[12] The Aadhaar Act, 2016 (as amended), available at: https://uidai.gov.in/images/Aadhaar_Act_2016_as_amended.pdf
[14] Responsible AI for All – Adopting the Framework: A Use Case Approach on Facial Recognition Technology, available at: https://www.niti.gov.in/sites/default/files/2024-06/Responsible%20AI%20for%20All%20-%20Adopting%20the%20Framework%20A%20use-case%20approach%20on%20Facial%20Recognition%20Technology_0.pdf