By Anuradha Gandhi and Abhishekta Sharma
Introduction
Recently, the Central Drugs Standard Control Organisation (CDSCO) announced that software using Artificial Intelligence (AI) tools for cancer detection, such as applications analyzing CT scans, X-rays and other radiological data, will require mandatory regulatory approval prior to market deployment.[1] Currently, the leading sectors driving AI adoption include industrial and automotive, consumer goods and retail, banking, financial services and insurance, and healthcare. Collectively, these sectors account for approximately 60 percent of the total value generated by AI.[2]
What qualifies as a Medical Device?
Under the Indian regulatory framework, a medical device is any instrument, apparatus, appliance, implant, material or software (including accessories) intended for use in humans or animals to diagnose, prevent, monitor, support, treat or alleviate diseases, injuries or disabilities; to support or modify anatomy or physiological processes; to sustain life; to disinfect medical devices; or to control conception. It does not achieve its primary intended action through pharmacological, immunological or metabolic means, though it may be assisted in its function by such means.[3]
Types of Medical Device Software
Medical device software is broadly classified into two types:
- Software as a medical device
- Software in a medical device
| Software as a Medical Device (SaMD) | Software in a Medical Device (SiMD) |
| --- | --- |
| Standalone software that is not part of a hardware medical device, is intended to perform one or more medical purposes and generates new information on its own. It can be interfaced with another medical device or run on general-purpose computing platforms. | Software that is part of a medical device's hardware and influences the use of that medical device. It does not perform a medical purpose on its own and is not intended to generate new information on its own. |
| For example, an AI/ML-based tool intended for triage and/or screening of cancer lesions. | For example, embedded software that controls or drives an insulin pump to deliver a calculated dose of insulin. |
Regulatory framework under the Medical Device Rules, 2017
The Medical Device Rules, 2017 (hereinafter referred to as MDR 2017) establish a comprehensive regulatory framework for medical devices in India. Devices are categorized into four classes based on their risk levels:
- Class A (low risk),
- Class B (low-moderate risk),
- Class C (moderate-high risk),
- Class D (high risk).
This risk-based classification determines the regulatory oversight and licensing requirements applicable to the manufacture, import and sale of medical devices, including software-based medical technologies.
Devices that are invasive, entering the body through a body orifice or making contact with the internal body fluid path, are classified as moderate-high risk medical devices (Class C).
Personal Data collected in Cancer treatment
AI used for cancer diagnosis and treatment has been classified under Class C and inherently involves the collection and processing of sensitive personal information, starting with identifying people who have been diagnosed with cancer or have received cancer care. The information collected includes:[4]
- Patient details such as name, gender, age, birthplace, ethnicity
- Type of tumor/cancer
- Stage of the cancer
Possible Security and Data Privacy risks
Cancer detection AI tools process vast datasets of medical images, genomic profiles and patient histories to flag malignancies early. As these tools increasingly rely on sensitive personal information, concerns about data privacy and security have become paramount.
- Data Breaches
Using AI for cancer diagnosis and treatment introduces significant risks due to the sensitive nature of the health information involved. AI systems require massive amounts of patient data to function effectively, including not only medical histories but also genetic information, lifestyle data and even social factors that could affect health outcomes. The sheer volume of data increases the attack surface, making it easier for cybercriminals to infiltrate systems.
- AI System Vulnerabilities and Bias
AI systems rely heavily on sensitive personal data, which makes them attractive targets for cyberattacks and can cause serious harm if breached. Hackers can manipulate the data, leading to incorrect diagnoses.[5] Algorithmic bias can also harm patients and widen health inequalities. If a model is trained on non-diverse, publicly available data carrying inherent bias based on skin tone, it can increase the risk of advanced disease, produce misguided results and reinforce existing disparities rather than reduce them.[6] Risk prediction algorithms can propagate existing social biases, manifesting as less accurate predictions for protected patient groups (see the audit sketch after this list).
- Data Anonymization Challenges
Healthcare providers typically attempt to anonymize patient data to protect privacy. However, research has shown that even anonymized data can be re-identified under certain circumstances, particularly when datasets are combined with other sources of information (see the linkage sketch after this list). In 2018, patients were re-identified using anonymized complaint datasets and data aggregated from public newspaper articles.[7]
- Consent and Data Ownership
The healthcare ecosystem involves data sharing between multiple organizations, including hospitals, research institutions and technology companies, which raises questions about patient consent and data ownership. The developer and the deployer are both AI stakeholders: the developer is responsible for creating the AI model, while the deployer is responsible for implementing the application or software. When a privacy risk or breach materializes, the question of data ownership and liability arises, and this lack of clarity raises concerns about protecting patient data. Further, patients may not be aware of how AI is being used, which makes it difficult for them to give informed consent.
- False Positives
AI might also flag healthy tissue as cancerous, causing needless biopsies, stress or delayed treatment.
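To make the bias concern above concrete, the following is a minimal sketch of a subgroup performance audit: it computes a diagnostic model's accuracy separately for each patient group so that disparities become visible. All names, group labels and records here are hypothetical illustrations, not part of any regulatory requirement.

```python
# Minimal sketch of a subgroup performance audit for a diagnostic model.
# All names and data are hypothetical; a real audit would use the model's
# actual validation set and clinically meaningful group labels.
from collections import defaultdict

def accuracy_by_group(records):
    """Compute prediction accuracy separately for each patient group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        total[rec["group"]] += 1
        if rec["prediction"] == rec["label"]:
            correct[rec["group"]] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical validation records: model prediction vs. confirmed diagnosis.
validation = [
    {"group": "A", "prediction": 1, "label": 1},
    {"group": "A", "prediction": 0, "label": 0},
    {"group": "B", "prediction": 0, "label": 1},  # missed malignancy
    {"group": "B", "prediction": 1, "label": 1},
]

for group, acc in accuracy_by_group(validation).items():
    print(f"group {group}: accuracy {acc:.2f}")
# A large gap between groups is the disparity described above.
```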
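The re-identification risk can likewise be illustrated with a short linkage-attack sketch: an "anonymized" hospital extract is joined to a public dataset on shared quasi-identifiers. All records, field names and values below are invented for illustration.

```python
# Minimal sketch of a linkage attack, assuming two hypothetical datasets
# that share quasi-identifiers (birth year, gender, postal code).
anonymized_records = [
    {"birth_year": 1968, "gender": "F", "pincode": "110001", "diagnosis": "lung carcinoma"},
    {"birth_year": 1975, "gender": "M", "pincode": "400050", "diagnosis": "melanoma"},
]

public_records = [
    {"name": "Jane Doe", "birth_year": 1968, "gender": "F", "pincode": "110001"},
]

QUASI_IDENTIFIERS = ("birth_year", "gender", "pincode")

def link(anon, public):
    """Re-identify anonymized rows whose quasi-identifiers match a public row."""
    matches = []
    for a in anon:
        for p in public:
            if all(a[k] == p[k] for k in QUASI_IDENTIFIERS):
                matches.append({"name": p["name"], "diagnosis": a["diagnosis"]})
    return matches

print(link(anonymized_records, public_records))
# [{'name': 'Jane Doe', 'diagnosis': 'lung carcinoma'}]
```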
Why is regulatory oversight needed?
Regulatory oversight is essential due to the complex and opaque nature of AI models, which makes it difficult to identify potential harm and assign liability when errors occur. Because AI systems often function as “black boxes”, mistakes can go unnoticed until they cause significant consequences.
A well-known example of AI error involved an AI coding assistance platform that repeatedly logged software developers out of their accounts whenever they attempted to use the service on more than one device. When affected users contacted customer service, a representative stated that “one device per subscription” was an official security policy. In fact, no such policy existed; it had been hallucinated by the representative, which was an LLM-based customer service chatbot, while the logouts were actually caused by an unrelated bug in the platform. Since multi-device workflows are standard practice for software engineers, the fabricated policy led to mass subscription cancellations.[8]
Similar challenges arise in high-stakes domains such as AI-driven cancer detection systems. These technologies involve multiple stakeholders, including data providers, algorithm developers, healthcare institutions and clinicians, across the development, validation, deployment and use of these systems. This distributed responsibility complicates accountability when an AI system produces incorrect or biased results that may affect patient outcomes. Regulatory oversight is therefore crucial to address data bias and patient safety.
Legal framework governing the AI models
European Union
- AI Act (EU AI Act)[9]
The EU AI Act is the most comprehensive AI-specific law enacted so far. It establishes a risk-based regulatory framework that classifies AI systems from minimal risk to high risk or unacceptable risk, with stricter obligations for higher-risk tools. Healthcare devices are covered under high risk and are required to meet transparency, safety, accountability, human oversight and documentation requirements.
Providers of AI models, including general-purpose models, are required to supply technical details and training data information to regulators.
- General Data Protection Regulation (GDPR)[10]
Article 22 of the GDPR further specifies the need for regulatory safeguards in AI-driven decision making. It grants individuals the right not to be subject to a decision based solely on automated processing, including profiling, where such decisions produce legal or similarly significant effects. It requires the presence of meaningful human involvement, transparency about the logic involved and safeguards to prevent errors and bias.
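As an illustration of what meaningful human involvement can look like in practice, below is a minimal sketch in which a model's malignancy score is treated only as a recommendation and a clinician makes the final call. The score threshold, function names and review logic are hypothetical assumptions, not anything prescribed by the GDPR.

```python
# Minimal sketch of a human-in-the-loop safeguard: the automated finding is
# a recommendation, and a human reviewer confirms or overrides it before any
# decision with significant effect is taken. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Finding:
    patient_id: str
    malignancy_score: float  # model output in [0, 1]

def decide(finding: Finding, clinician_review) -> str:
    """Never act on the model output alone; a human makes the final call."""
    suggestion = "biopsy" if finding.malignancy_score >= 0.5 else "routine follow-up"
    return clinician_review(finding, suggestion)

# Hypothetical reviewer that overrides borderline scores.
def reviewer(finding, suggestion):
    if 0.4 <= finding.malignancy_score <= 0.6:
        return "order additional imaging"  # clinician overrides the model
    return suggestion

print(decide(Finding("P-001", 0.55), reviewer))  # -> order additional imaging
print(decide(Finding("P-002", 0.92), reviewer))  # -> biopsy
```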
India
- Draft Guidance on Medical Device Software issued by CDSCO[11]
It requires manufacturers to obtain an appropriate license to manufacture or import medical device software before distribution in India; Class C software requires central approval from CDSCO. Further, it requires companies to implement a comprehensive Quality Management System (QMS) covering the entire software lifecycle. QMS practices should align with relevant standards such as those of the Bureau of Indian Standards (BIS), the International Organization for Standardization (ISO) or the International Electrotechnical Commission (IEC), ensuring that quality assurance is integrated into each stage of production. The risk management process should be integrated across the entire lifecycle of the medical device software. An Algorithm Change Protocol (ACP) may be devised, wherever applicable, based on the nature and risks associated with the medical device software. The ACP may contain the following information:
- A data management plan that includes how algorithm updates will be assessed, managed and validated so that changes do not compromise safety or intended performance.
- Protocols such as risk assessment, data governance and quality assurance strategies around model updates (a minimal sketch of such an update gate follows this list).
- Manufacturers/importers of medical device software need to comply with cybersecurity protocols such as IS/ISO 14971, IS/ISO 62304, etc.
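As a concrete illustration of the update-gating idea behind an ACP, the following is a minimal sketch of a deployment gate that rejects a model update unless it meets pre-specified performance criteria on a locked validation set. The thresholds, metrics and names are hypothetical assumptions; neither MDR 2017 nor the draft guidance prescribes this specific mechanism.

```python
# Minimal sketch of one element of an Algorithm Change Protocol: a gate that
# blocks a model update unless it meets a performance floor fixed in advance.
# All thresholds and names are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class ValidationReport:
    version: str
    sensitivity: float  # share of true malignancies the model flags
    specificity: float  # share of healthy cases the model clears

# Acceptance criteria pre-specified before the update is trained.
MIN_SENSITIVITY = 0.95
MIN_SPECIFICITY = 0.90

def approve_update(report: ValidationReport) -> bool:
    """Approve deployment only if the update meets the locked criteria."""
    ok = (report.sensitivity >= MIN_SENSITIVITY
          and report.specificity >= MIN_SPECIFICITY)
    status = "APPROVED" if ok else "REJECTED"
    print(f"model {report.version}: {status} "
          f"(sens={report.sensitivity:.2f}, spec={report.specificity:.2f})")
    return ok

approve_update(ValidationReport("v2.1", sensitivity=0.97, specificity=0.93))
approve_update(ValidationReport("v2.2", sensitivity=0.91, specificity=0.95))
```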
- Telemedicine Practice Guidelines under the Indian Medical Council (Professional Conduct, Etiquette and Ethics) Regulations, 2002[12]
The Guidelines require that technology platforms based on Artificial Intelligence/Machine Learning not counsel patients or prescribe any medicines to a patient. Only the Registered Medical Practitioner (RMP) is entitled to counsel or prescribe, and must communicate with the patient directly in this regard. While AI technology may be used to assist and support an RMP in patient evaluation, diagnosis or management, the final prescription or counseling has to be delivered directly by the RMP.
- Digital Personal Data Protection Act, 2023 and Rules thereunder
India’s primary legal framework for the collection, storage, processing and transfer of Personal Information (hereinafter referred to as PI) requires that a data fiduciary (a person who determines the purpose and means of processing PI) obtain valid consent before processing PI, specifying the purpose of use. Data may only be processed for the specific, explicitly stated purpose, limiting unauthorized use of data. It further requires conducting privacy impact assessments, embedding privacy by design in workflows and vendor contracts, and reporting breaches promptly.
Attributing Liability for harm caused due to AI use in healthcare
The National Consumer Disputes Redressal Commission, in Dr. Reba Modak v. Sankara Nethralaya, observed that hospitals are vicariously liable for the tortious acts of their doctors and employees carried out in the course of employment.[13] This position is also internationally accepted. A Minnesota court in the United States, in Popovich v Allina Health System,[14] observed that a hospital is vicariously liable for the negligence of its employees where the hospital has control over the actions of the employees, but if there is a break in the chain of control between employer and employee, the hospital cannot be held vicariously liable under the doctrine of respondeat superior. Under this theory, if a patient has a bona fide malpractice claim relating to a hospital employee’s tortious use of AI to direct the patient’s care, and the activities were within the employee’s scope of employment, then liability may flow to the hospital system.[15]
[2] https://www.pib.gov.in/PressReleasePage.aspx?PRID=2209737&reg=3&lang=1
[3] Draft Guidance Document on Conduct of Medical device software under Medical Device Rules, 2017
[4] https://seer.cancer.gov/registries/cancer_registry/data_collection.html
[5] https://www.lepide.com/blog/ai-in-healthcare-security-and-privacy-concerns/
[6] https://www.thelancet.com/journals/landig/article/PIIS2589-7500(21)00252-1/fulltext?tpcc=nleyeonai
[8] An AI Customer Service Chatbot Made Up a Company Policy—and Created a Mess | WIRED
[9] High-level summary of the AI Act | EU Artificial Intelligence Act
[12] Telemedicine_Practice_Guidelines.pdf
[14] Popovich v Allina Health System 946 NW 2d 885, 891 (MN 2020) (United States)


