MeitY unveils India’s Approach towards regulating Artificial Intelligence

November 14, 2025

By Vikrant Rana, Anuradha Gandhi and Prateek Chandgothia

Introduction

On November 5, 2025, the Ministry of Electronics and Information Technology (hereinafter referred to as ‘MeitY’) unveiled the AI Governance Guidelines (hereinafter referred to as ‘AI Guidelines’) under the IndiaAI Mission. The aim of these guidelines is to ensure the safe, inclusive, and responsible adoption of artificial intelligence across sectors by focusing on human-centric development, responsible AI and the mitigation of potential harms. The guidelines highlight ‘Do No Harm’ as a core principle and propose a robust governance framework to foster cutting-edge innovation and to develop and deploy AI safely for all while mitigating risks to individuals and society.[12]

Structure of the AI Guidelines

The AI Guidelines are divided into four parts. The first part lays down seven sutras that ground India’s AI governance philosophy. The second part examines key issues and recommendations through six pillars across three key domains:

  1. Enabling AI innovation through infrastructure and capacity building
  2. Regulating AI through policies, regulations and ensuring risk mitigation
  3. Oversight over AI systems through institutions and attributing accountability.

The third part of the AI Guidelines recommends an action plan, distributing the steps needed to operationalize the recommendations across short-, medium- and long-term time frames. It sets out actionable steps to ensure a whole-of-government approach by leveraging the Technology & Policy Expert Committee and the AI Governance Group for strategic oversight, and the AI Safety Institute for technical validation and safety research.

The fourth part is of great significance as it sets out practical guidelines for industry actors and regulators to ensure consistent and responsible implementation of the recommendations.[13]

India adopts a sectoral regulatory approach towards AI

Through the AI Guidelines, MeitY has now clarified that India will adopt a sector-based regulatory approach, allowing sectoral regulators and relevant ministries to formulate AI rules and regulations specific to each sector. This means that the Reserve Bank of India (hereinafter referred to as ‘RBI’), the Securities and Exchange Board of India, the Insurance Regulatory and Development Authority of India and other sectoral regulators will be in charge of regulating AI in their respective sectors. It is therefore affirmed that, currently, the government is not planning to enact any umbrella legislation to regulate AI in India.[14]

Adapting the 7 Sutras of the RBI FREE AI Framework for cross-sectoral applicability

On August 13, 2025, the Committee for developing the Framework for Responsible and Ethical Enablement of Artificial Intelligence in the Financial Sector, constituted by the RBI, submitted its report, i.e., the FREE AI Framework, which laid down seven guiding principles in the form of ‘sutras’ for adopting AI and Machine Learning in the financial sector.

(To read more on the RBI FREE AI framework, refer to – https://ssrana.in/articles/the-free-ai-framework-regulating-ai-in-financial-sector/ )

The AI Guidelines rework the seven sutras to extend their relevance for AI adoption across sectors. We have incorporated a brief set of sample steps[15] or self-assessment questions[16] alongside each sutra to illustrate the practical scope of these sutras and their implications for developers, deployers and end users.

  1. Trust is the foundation – Trust must be embedded across the value chain – i.e., in the underlying technology, the organizations building these tools, the institutions responsible for supervision, and in individuals using these tools responsibly.
    Sample Self-Assessment Questionnaire to ensure the Trust ‘sutra’
    1 Could the AI system affect human autonomy by interfering with the (end) user’s decision-making process in an unintended way?
    2 Did you consider whether the AI system should communicate to (end) users that a decision, content, advice or outcome is the result of an algorithmic decision?
    3 In case of a chatbot or other conversational system, are the human end users made aware that they are interacting with a non-human agent?


  2. People first – Humans should, as far as possible, have final control over AI systems, and human oversight is essential to maintain accountability.
    Sample Self-Assessment Questionnaire to ensure the People first ‘sutra’
    1 Is the AI system implemented in work and labour processes? If so, did you consider the task allocation between the AI system and humans to allow for meaningful interactions and appropriate human oversight and control?
    2 Can you describe the level of human control or involvement?
    3 Who is the “human in control” and what are the moments or tools for human intervention?
    4 Is there a self-learning or autonomous AI system or use case? If so, did you put in place more specific mechanisms of control and oversight?
    5 Did you take safeguards to prevent overconfidence in or overreliance on the AI system for work processes?
  3. Innovation over restraint – Active adoption must be encouraged and serve as a catalyst for impactful innovation. Responsible innovation should be prioritized over cautionary restraint.
  4. Fairness and equity – India’s AI governance framework should promote inclusive development. AI should be leveraged to advance this goal while mitigating risks of exclusion, bias, and discrimination.
    Sample Self-Assessment Questionnaire to ensure the Fairness and equity ‘sutra’
    1 Did you establish a strategy or a set of procedures to avoid creating or reinforcing unfair bias in the AI system, both regarding the use of input data as well as for the algorithm design?
    2 Did you consider diversity and representativeness of users in the data? Did you test for specific populations or problematic use cases?
    3 Did you put in place processes to test and monitor for potential biases during the development, deployment and use phase of the system?
  5. Accountability – AI developers and deployers should remain visible and accountable. Accountability should be clearly assigned based on the function performed, risk of harm, and due diligence conditions imposed.
    Sample steps to ensure the Accountability ‘sutra’
    1 Designate a member of senior leadership/management to be responsible and accountable for governance of the business’ development and use of AI systems, including enforcement of internal policies and procedures.
    2 Designate personnel responsible for keeping the business current on regulatory and technical developments.
    3 Clearly define all organizational roles and responsibilities associated with the design, development, use and deployment of AI systems, and ensure that such roles and responsibilities align under the AI leader’s portfolio.
    4 Establish and enforce development guidelines to hold employees dealing with AI systems accountable.
  6. Understandable by design – AI systems must have clear explanations and disclosures to help users and regulators understand how the system works, what it means for the user, and the likely outcomes intended by the entities deploying them, to the extent technically feasible.
    Sample steps to ensure the Understandable by design ‘sutra’
    1 Inform individuals that they are interacting with an AI system at the time of interaction, such as through a privacy notice or other mechanisms (e.g., labels, disclaimers)
    2 Employ a variety of methods to explain AI systems, such as visualizations, model extraction and feature importance (a brief feature-importance sketch follows this list).
    3 Use language, concepts and terms most understandable to the particular audience of the explanation.
    4 Be transparent about the design of AI systems – disclose information about:

    • Design goals
    • Data inputs
    • The construction and operation of the system
    • System outputs and their impacts on targeted individuals/communities and/or society as a whole

    (To read more on the importance of understandability by design in AI Systems, refer to – https://ssrana.in/articles/ai-chats-became-public-records-privacy-crisis-unfolds/ )

  7. Safety, Resilience and Sustainability – AI systems must minimize risks of harm, be robust and resilient, and have capabilities to detect anomalies and provide early warnings to limit harmful outcomes. Additionally, AI development efforts should be environmentally responsible and resource-efficient, and the adoption of smaller, resource-efficient ‘lightweight’ models should be encouraged.[1]
    Sample Self-Assessment Questionnaire to ensure the Safety, resilience and sustainability ‘sutra’
    1 Did you consider different types and natures of vulnerabilities, such as data pollution, attacks on physical infrastructure, and cyber-attacks?
    2 Did you put measures or systems in place to ensure the integrity and resilience of the AI system against potential attacks?
    3 Did you ensure that your system has a sufficient fallback plan if it encounters adversarial attacks or other unexpected situations (for example technical switching procedures or asking for a human operator before proceeding)?
    4 Did you define thresholds and did you put governance procedures in place to trigger alternative/fallback plans?
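
By way of illustration only, the short Python sketch below (ours, not part of the AI Guidelines) shows one of the explanation methods referred to under the ‘Understandable by design’ sutra, namely feature importance, computed via scikit-learn’s permutation importance on a hypothetical model; the dataset and feature names are assumed placeholders.

```python
# Illustrative sketch of the 'feature importance' explanation method
# mentioned under sutra 6. Model, dataset and feature names are
# hypothetical placeholders, not prescribed by the AI Guidelines.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for a real decision-making dataset.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = ["income", "tenure", "age", "region_code", "prior_defaults"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much model accuracy drops when a feature
# is randomly shuffled - a model-agnostic measure of its influence.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Publishing such importance scores alongside the design goals, data inputs and system outputs listed above is one way a deployer could make an AI system’s behaviour legible to users and regulators, to the extent technically feasible.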

Incentive mechanisms to drive industry-led self-regulation

The AI Guidelines recommend voluntary frameworks, such as industry codes of practice, technical standards and self-certifications, as an important layer of risk mitigation in India’s AI governance framework. Some examples of these voluntary frameworks are:

  1. The Developer’s Playbook for Responsible AI in India, published by NASSCOM, for responsible AI principles.[2]
  2. The International Code of Conduct for Organizations Developing Advanced AI Systems, adopted at the G7 Hiroshima meeting, for voluntary commitments.[3]
  3. Technical guidelines issued by standard-setting bodies like the Telecommunication Engineering Centre and the Bureau of Indian Standards.
  4. Certification for AI tools in telecom, education, health and law through self-assessment or third-party review and audits of AI systems, with results disclosed to the public or regulators in the form of certification marks.

Entities which adopt these voluntary measures may be granted incentives such as:

  1. Access to regulatory sandboxes.
  2. Public recognition through certifications, ratings, or endorsement by the government.
  3. Venture capital being directed to such entities.
  4. Technical assistance, toolkits and playbooks to make voluntary compliance easier.[4]

Ensuring Accountability alongside Voluntary Frameworks

The AI Guidelines acknowledge the need for enforcing accountability, as the suggested voluntary measures lack enforceability. They suggest certain accountability mechanisms relying on peer pressure, reputational incentives, and institutional oversight. These mechanisms recommend that firms publish transparency reports on red-teaming results, impact assessments, or risk mitigation steps. Self-certification through auditors and standards bodies is also encouraged, along with updating service terms to reflect commitments. Committee hearings may be conducted by regulators or parliamentary bodies to assess voluntary compliance efforts. Firms are encouraged to implement techno-legal measures that build compliance into system design. Lastly, competitors and civil society may be allowed to observe and report violations. Currently, these mechanisms do not levy any penalties or fines for non-compliance; however, it is stated that MeitY may publish a schedule to make these mechanisms legally enforceable compliances in the next 9-12 months.[5]

Practical Guidelines for Entities adopting AI in business workflows

Under the fourth part of the AI Guidelines, MeitY has laid down important practical guidelines and recommendations for any person involved in developing or deploying AI systems in India:

  1. Compliance with existing laws – Relevant organizations must comply, and be able to demonstrate compliance, with existing laws such as the Information Technology Act, 2000 (hereinafter referred to as the ‘IT Act’), the Digital Personal Data Protection Act, 2023 (hereinafter referred to as the ‘DPDPA’), the Consumer Protection Act, 2019, and existing intellectual property and criminal laws as applicable.
  2. Adoption of voluntary measures – Organizations must adopt voluntary measures such as principles, codes and standards with respect to privacy and security, fairness, inclusivity, non-discrimination, transparency, and other technical and organizational measures.
  3. Grievance Redressal – Organizations must enable the users to report AI-related harms and ensure resolution of such issues within a reasonable timeframe.
  4. Ensure Transparency – Organizations must publish transparency reports that evaluate the risk of harm to individuals and society in the Indian context. In case such reports contain any sensitive or proprietary information, they should be shared confidentially with relevant regulators.
  5. Implement techno-legal solutions – Organizations must implement risk-mitigating mechanisms including privacy-enhancing technologies, machine unlearning capabilities, algorithmic auditing systems, and automated bias detection mechanisms (a brief bias-detection sketch follows this list).[6]
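
To illustrate the last point, the minimal Python sketch below (ours, not mandated by the AI Guidelines) shows one form of automated bias detection: computing the disparate impact ratio of favourable outcomes between a protected group and a reference group. The group labels, outcomes and the 0.8 threshold (the common ‘four-fifths’ rule of thumb) are all assumptions for illustration.

```python
# Illustrative automated bias detection check: disparate impact ratio.
# All data, group labels and the 0.8 threshold are hypothetical.
from typing import Sequence

def disparate_impact_ratio(outcomes: Sequence[int], groups: Sequence[str],
                           protected: str, reference: str) -> float:
    """Ratio of favourable-outcome rates: protected group vs reference group."""
    def rate(group: str) -> float:
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)  # assumes group is present in data
    return rate(protected) / rate(reference)

# Hypothetical model decisions (1 = favourable outcome) and group labels.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(outcomes, groups, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb: flag for human review
    print("Potential bias detected - escalate for review")
```

Running such a check automatically during development, deployment and use would also answer question 3 of the fairness questionnaire above, which asks for processes to test and monitor for potential biases across the system’s lifecycle.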

Evaluating the viability of existing Indian laws to regulate AI-related harms

India’s approach to AI involves utilizing and amending existing laws and formulating sectoral regulations. Therefore, laws and regulations across domains such as information technology, data protection, intellectual property, competition law, media law, employment law, consumer law and criminal law, amongst others, shall apply to developers and deployers of AI systems.

Illustration 1: The use of deepfakes to impersonate individuals can be regulated by provisions under the IT Act and the Bharatiya Nyaya Sanhita (hereinafter referred to as the ‘BNS’).

(To read more on this, refer to – https://ssrana.in/articles/2025-it-rules-amendment-regulating-synthetically-generated-information-in-indias-ai-and-privacy-landscape/)

Illustration 2: The use of personal data without user consent to train AI models is governed by the Digital Personal Data Protection Act.

However, there are certain gaps in the existing laws which need to be addressed, and certain amendments need to be introduced across legislations to ensure effective and clear application and understanding.

  1. Clarification regarding applicability: Clarity is required as to how definitions such as ‘intermediary’, ‘publisher’ and ‘computer system’ under the IT Act would apply to entities in the AI value chain, i.e., the ‘developer’, ‘deployer’ and ‘end-user’, since some modern AI systems generate data based on user prompts or even autonomously, and refine their outputs through continuous learning.
  2. Clear and specific legal immunities: Under Section 79 of the IT Act, legal immunity is available to intermediaries for unlawful third-party content, provided they do not initiate the transmission of data, select the recipient of the data or modify it. It appears that such legal immunity would not be available to many types of AI systems that generate or modify content.
  3. Training of AI models on publicly available personal data: Key questions need to be answered on the applicability of the DPDPA. These include whether the principles of collection and purpose limitation are compatible with how modern AI systems operate; the role of ‘consent managers’ in AI workflows; the value of dynamic and contextual notices in a world of multi-modal AI and ambient computing; and the scope of the research and ‘legitimate use’ exceptions for AI development.[7]

Indian Approach vis-à-vis Global Approach – Deviance or Coherence?

The AI Guidelines showcase both the adoption of global approaches and standards of AI regulation and their tailoring to the specific legal, economic and societal fabric of India, drawing inspiration from different jurisdictions:

Why the Pro-Innovation Approach?

It adopts a pro-innovation approach to facilitate and encourage the development and adoption of AI while balancing it against associated risks through voluntary mechanisms and self-regulation. This approach is less stringent than that of the European Union, whose comprehensive EU AI Act prescribes stringent obligations and strict penalties for high-risk AI systems. India’s approach closely aligns with those adopted by countries like Japan and the United States of America (hereinafter referred to as the ‘USA’). Japan adopted its law on the promotion of AI-related technologies in May 2025, which establishes an AI Strategy Center and implements non-binding guidelines to promote innovation and adoption; the framework emphasizes voluntary compliance and international cooperation. Similarly, the USA has adopted a pro-innovation approach that emphasizes innovation, infrastructure development and international diplomacy to promote American leadership and global competitiveness. Voluntary frameworks, such as the NIST AI Risk Management Framework, along with certain executive orders relating to AI governance, apply.[8]

Is India’s sectoral regulatory approach the first of its kind?

As discussed in the preceding sections of this article, India’s AI Guidelines, in adopting a sector-based regulatory approach, resemble the approach adopted by the United Kingdom (hereinafter referred to as the ‘UK’). As in India, certain existing legislations and regulations in the UK, such as the Data Protection Act 2018, the Human Rights Act 1998 and the Equality Act 2010, have cross-sectoral application and apply to the AI value chain. The UK’s sector-specific regulatory bodies have also adapted their approaches to AI-enabled technologies. For example, in 2022 the UK’s Medicines and Healthcare products Regulatory Agency published a roadmap clarifying in guidance the requirements for AI and software used in medical devices. The UK government has also clarified that consumer law and tort law shall apply to the AI value chain in specific scenarios.[9]

What to expect next in light of these AI Guidelines?

  1. Extensive amendments to the existing legal framework – The AI Guidelines recommend a significant overhaul of existing legislations to make them enforceable against, and applicable to, the AI value chain. These amendments shall aim to define the status of deployers, developers and end users under existing laws such as the IT Act, the BNS and the DPDPA.
  2. Addressing the fast-paced development of AI systems – The current AI Guidelines adopt a flexible approach to AI regulation. They aim to prescribe baseline obligations, largely centered on ethical and fair AI systems, while sectoral regulators, along with standard-setting bodies, shall define specific compliances and penalties for non-compliance wherever applicable. The applications and use cases of AI systems vary widely across sectors, and this uneven growth may prove impossible for a single centralized legislative body to track.
    This flexible approach shall therefore facilitate the efficient and effective tracking, identification, and mitigation of sector-specific risks by the respective regulators. Illustratively, in high-velocity algorithmic trading, direct human oversight is ineffective given the speed at which such systems operate. In such cases, safeguards such as circuit breakers, automated checks, or system-level constraints should be considered (a brief circuit-breaker sketch follows this list). These are sector-specific risk mitigation measures and may not be fit to apply to all AI systems.
  3. Clarification on copyright law’s ‘fair use’ by AI systems – The AI Guidelines acknowledge key questions around copyright law, including the legality of using copyrighted works in AI training and its implications, and the copyrightability of works produced by generative AI systems. They propose a balanced copyright framework suited to India’s needs. They note that under Section 52 of the Indian Copyright Act, limited ‘fair dealing’ exceptions apply for private or personal use, including research. These exceptions are restricted to non-commercial use and do not extend to organizational or institutional research. As a result, they may not cover many types of modern AI training. The Department for Promotion of Industry and Internal Trade is currently working on a balanced approach that enables text and data mining as fair use, with the objective of fostering innovation while retaining provisions to protect the rights of copyright holders.
    Inference may be drawn from cases like Authors Guild v. Google, Inc. and Authors Guild, Inc. v. HathiTrust, where US courts held digitization for creating search indexes to be transformative since it served a different purpose from the original works and enabled new types of research. Additionally, the amount and substantiality of the portion used, and whether only short segments of text are displayed, shall also play a major role in deciding on the fair use of copyrighted works for AI model training.[10]
  4. Criminalization of deepfakes – The AI Guidelines recommend amending the provisions of the IT Act and the BNS to cover AI-generated content. This indicates a significant shift towards criminalizing the malicious use of AI, especially the generation and transmission of deepfakes. A recently proposed amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 introduces labelling and transparency obligations for intermediary platforms to address the unethical use of synthetically generated information.
    (To read more on the proposed amendment, refer to – https://ssrana.in/articles/2025-it-rules-amendment-regulating-synthetically-generated-information-in-indias-ai-and-privacy-landscape/)
  5. CERT-In’s expansive guidelines – The AI Guidelines designate CERT-In to leverage its existing incident reporting mechanism to monitor vulnerabilities in AI systems across critical sectors and to support the development of AI-driven threat detection tools, such as anomaly detection and deepfake detection, to counter AI-enabled disinformation. These incident reports[11] can be used to develop risk frameworks that apply to sensitive sectors and to the protection of critical infrastructure, such as telecom networks, energy grids and nuclear plants.
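
As flagged under point 2 above, a circuit breaker is one system-level safeguard for AI systems that act faster than human oversight can follow. The minimal Python sketch below (ours, not from the AI Guidelines) halts automated orders once a loss limit or an order-rate limit is crossed; all thresholds, names and values are hypothetical.

```python
# Illustrative circuit-breaker safeguard for a high-velocity automated
# trading system. Thresholds and the trading workflow are hypothetical.
import time

class CircuitBreaker:
    def __init__(self, max_loss: float, max_orders_per_sec: int):
        self.max_loss = max_loss
        self.max_orders_per_sec = max_orders_per_sec
        self.cumulative_loss = 0.0
        self.order_times: list[float] = []
        self.tripped = False

    def record_fill(self, pnl: float) -> None:
        """Track realized losses; trip the breaker once the limit is hit."""
        self.cumulative_loss += -min(pnl, 0.0)
        if self.cumulative_loss > self.max_loss:
            self.tripped = True  # losses exceeded threshold: halt trading

    def allow_order(self) -> bool:
        """Gate every outgoing order; deny once the breaker has tripped."""
        if self.tripped:
            return False
        now = time.monotonic()
        # Keep only orders placed within the last second.
        self.order_times = [t for t in self.order_times if now - t < 1.0]
        if len(self.order_times) >= self.max_orders_per_sec:
            self.tripped = True  # abnormal order velocity: halt and alert
            return False
        self.order_times.append(now)
        return True

breaker = CircuitBreaker(max_loss=10_000.0, max_orders_per_sec=100)
if breaker.allow_order():
    pass  # submit the order via the trading system here
breaker.record_fill(pnl=-12_500.0)     # a large loss trips the breaker
assert breaker.allow_order() is False  # further orders are blocked
```

The design point is that the safeguard sits outside the AI model itself: whatever the model decides, every order passes through a deterministic gate that a human operator can audit and reset.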

[1] Pg 5, India AI Governance Guidelines, 2025

[2] https://nasscom.in/ai/pdf/the-developer’s-playbook-for-responsible-ai-in-india.pdf

[3] https://www.mofa.go.jp/files/100573473.pdf

[4] Pg 28, India AI Governance Guidelines, 2025

[5] Pg 31, India AI Governance Guidelines, 2025

[6] Pg 42, India AI Governance Guidelines, 2025

[7] Pg 18, India AI Governance Guidelines, 2025

[8] Pg 49-50, India AI Governance Guidelines, 2025

[9] https://www.legalnodes.com/article/uk-ai-regulations

[10] https://www.4ipcouncil.com/application/files/7517/2189/4919/Copyright_Infringement_and_AI__A_Case_Study_of_Authors_Guild_v._OpenAI_and_Microsoft.pdf

[11] https://www.cert-in.org.in/s2cMainServlet?pageid=VLNLIST

[12] https://www.pib.gov.in/PressReleasePage.aspx?PRID=2186639

[13] Pg 1, India AI Governance Guidelines, 2025

[14] Pg 9, India AI Governance Guidelines, 2025

[15] https://trustarc.com/wp-content/uploads/2024/05/Responsible-AI-Checklist-.pdf

[16] https://ec.europa.eu/futurium/en/ethics-guidelines-trustworthy-ai/pilot-assessment-list-ethics-guidelines-trustworthy-ai.html

For more information please contact us at: info@ssrana.com