Kerala High Court’s New AI Guidelines Set National Standard for Judicial Integrity

August 14, 2025

By Anuradha Gandhi and Rachita Thakur

Introduction

On July 19, 2025, the Kerala High Court published its “Policy Regarding Use of Artificial Intelligence Tools in District Judiciary” (hereinafter referred to as the “Policy”) for the responsible and restricted use of Artificial Intelligence in the judicial functions of the District Judiciary[1]. The Policy, addressed to all District Judges and Chief Judicial Magistrates, directs them to communicate it to all Judicial Officers and staff members under their jurisdiction and to take necessary steps to ensure strict compliance.

The Kerala High Court is the first High Court to issue a formally documented and binding set of guidelines restricting the use of Artificial Intelligence (AI) in district and subordinate courts.

Aim of the Policy

The Policy aims to establish guidelines for responsible use of AI tools in judicial work with the objective to ensure:

  1. That AI tools are used only in a responsible manner, solely as assistive tools and strictly for specifically allowed purposes;
  2. That under no circumstances are AI tools used as a substitute for decision-making or legal reasoning; and
  3. That judicial members and staff meet their ethical and legal obligations, ensuring human supervision, transparency, fairness, confidentiality and accountability are maintained at every stage of judicial decision-making.

Scope and Applicability

The Policy is made applicable to:

  1. All members of the district judiciary in Kerala and the employees assisting them in their diverse judicial work;
  2. All interns and law clerks working with the District Judiciary in Kerala;
  3. All kinds of AI tools, including but not limited to generative AI tools and databases that use AI to provide access to diverse resources, including case laws and statutes; and
  4. All circumstances where AI tools are used to perform or assist in the performance of any judicial work, without regard to the location and time of use, and irrespective of whether they are used on personal devices, court-owned devices or third-party devices.

Data Privacy and Security: Legal safeguards in the Policy

Through the Policy, the Kerala High Court acknowledges that while the use of AI tools can be beneficial, it also entails risks to individual privacy and data security and the erosion of trust in judicial decision-making. The Policy therefore sets out guiding principles for the use of AI for judicial purposes, which are also in alignment with the data privacy principles under the Digital Personal Data Protection Act, 2023.

Guiding Principles in the Use of AI Tools – Privacy Principles

The Policy sets out clear, stringent principles governing the ethical and responsible use of Artificial Intelligence in the district judiciary. It emphasizes that transparency, fairness, accountability and confidentiality are fundamental judicial values that must not be compromised by AI usage.

  1. Transparency, fairness, accountability and confidentiality: The Policy mandates that all members of the judiciary and employees ensure that any AI tool they use for official purposes adheres to the integral principles of transparency, fairness, accountability and confidentiality, which shall not be compromised by such use.
  2. Risk Mitigation: The Policy warns against using cloud-based or public AI tools for official case data. Most AI tools, including popular GenAI tools like ChatGPT and DeepSeek, are cloud-based, and any information input by users may be accessed by service providers. Submitting information such as facts of a case, personal identifiers or privileged communications may result in serious violations of confidentiality. Therefore, the use of all cloud-based services should be avoided, except those specifically vetted and approved by the judiciary. Approved AI tools shall be used only for their intended purposes.
  3. Preventing AI “Hallucinations” through Human Oversight: To address the problem of AI generating fictitious, misleading or off-topic results, the Policy mandates verification of AI outputs. Given that AI tools frequently produce incomplete, biased or erroneous outputs, all results, including citations, summaries and translations, must undergo strict human verification by judges or qualified translators, and any output found to be erroneous must be flagged and reported immediately.
  4. Risk of Automation Bias: The Policy takes a proactive stance on automation bias, ensuring that judicial officers and staff do not place undue trust in AI conclusions and fostering a vigilant, critical approach to all AI-generated outputs.
  5. Permissible Use: While approved AI tools may be used for routine administrative tasks such as scheduling cases or court management, human supervision is required at all times.
  6. Prohibited Uses: The Policy prohibits use of AI tools to arrive at any findings, reliefs, orders, or judgments under any circumstances. Judges are obligated to retain full responsibility for the content and integrity of any judicial order, judgment, or part thereof.
  7. Periodic Review and Audit: The Policy institutes a systematic audit requirement: all AI usage must be entered into official logs, and periodic, structured reviews by IT and administrative authorities assess ongoing risks, usage patterns and the effectiveness of safeguards.
  8. Training and Capacity Building: Judicial officers and supporting staff are required to attend regular training on the ethical, legal and technical aspects of AI use to understand both its benefits and its risks.
  9. Accountability: Any errors or issues detected in AI outputs must be promptly reported to the Principal District Court, which then escalates them to the High Court’s IT Department for immediate review.

Enforcement and Disciplinary Action

The Kerala High Court’s Policy explicitly states, “Any violations of this policy may result in disciplinary action[2] and rules pertaining to disciplinary proceedings shall prevail”. The warning is not merely symbolic: it ensures that existing legal frameworks for enforcing discipline and accountability within the judiciary remain effective. Proceedings for violation of any provision of the Policy shall therefore be dealt with in the manner specified under the Kerala Civil Services (Classification, Control and Appeal) Rules, 1960, which provides for written warnings, suspensions[3], demotions[4] and, in certain cases, dismissal from service, depending upon the severity of the violation.

Supreme Court’s Guidelines on AI

Earlier, in September 2024, the Supreme Court had issued a set of guidelines, “Design, Development, and Implementation of Artificial Intelligence (AI) Solution, Tools for Transcribing Arguments and Court Proceedings at Supreme Court of India”, that played a foundational role in steering AI use in the judiciary.[5] The Supreme Court’s guidelines were broad, urging states to frame their own policies while leaving operational details to the High Courts. Kerala’s judiciary recognized the need for a comprehensive, enforceable framework that covers all judicial personnel and explicitly tackles data privacy, accountability and human oversight in AI-assisted workflows[6].

Since introducing AI tools like SUVAS for translation and SUPACE[7] for research assistance, the Court has reiterated several key principles:

  • Use of AI should strictly be confined to administrative, research and translation functions.
  • AI should not be used for judicial reasoning, making findings, issuing orders or drafting final judgements.
  • Judges retain exclusive responsibility for all substantive decisions, ensuring transparency and due process.
  • The Supreme Court of India has taken an advisory and coordinating role in directing High Courts and High Court judges regarding the integration of Artificial Intelligence into judicial workflows.

Have other High Courts released their own AI Policies?

The Kerala HC has taken the lead in issuing a fully articulated and binding policy regulating the use of AI in district judiciary operations, exemplifying an ethics-driven approach that interlinks technological innovation with judicial responsibility.

This pioneering move positions Kerala as a national model for judicial governance, signaling to other High Courts and state judiciaries the importance of clear, enforceable standards to mitigate AI risks.

Analysis of AI in Judicial System across Jurisdictions

AI and the Indian Judicial System

Though India currently does not have any centralized regulation governing the use of AI, it has undertaken initiatives like the e-Courts Project Phase III, under which AI is being used for automated case management, legal research, document digitization and user assistance via chatbots[8].

In the case of Jaswinder Singh v. State of Punjab (Punjab & Haryana High Court, 2023)[9], the presiding judge, while considering a bail application, referred a query to ChatGPT to understand bail jurisprudence in cases involving acts of cruelty. The judge clarified that the AI’s response was sought only for a comparative, global outlook and had no bearing on the outcome or reasoning of the judicial order. The court emphasized that true judicial reasoning cannot be replaced by AI and that the reference would not influence the final decision. This case has been widely referenced as the first instance of a generative AI tool being used in an Indian courtroom, although only for research assistance.

AI and Judiciary in the United States of America

  • AI-powered tools like COMPAS are used in the US court system for risk assessment, sentencing guidance and public inquiries, but COMPAS faces significant criticism for racial and socioeconomic bias, lack of transparency and due process concerns, particularly following ProPublica’s 2016 report[10], which revealed its tendency to classify Black defendants as higher risk.
  • As of mid-2025, the U.S. lacks a federal law; regulation is fragmented across states. In State v. Loomis (Wisconsin, 2016)[11], the Wisconsin Supreme Court considered whether using a COMPAS risk assessment during sentencing violated a defendant’s due process rights. The Court upheld the use of COMPAS, but with specific caveats: it ruled that COMPAS scores could be used as one factor among many in sentencing, but that judges must not rely solely on the assessment. The court mandated that defendants be given written notice and that judges be made aware of the tool’s limitations, including its proprietary nature and the fact that its scoring methodology has not been validated for all populations.

AI and Judiciary in China

  • In December 2022, the Supreme People’s Court issued the “Opinion on Regulating and Strengthening the Application of Artificial Intelligence in Judicial Fields”, which requires all courts to adopt competent AI systems by 2025. The policy mandates legality, transparency, data privacy and national security protection.[12]
  • China’s Smart Court system leverages AI to analyze past cases, suggest applicable laws and precedents, and recommend sentences, aiming for faster and more informed judicial decisions. Additionally, Chinese courts utilize AI for legal research through platforms like ‘China Judgements Online’[13].

AI and Judiciary in United Kingdom

  • The UK AI Action Plan for Justice, published in July 2025, promotes responsible AI adoption in courts and tribunals, emphasizing governance, ethics, privacy and transparency through collaboration between the judiciary, regulators and government.[14]
  • The UK Bar Council’s Ethics Committee advises on AI use, warning of risks like bias and confidentiality breaches and stressing verification of AI outputs[15].

A Way Forward

India should strengthen its existing AI governance in the judiciary by ensuring uniform policies across states, investing in capacity-building for judges and court staff and upgrading digital infrastructure, all while upholding strict data privacy and ethical safeguards. Drawing from global best practices, regulations must remain adaptive to evolving AI capabilities, with mechanisms for transparency, accountability and continuous risk assessment. Public engagement and clear communication of AI’s assistive role will be vital to preserving judicial integrity and trust while leveraging technology to improve efficiency and access to justice.

Akshara Gupta, Legal Intern at S.S.Rana & Co. has assisted in the research of this article.

[1] https://images.assettype.com/theleaflet/2025-07-22/mt4bw6n7/Kerala_HC_AI_Guidelines.pdf

[2] Kerala Civil Services (Classification, Control and Appeal) Rules, 1960, s.2(c)

[3] Kerala Civil Services (Classification, Control and Appeal) Rules, 1960, s. 10

[4] Kerala Civil Services (Classification, Control and Appeal) Rules, 1960, s. 11

[5] https://cdnbbsr.s3waas.gov.in/s3ec0490f1f4972d133619a60c30f3559e/uploads/2024/01/2024012579-1.pdf

[6] https://indiaai.gov.in/article/from-backlogs-to-breakthroughs-the-integration-of-ai-in-india-s-judiciary

[7] https://www.pib.gov.in/PressReleasePage.aspx?PRID=2113224

[8] https://www.pib.gov.in/PressReleasePage.aspx?PRID=2106239

[9] https://www.barandbench.com/columns/artificial-intelligence-in-context-of-legal-profession-and-indian-judicial-system

[10] https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm

[11] State v. Loomis, 881 N.W.2d 749, 767 (Wis. 2016).

[12] https://english.court.gov.cn/2022-12/12/c_1053712.htm

[13] https://www.barandbench.com/columns/artificial-intelligence-in-context-of-legal-profession-and-indian-judicial-system

[14] https://www.gov.uk/government/publications/ai-action-plan-for-justice/ai-action-plan-for-justice

[15] https://www.barcouncilethics.co.uk/wp-content/uploads/2024/01/Considerations-when-using-ChatGPT-and-Generative-AI-Software-based-on-large-language-models-January-2024.pdf

For more information please contact us at : info@ssrana.com