This story by Sakkcham Singh Parmaar originally appeared on Global Voices on December 5, 2025.
Artificial Intelligence (AI) has entered India’s judicial work in an unprecedented way. AI generates real-time transcripts at Supreme Court Constitution Bench hearings; automated software records witness depositions in trial courts. Judges are even testing AI tools for legal research and translation to navigate case files spanning multiple languages. These experiments, however, are unfolding within a strained judiciary, raising a central question: can algorithms speed up justice while still preserving fairness, transparency, and human discretion?
Millions of cases are pending in India; the backlog runs into several tens of millions. To tackle this, the government, under the direction of the Supreme Court and the Ministry of Law and Justice, is implementing Phase III of the e-Courts project, which seeks to modernize filings, case management, and workflow processes with machine learning and language technologies. A sizable portion of the budget is earmarked for future technologies such as AI and blockchain, signifying a political bet that digital tools can mitigate current delays while abiding by the edict that only judges decide cases. As courts adopt AI piecemeal, they also need to set boundaries for accountability, privacy, and the limits of automation.
The adoption of AI builds on earlier digitalization. Since its launch in 2007, the e-Courts program has introduced e-filing, digital cause lists, and online judgments, with the aim of moving court processes online. Phase III treats this now-digitized judicial information as raw material for interpretation by natural language processing and machine learning.
A key innovation is the Supreme Court Portal for Assistance in Courts Efficiency (SUPACE), an AI-powered platform that helps judges and research staff work through massive case records. SUPACE does not make decisions; it identifies facts, proposes precedents, and drafts outlines, cutting down manual research time and letting judges concentrate on legal reasoning.
Language access is also a primary concern. The Supreme Court has developed the Supreme Court Vidhik Anuvaad Software (SUVAS), which translates judgments from English into other Indian languages, while some high courts are testing tools for translating judgments delivered in local languages into English. AI-powered transcription is changing record-keeping as well. The apex court has introduced automated transcription in constitutional matters and, since 2023, has been producing near-real-time, searchable text for the record.
The most consequential directive came from the High Court of Kerala in 2025, instructing all subordinate courts to use the AI-enabled speech-to-text tool Adalat.AI to record witness depositions from November 1, 2025. Developed by a start-up with research links to universities such as Harvard and MIT, Adalat.AI replaces slow handwritten notes with immediate digital transcripts captured within the district court system. If the system fails, the order permits judges to use only alternative platforms vetted by the High Court’s IT Directorate, ensuring control over how sensitive audio is processed.
Officials describe these reforms as steps toward a more efficient and transparent judiciary. Policy documents emphasize AI’s potential to reduce human error in transcription, automatically catch basic errors during e-filing, and help overburdened judges prioritize urgent cases. Commentators on judicial reform argue that, if implemented carefully, such systems could shorten hearings, improve the accuracy of transcripts and translations, and give litigants, particularly those in remote districts with scarce legal resources, better visibility into the progress of their cases.
Despite the optimism, judges and scholars have raised concerns. A notable warning came from the Delhi High Court in 2023, when it refused to consider arguments in a trademark case that relied on ChatGPT. The court stated that large language models could fabricate case citations and facts, and that their output required independent verification.
In another case, the same Delhi High Court bench allowed homebuyers to withdraw a petition after it emerged that portions of their pleadings, including case citations, had been generated with ChatGPT. The document contained non-existent cases and misquoted statements. The judge chastised the use of unverified generative AI, saying such practices could mislead the court. The incident illustrated the professional risks of trading accuracy for speed when AI is used in court filings.
The black-box problem goes deeper. AI tools used for searching, summarizing, or transcribing may be built on opaque models. When SUPACE highlights certain precedents, judges and litigants cannot know exactly how those cases were prioritized. Scholars warn that this opacity makes errors harder to detect and may subtly influence judicial thinking if algorithmic suggestions are “seen as neutral.”
Another danger is bias. Indian case law, like Indian society, is unequal, and datasets used to train AI may carry discriminatory patterns based on caste, gender, class, or religion. Analysts caution that AI could reinforce such biases in the name of efficiency. Senior judges, including the Chief Justice of India, have acknowledged that AI can “amplify discrimination” where its workings remain opaque or where it is trained on unrepresentative data.
Privacy and security concerns have also arisen. Judicial records contain vast amounts of sensitive personal data, such as criminal allegations, financial details, and medical information. Guidelines from courts such as the Kerala High Court discourage uploading such data to public cloud tools. The Digital Personal Data Protection Act, 2023, applies to automated processing, which covers many AI tools used in courts. In the absence of a dedicated AI law, courts and developers must navigate a patchwork of confidentiality and data-protection norms.
A further long-term concern is “automation bias,” the tendency of humans to trust computer outputs too readily. Scholars contend, for instance, that when AI presents a judge with an applicable precedent or a case priority, the judge, under workload pressure, may accept it without independent scrutiny, often without realizing it. As these systems become more seamless, only strict judicial discipline will keep AI a tool rather than a silent co-author of judicial decisions.
The judiciary is attempting to balance caseload demands against ethical safeguards. Kerala has taken the lead, not only mandating Adalat.AI but also issuing a comprehensive AI policy for subordinate courts. The policy treats AI as an administrative tool for transcription and translation, prohibits generative AI from drafting judgments or predicting outcomes, advises judges to rigorously evaluate AI outputs, and bans external platforms that require uploading confidential information.
At the national level, the Supreme Court has set up an AI Committee to evaluate its tools and their integration into court IT systems, and is developing partnerships with institutions such as IIT Madras. Government statements suggest that a uniform policy for AI use in courts is underway, aligned with ethical and privacy guidelines. The authorities stress that AI will be accepted only with “human supervision, ethical oversight, and privacy protection,” with only judges authorized to sign orders.
Yet India has no across-the-board AI law. Today’s rules are scattered across court circulars, data-protection statutes, and general technology policies. Studies on judicial integrity recommend periodic audits for bias, mandatory disclosure whenever AI influences filings or decisions, and a way for litigants to challenge AI tools that affect their cases. Experts also stress the need for better technological infrastructure in trial courts, judicial training in questioning AI outputs, and public education about what these tools can and cannot do.
Today’s major challenge is no longer whether to adopt AI, but how to live with it. Real problems such as backlog, language barriers, and unequal access to legal information can all be eased by AI tools. Yet embedding opaque algorithms in day-to-day judicial processes can erode accountability. For now, India’s judges seem determined to keep humans firmly in charge, treating AI as an assistant, not an oracle. How long this balance holds will determine not just the pace of justice but also the public’s trust in the process.