
AI tools not for decision making: Kerala HC guidelines to district judiciary on AI usage, ETCISO
The Kerala High Court has come out with the 'Policy Regarding Use of Artificial Intelligence Tools in District Judiciary' for responsible and restricted use of AI in the judicial functions of the state's district judiciary, in view of the increasing availability of and access to such software tools.
According to court sources, it is a first-of-its-kind policy.
It has advised the district judiciary to "exercise extreme caution" as "indiscriminate use of AI tools might result in negative consequences, including violation of privacy rights, data security risks and erosion of trust in the judicial decision making".
"The objectives are to ensure that AI tools are used only in a responsible manner, solely as an assistive tool, and strictly for specifically allowed purposes. The policy aims to ensure that under no circumstances AI tools are used as a substitute for decision making or legal reasoning," the policy document said.

The policy also aims to help members of the judiciary and staff comply with their ethical and legal obligations, particularly in terms of ensuring human supervision, transparency, fairness, confidentiality and accountability at all stages of judicial decision making.

"Any violation of this policy may result in disciplinary action, and rules pertaining to disciplinary proceedings shall prevail," the policy document issued on July 19 said.

The new guidelines apply to members of the district judiciary in the state, the staff assisting them, and any interns or law clerks working with them in Kerala.

"The policy covers all kinds of AI tools, including, but not limited to, generative AI tools, and databases that use AI to provide access to diverse resources, including case laws and statutes," the document said. Examples of generative AI include ChatGPT, Gemini, Copilot and DeepSeek, it said.

It also said the guidelines apply in all circumstances where AI tools are used to perform or assist in the performance of judicial work, irrespective of the location and time of use and whether the tools run on personal, court-owned or third-party devices.

The policy directs that any official use of AI tools adhere to the principles of transparency, fairness, accountability and protection of confidentiality; avoid cloud-based services except for approved AI tools; involve meticulous verification of the results, including translations, generated by such software; and remain under human supervision at all times.

"AI tools shall not be used to arrive at any findings, reliefs, order or judgement under any circumstances, as the responsibility for the content and integrity of the judicial order, judgement or any part thereof lies fully with the judges," it said.

It further directs that courts maintain a detailed audit of all instances in which AI tools are used. "The records in this regard shall include the tools used and the human verification process adopted," it said.

Other guidelines in the policy document include participating in training programmes on the ethical, legal, technical and practical aspects of AI, and reporting any errors or issues noticed in the output generated by any of the approved AI tools.

The High Court has requested all District Judges and Chief Judicial Magistrates to communicate the policy document to all judicial officers and staff members under their jurisdiction and take necessary steps to ensure its strict compliance.

Related Articles


Time of India, 30 minutes ago
Trump to sign executive orders at AI summit on Wednesday: White House says
US President Donald Trump will sign executive orders at an AI summit on Wednesday, the White House said late on Tuesday.


Hindustan Times, 30 minutes ago
Maha approaches SC against acquittal of 12 in Mumbai blasts
A day after the Bombay High Court acquitted all 12 men convicted of planning and executing the July 11, 2006 serial bomb blasts on Mumbai's suburban rail network, including five on death row, the Maharashtra government on Tuesday rushed to the Supreme Court seeking a stay on the verdict and an urgent hearing of its appeal.

Solicitor General Tushar Mehta, appearing for the state government, mentioned the matter before Chief Justice of India Bhushan R Gavai, requesting that the petition be heard without delay. The CJI agreed to list the case for a hearing on July 24, even as he remarked: 'But we have been reading that some of them have already been released from jail.' Responding to the observation, Mehta acknowledged the development but added: 'The state still wants the appeal to be heard expeditiously.'

The special leave petition challenging the High Court judgment was filed earlier in the day. The state's legal challenge argues that the High Court erred in reversing the trial court's judgment and seeks a stay on the acquittal to prevent further release of the accused. The acquittals triggered political outrage, with Maharashtra chief minister Devendra Fadnavis on Monday calling the verdict 'shocking' and vowing to challenge it in the Supreme Court.

The appeal comes in the wake of Monday's decision by the Bombay High Court, which overturned the 2015 convictions handed down by a special court under the Maharashtra Control of Organised Crime Act (MCOCA). The High Court held that the prosecution 'utterly failed to establish the offence beyond reasonable doubt,' describing the investigation as riddled with procedural lapses, unreliable evidence and grave violations of the accused's constitutional rights.

The 2006 blasts were among the deadliest terror attacks in India's history, killing 188 people and injuring 829. Seven powerful improvised explosive devices, planted in pressure cookers, ripped through first-class compartments of Mumbai's crowded local trains within six minutes during evening rush hour. The carnage left behind mangled steel and shattered lives, and prompted a massive terror investigation led by the Maharashtra Anti-Terrorism Squad (ATS).

Within four months, 13 men were arrested by the ATS, which claimed that the attacks were orchestrated by former members of the banned Students' Islamic Movement of India (SIMI) and aided by the Pakistan-based Lashkar-e-Taiba (LeT). The ATS further alleged that 12 Pakistani nationals had infiltrated India to provide explosives and training to the accused, claims that ultimately failed to stand judicial scrutiny.

In 2015, the MCOCA court convicted 12 of the 13 accused, awarding the death penalty to five and life imprisonment to the others. One man, Abdul Wahid Shaikh, a schoolteacher who refused to confess, was acquitted by the trial court. One of the 13 accused died during the lengthy appeals process before the Bombay High Court.

On Monday, the Bombay High Court bench of Justices Anil S Kilor and Shyam Chandak delivered a 400-page verdict that raised fundamental questions about the fairness of the investigation and trial. It described the prosecution's case as a 'deceptive closure' that undermined public trust while allowing the true culprits to remain at large.

The high court pointed out that the prosecution's reliance on confessional statements, which formed the bedrock of the ATS's case, was deeply flawed. Most of these statements, recorded between October 4 and 25, 2006, bore tell-tale signs of being 'cut-copy-paste' reproductions and raised suspicions of having been extracted under coercion. Several accused retracted their confessions during trial, alleging torture in custody, a claim the court found credible in light of procedural violations.

The high court also noted that the accused were not informed of their right to consult their lawyers before confessing, despite being represented by advocates on record. This, the court ruled, was a violation of their fundamental rights. Furthermore, it cast serious doubt on the credibility of eyewitnesses, including two taxi drivers and a few train passengers, who claimed to have seen the accused planting the bombs. Their testimonies, recorded more than 100 days after the incident and at identification parades held four years later, were found to be unreliable. The test identification parades themselves were conducted by officials not authorised under law.

Material evidence, such as recovered RDX, circuit boards, pressure cookers, soldering guns and maps, was also deemed inadmissible. The court found that the chain of custody was broken and that the items were not properly sealed before being sent for forensic testing, casting doubt on their origin and connection to the accused. Additionally, the high court raised questions about the applicability of MCOCA in the case and noted serious procedural lapses in invoking its provisions.


Hans India, an hour ago
Researchers develop new ways for AI models to work together
Researchers have developed a set of algorithms that allow different artificial intelligence (AI) models to 'think' and work together as one. The development, by researchers at the Weizmann Institute of Science (WIS), makes it possible to combine the strengths of different AI systems, speeding up performance and reducing costs, Xinhua news agency reported.

The new method significantly improves the speed of large language models, or LLMs, which power tools like ChatGPT and Gemini. On average, it increases performance by 1.5 times, and in some cases by as much as 2.8 times, the team said, adding that it could make AI more suitable for smartphones, drones, and autonomous vehicles. In those settings, faster response times can be critical to safety and accuracy; in a self-driving car, for example, a faster AI model can mean the difference between a safe decision and a dangerous error.

Until now, AI models developed by different companies could not easily communicate or collaborate because each uses a different internal 'language' made up of unique tokens. The researchers compared this to people from different countries trying to talk without a shared vocabulary. To overcome this, the team developed two algorithms: one allows a model to translate its output into a shared format that other models can understand, while the other encourages collaboration using tokens that have the same meaning across different systems, like common words in human languages. Though initially concerned that meaning might be lost in translation, the researchers found that their system worked efficiently.

The new tools are already available through open-source platforms and are helping developers worldwide create faster and more collaborative AI applications. The findings were presented at the International Conference on Machine Learning in Vancouver, Canada.