
Kerala High Court Bans Use Of AI Tools In Judicial Decision-Making
In a landmark move, the Kerala High Court has issued an Artificial Intelligence (AI) usage policy that specifically prohibits the use of such tools for decision-making or legal reasoning by the district judiciary.
The 'Policy Regarding Use of Artificial Intelligence Tools in District Judiciary' provides for the responsible and restricted use of AI in the judicial functions of the state's district judiciary, in view of the increasing availability of and access to such software tools.
According to court sources, it is a first-of-its-kind policy.
It has advised the district judiciary to "exercise extreme caution" as "indiscriminate use of AI tools might result in negative consequences, including violation of privacy rights, data security risks and erosion of trust in the judicial decision making".
"The objectives are to ensure that AI tools are used only in a responsible manner, solely as an assistive tool, and strictly for specifically allowed purposes. The policy aims to ensure that under no circumstances AI tools are used as a substitute for decision making or legal reasoning," the policy document said.
The policy also aims to help members of the judiciary and their staff comply with their ethical and legal obligations, particularly in ensuring human supervision, transparency, fairness, confidentiality and accountability at all stages of judicial decision-making.
"Any violation of this policy may result in disciplinary action, and rules pertaining to disciplinary proceedings shall prevail," the policy document issued on July 19 said.
The new guidelines apply to members of the district judiciary in the state, the staff assisting them, and any interns or law clerks working with them in Kerala.
"The policy covers all kinds of AI tools, including, but not limited to, generative AI tools, and databases that use AI to provide access to diverse resources, including case laws and statutes," the document said.
Examples of generative AI tools include ChatGPT, Gemini, Copilot and DeepSeek, it said.
It also said that the new guidelines apply to all circumstances wherein AI tools are used to perform or assist in the performance of judicial work, irrespective of the location and time of use, and whether they are used on personal, court-owned or third-party devices.
The policy directs that any use of AI tools for official purposes adhere to the principles of transparency, fairness, accountability and protection of confidentiality; avoid cloud-based services, except for approved AI tools; involve meticulous verification of the results generated by such software, including translations; and remain under human supervision at all times.
"AI tools shall not be used to arrive at any findings, reliefs, order or judgement under any circumstances, as the responsibility for the content and integrity of the judicial order, judgement or any part thereof lies fully with the judges," it said.
It further directs that courts shall maintain a detailed audit of all instances wherein AI tools are used.
"The records in this regard shall include the tools used and the human verification process adopted," it said.
Other guidelines in the policy document include participating in training programmes on the ethical, legal, technical and practical aspects of AI, and reporting any errors or issues noticed in the output generated by any of the approved AI tools.
The High Court has requested all District Judges and Chief Judicial Magistrates to communicate the policy document to all judicial officers and the staff members under their jurisdiction and take necessary steps to ensure its strict compliance.

Related Articles


Time of India
26 minutes ago
Telling secrets to ChatGPT? Using it as a therapist? Your AI chats aren't legally private, warns Sam Altman
Many users may treat ChatGPT like a trusted confidant—asking for relationship advice, sharing emotional struggles, or even seeking guidance during personal crises. But OpenAI CEO Sam Altman has warned that unlike conversations with a therapist, doctor, or lawyer, chats with the AI tool carry no legal confidentiality.

During a recent appearance on This Past Weekend, a podcast hosted by comedian Theo Von, Altman said that users, particularly younger ones, often treat ChatGPT like a therapist or life coach. However, he cautioned that the same legal safeguards that protect personal conversations in professional settings do not extend to AI.

Altman explained that legal privileges—such as doctor-patient or attorney-client confidentiality—do not apply when using ChatGPT. If there's a lawsuit, OpenAI could be compelled to turn over user chats, including the most sensitive ones. 'That's very screwed up,' Altman admitted, adding that the lack of legal protection is a major gap that needs urgent attention.

Altman Urges New Privacy Standards for AI
Altman believes that conversations with AI should eventually be treated with the same privacy standards as those with human professionals. He pointed out that the rapid adoption of generative AI has raised legal and ethical questions that didn't even exist a year ago. Von, who expressed hesitation about using ChatGPT due to privacy concerns, found Altman's warning validating. The OpenAI chief acknowledged that the absence of clear regulations could be a barrier for users who might otherwise benefit from the chatbot's assistance. 'It makes sense to want privacy clarity before you use it a lot,' Altman said, agreeing with Von's skepticism.

Chats Can Be Accessed and Stored
According to OpenAI's own policies, conversations from users on the free tier can be retained for up to 30 days for safety and system improvement, though they may sometimes be kept longer for legal reasons. This means chats are not end-to-end encrypted like those on messaging platforms such as WhatsApp or Signal. OpenAI staff may access user inputs to optimize the AI model or monitor misuse.

The privacy issue is not just theoretical. OpenAI is currently involved in a lawsuit with The New York Times, which has brought the company's data storage practices under scrutiny. A court order related to the case has reportedly required OpenAI to retain and potentially produce user conversations—excluding those from its ChatGPT Enterprise customers. OpenAI is appealing the order, calling it an overreach.

Debate Around AI and Data Rights
Altman also highlighted that tech companies are increasingly facing demands to produce user data in legal or criminal cases. He drew parallels to how people shifted to encrypted health tracking apps after the U.S. Supreme Court's Roe v. Wade reversal, which raised fears about digital privacy around personal choices. While AI chatbots like ChatGPT have become a popular tool for emotional support, the legal framework surrounding their use hasn't caught up. Until it does, Altman's message is clear: users should be cautious about what they choose to share.


Indian Express
2 hours ago
Meet Lumo, the new AI chatbot that protects user privacy
Proton, the company behind the encrypted email service Proton Mail, has unveiled an AI chatbot with a focus on user privacy. Named Lumo, the chatbot can generate code, write emails, summarise documents, and much more. Proton has positioned it as an alternative to ChatGPT, Gemini, Copilot and similar tools. The chatbot preserves user privacy while storing data locally on users' devices.

Lumo is powered by several open-source large language models that run on Proton's servers in Europe, including Mistral's Nemo, Mistral Small 3, Nvidia's OpenHands 32B, and the Allen Institute for AI's OLMo 2 32B model. Lumo can route requests to different models depending on which is better suited to a query.

The company claims the chatbot protects information with 'zero-access' encryption, which grants the user an encryption key that gives them exclusive access to their data. This key blocks third parties, and even Proton itself, from accessing user content, meaning the company cannot share any personal information. Proton has reportedly used Transport Layer Security (TLS) encryption for data transmission and 'asymmetrically' encrypts prompts, allowing only the Lumo GPU servers to decrypt them.

As for features, Ghost mode ensures that active chat sessions are not saved, not even on local devices. With the Web search feature, which is disabled by default to preserve privacy, Lumo can look up recent information on the internet through privacy-friendly search engines to add to its current knowledge. It can also understand and analyse uploaded files, but does not keep a record of them. Lastly, integration with Proton Drive makes it simple to add end-to-end encrypted files from Proton Drive to Lumo chats.

The chatbot comes in both a free and a premium version. Those without a Lumo or Proton account can ask 25 queries per week and cannot access chat histories. Users with a free account can ask up to 100 questions per week. The Lumo Plus plan is priced at $12.99 a month and comes with unlimited chats, an extended encrypted chat history, and more.
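Proton has not published the exact scheme, but the 'asymmetric' prompt encryption described above can be illustrated with a minimal sketch in Python, assuming RSA-OAEP (via the 'cryptography' library) as a stand-in for whatever Proton actually deploys: the client encrypts each prompt with the server's public key, so only the GPU server holding the matching private key can read it.

    # Illustrative sketch only: NOT Proton's actual implementation.
    # Shows the general idea of asymmetric prompt encryption, where the client
    # encrypts each prompt with the server's public key so that only the
    # server (holding the private key) can decrypt it for inference.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Server side (e.g. a GPU inference host): generate a key pair once and
    # publish the public key to clients.
    server_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    server_public_key = server_private_key.public_key()

    # OAEP padding is reusable for both directions.
    OAEP = padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    )

    def encrypt_prompt(prompt: str, public_key) -> bytes:
        # Client side: only the holder of the private key can read this.
        return public_key.encrypt(prompt.encode("utf-8"), OAEP)

    def decrypt_prompt(ciphertext: bytes, private_key) -> str:
        # Server side: recover the plaintext prompt for inference.
        return private_key.decrypt(ciphertext, OAEP).decode("utf-8")

    ciphertext = encrypt_prompt("Summarise this document for me.", server_public_key)
    assert decrypt_prompt(ciphertext, server_private_key) == "Summarise this document for me."

In practice, prompts longer than a single RSA block would be handled with a hybrid scheme (an asymmetrically encrypted symmetric key, then a symmetric cipher such as AES-GCM for the payload), but the trust model is the same: the plaintext prompt is visible only inside the inference server.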


Time of India
2 hours ago
China urges global consensus on balancing AI development, security
China's Premier Li Qiang warned Saturday that artificial intelligence development must be weighed against the security risks, saying global consensus was urgently needed even as the tech race between Beijing and Washington shows no sign of abating.

His remarks came just days after US President Donald Trump unveiled an aggressive low-regulation strategy aimed at cementing US dominance in the fast-moving field, promising to "remove red tape and onerous regulation" that could hinder private sector AI development.

Opening the World AI Conference (WAIC) in Shanghai on Saturday, Li emphasised the need for governance and open-source development, announcing the establishment of a Chinese-led body for international AI cooperation.

"The risks and challenges brought by artificial intelligence have drawn widespread attention... How to find a balance between development and security urgently requires further consensus from the entire society," the premier said.

Li said China would "actively promote" the development of open-source AI, adding Beijing was willing to share advances with other countries, particularly developing ones. "If we engage in technological monopolies, controls and blockage, artificial intelligence will become the preserve of a few countries and a few enterprises," he said. "Only by adhering to openness, sharing and fairness in access to intelligence can more countries and groups benefit from (AI)."

The premier highlighted "insufficient supply of computing power and chips" as a bottleneck. Washington has expanded its efforts in recent years to curb exports of state-of-the-art chips to China, concerned that these can be used to advance Beijing's military systems and erode US tech dominance. For its part, China has made AI a pillar of its plans for technological self-reliance, with the government pledging a raft of measures to boost the sector. In January, Chinese startup DeepSeek unveiled an AI model that performed as well as top US systems despite using less powerful chips.

'Pet tiger cub'
At a time when AI is being integrated across virtually all industries, its uses have raised major ethical questions, from the spread of misinformation to its impact on employment to the potential loss of technological control. In a speech at WAIC on Saturday, Nobel Prize-winning physicist Geoffrey Hinton compared the situation to keeping "a very cute tiger cub as a pet". "To survive," he said, you need to ensure you can train it not to kill you when it grows up.

In a video message played at the WAIC opening ceremony, UN Secretary-General Antonio Guterres said AI governance would be "a defining test of international cooperation". The ceremony also saw the French president's AI envoy, Anne Bouverot, underscore "an urgent need" for global action.

At an AI summit in Paris in February, 58 countries including China, France and India -- as well as the European Union and African Union Commission -- called for enhanced coordination on AI governance. But the United States warned against "excessive regulation" and, alongside the United Kingdom, refused to sign the summit's appeal for an "open", "inclusive" and "ethical" AI.