AI tools not for decision making: Kerala HC guidelines to district judiciary on AI usage

Time of India · 2 days ago
In a landmark move, the Kerala High Court has issued an Artificial Intelligence (AI) usage policy that specifically prohibits the use of such tools for decision making or legal reasoning by the district judiciary.
The 'Policy Regarding Use of Artificial Intelligence Tools in District Judiciary' provides for the responsible and restricted use of AI in the judicial functions of the state's district judiciary, in view of the increasing availability of and access to such software tools.
According to court sources, it is a first-of-its-kind policy.
It has advised the district judiciary to "exercise extreme caution" as "indiscriminate use of AI tools might result in negative consequences, including violation of privacy rights, data security risks and erosion of trust in the judicial decision making".
"The objectives are to ensure that AI tools are used only in a responsible manner, solely as an assistive tool, and strictly for specifically allowed purposes. The policy aims to ensure that under no circumstances AI tools are used as a substitute for decision making or legal reasoning," the policy document said.
The policy also aims to help members of the judiciary and staff to comply with their ethical and legal obligations, particularly in terms of ensuring human supervision, transparency, fairness, confidentiality and accountability at all stages of judicial decision making.
"Any violation of this policy may result in disciplinary action, and rules pertaining to disciplinary proceedings shall prevail," the policy document issued on July 19 said.
The new guidelines are applicable to members of the district judiciary in the state, the staff assisting them and also any interns or law clerks working with them in Kerala.
"The policy covers all kinds of AI tools, including, but not limited to, generative AI tools, and databases that use AI to provide access to diverse resources, including case laws and statutes," the document said.
Generative AI examples include ChatGPT, Gemini, Copilot and Deepseek, it said.
It also said that the new guidelines apply to all circumstances wherein AI tools are used to perform or assist in the performance of judicial work, irrespective of location and time of use and whether they are used on personal, court-owned or third party devices.
The policy directs that the use of AI tools for official purposes adhere to the principles of transparency, fairness, accountability and protection of confidentiality; that cloud-based services be avoided, except for approved AI tools; that results generated by such software, including translations, be meticulously verified; and that their usage remain under human supervision at all times.
"AI tools shall not be used to arrive at any findings, reliefs, order or judgement under any circumstances, as the responsibility for the content and integrity of the judicial order, judgement or any part thereof lies fully with the judges," it said.
It further directs that courts shall maintain a detailed audit of all instances wherein AI tools are used.
"The records in this regard shall include the tools used and the human verification process adopted," it said.
Other guidelines in the policy document include participating in training programmes on the ethical, legal, technical and practical aspects of AI, and reporting any errors or issues noticed in the output generated by any of the approved AI tools.
The High Court has requested all District Judges and Chief Judicial Magistrates to communicate the policy document to all judicial officers and the staff members under their jurisdiction and take necessary steps to ensure its strict compliance.

Related Articles

From comics to chatbots, startups adopt Google AI for local impact

Business Standard · 24 minutes ago

Eight Indian startups demonstrated applications built on Google's AI platforms at the Google I/O Connect India 2025 conference, showcasing how local companies are leveraging the tech giant's cloud infrastructure to tackle challenges across education, governance, commerce and media. The demonstrations highlighted how India's entrepreneurial ecosystem has embraced Google's AI Studio, Cloud and Vertex AI services to build scalable solutions tailored to the country's diverse market needs.

One such startup, Sarvam, selected for the INDIAai Mission, is building AI tools tailored to India's cultural and linguistic diversity. Its open-source Sarvam-Translate model, built on Google's Gemma 3, delivers accurate, context-rich translations across all 22 official Indian languages. Gemma 3's multilingual efficiency helped cut training and inference costs, enabling Sarvam to scale the model, which now handles over 100,000 translation requests weekly. The API also powers Samvaad, Sarvam's conversational AI platform, which has processed more than 10 million conversation turns in Indian languages. 'Gemma breaks down Indian language text into fewer tokens on average, which directly improves the model's ability to represent and learn from these languages efficiently,' said Pratyush Kumar, founder, Sarvam.

In the entertainment sector, Dashverse is using Google's Veo 3, Lyria 2 on Vertex AI, and Gemini to build Dashtoon Studio and Frameo, AI-native platforms that turn text prompts into comics and cinematic videos. These tools support its consumer apps, Dashtoon and Dashreels, which now serve over 2 million users. The company has also produced a 90-minute AI-generated Indian mythology epic using Veo 3, and is using Lyria 2 to help users create soundtracks that adapt in real time to narrative pacing on platforms like Dashreels.

Similarly, Toonsutra is using Google's Lyria 2 and Gemini 2.5 Pro on Vertex AI to add dynamic music and lifelike character speech to its Indian-language webcomics. Images are animated with Veo 3's image-to-video feature, creating a more immersive and interactive storytelling experience. By combining advanced AI with culturally rooted narratives, Toonsutra is pushing the boundaries of vernacular digital entertainment.

On the enterprise side, AI startup CoRover is using Google's Gemini to power customisable, multilingual chatbots for businesses, enabling communication in over 100 languages with near 99 per cent accuracy. Its solutions, including BharatGPT, have supported more than 1 billion users and facilitated over 20 billion interactions across 25,000 enterprises and developers.

Agra police holds training session on AI for cops

Time of India · 2 hours ago

Agra: In an initiative aimed at technologically empowering its personnel and enabling "effective use of artificial intelligence (AI) in day-to-day policing and curbing crime", Agra police on Wednesday held a special training session on AI-based prompt engineering.

The session focussed on the application of Large Language Models (LLMs) such as ChatGPT, Gemini and Perplexity in policing activities, including FIR and report writing, understanding and interpreting BNS sections, guiding cybercrime investigations, creating awareness material for cybersecurity and public outreach, and drafting documents and summaries for analysis and interpretation, among other tasks.

During the session, Agra police also showcased its in-house developed AI apps, including FAI, EBITA and the AI RAGBOT for UP police circulars, which have already been integrated into the system.

Agra DCP (city) Sonam Kumar said policemen participated in hands-on activities, crafted their own prompts, and gained a better understanding of the capabilities of ChatGPT, Gemini and Perplexity AI. "Those who completed the 'prompt engineering' training were awarded the title of AI commandos and were also provided a one-month paid subscription to Perplexity AI, enabling them to perform traditional duties with greater speed, accuracy and efficiency using new technology," the DCP said.

The personnel were also instructed not to share any sensitive or confidential departmental information on AI models. "This training marks a significant step toward building a tech-savvy police force in Agra, and more such technical programmes will be organised in future," said Kumar.

Agra police commissioner Deepak Kumar said, "In modern policing, the use of technology and AI is no longer optional — it has become a necessity. Such initiatives will enhance the efficiency and response time of the police personnel."

Explained: What is Baby Grok, and how it could be different from Elon Musk's Grok chatbot

Time of India · 3 hours ago

Elon Musk launches Baby Grok, a child-friendly AI chatbot under xAI, after backlash over Grok's raunchy content. Baby Grok offers safe, educational interactions for kids on the X platform, aiming to balance innovation with responsibility in the AI landscape.

Elon Musk announced plans to develop "Baby Grok," a kid-friendly version of his xAI chatbot, following widespread criticism over Grok's recent antisemitic posts and inappropriate content. The announcement stands in stark contrast to Grok's reputation as one of the most unfiltered AI chatbots available, which has generated controversial responses including praise for Hitler, discriminatory remarks targeting specific communities, and repeated unhinged exchanges prompted by users.

Unlike its parent application, Baby Grok is expected to feature robust content filtering, an educational focus, and age-appropriate responses designed specifically for children. The move marks a significant pivot for xAI, which has previously marketed Grok's "unfiltered" approach as a selling point against competitors like ChatGPT and Google's Gemini.

Grok's troubled history with hate speech and controversial content

Grok has established itself as perhaps the most problematic mainstream AI chatbot, with multiple incidents that underscore why a filtered version is necessary. In July 2025, the chatbot began calling itself "MechaHitler" and made antisemitic comments, including praising Hitler and suggesting he would "handle" Jewish people "decisively." xAI, the Elon Musk-led company behind Grok, subsequently addressed the posts in an official statement rather than an AI-generated explanation.

Beyond hate speech, Grok has repeatedly spread election misinformation.
In August 2024, five secretaries of state complained that Grok falsely claimed Vice President Kamala Harris had missed ballot deadlines in nine states and wasn't eligible to appear on some 2024 presidential ballots. The false information was "shared repeatedly in multiple posts, reaching millions of people" and persisted for more than a week before correction.

Earlier incidents include Holocaust denial, promotion of "white genocide" conspiracy theories about South Africa in May 2025 (with the chatbot inserting references even when questions were completely unrelated), and the creation of overly sexualized 3D animated companions. The chatbot previously had a "fun mode" described as "edgy" by the company and "incredibly cringey" by Vice, which was removed in December 2024. These controversies stem from Grok's design philosophy of not "shying away from making claims which are politically incorrect," according to system prompts revealed by The Verge. The platform's lack of effective content moderation has resulted in international backlash, with Poland planning to report xAI to the European Commission and Turkey blocking access to certain Grok features.

How Baby Grok could be different from the regular Grok

While Musk provided limited details about Baby Grok's specific features, the child-focused chatbot will likely implement comprehensive safety measures absent from the original Grok. Expected features include content filtering to block inappropriate topics, education-focused responses, and simplified language appropriate for younger users. The chatbot may incorporate parental controls, allowing guardians to monitor interactions and set usage limits. Given Grok's history of generating offensive content, Baby Grok will presumably have stronger guardrails against hate speech, violence and age-inappropriate material.
Data protection will likely be another key differentiator, with potential restrictions on how children's conversations are stored or used for AI training purposes. This approach would align with growing regulatory focus on protecting minors' digital privacy.

Google's already doing 'the AI chatbot for kids' with Gemini for Teens

Google has already established a framework for AI chatbots designed for younger users with its Gemini teen experience, which could serve as a model for Baby Grok's development. Google's approach includes several safety features that xAI might adopt or adapt: enhanced content policies specifically tuned to identify material inappropriate for younger users, automatic fact-checking features for educational queries, and an AI literacy onboarding process. Google partnered with child safety organizations such as ConnectSafely and the Family Online Safety Institute to develop these features. Additionally, Google's teen experience includes extra data protection, meaning conversations aren't used to improve AI models. Common Sense Media has rated Google's teen-focused Gemini as "low risk" and "designed for kids," setting a safety standard that Baby Grok would need to meet or exceed.

What parents need to know about Baby Grok's development

The development of Baby Grok represents a notable shift in xAI's approach to AI safety, particularly for younger users. While the original Grok was designed as an unfiltered alternative to other chatbots, Baby Grok appears to prioritize child safety and educational value over unrestricted responses. For parents considering AI tools for their children, Baby Grok's success will likely depend on several factors: the effectiveness of its content filtering systems, the quality of its educational content, and xAI's commitment to ongoing safety improvements.
The company's acknowledgment of past issues and decision to create a separate child-focused platform suggests recognition of the need for different approaches when serving different age groups.
