
Kerala HC issues policy for use of AI for judiciary work: What does it say? Why is it significant?
This is the first time that a High Court in India has tried to frame principles and guidelines for using AI in the judiciary.
What does the policy cover?
The document focuses on four key principles: transparency, fairness, accountability, and the protection of confidential data.
The guidelines apply to all members of the district judiciary, including judges, clerks, interns, court staff, and other employees involved in judicial work. They apply regardless of whether AI tools — software that uses AI algorithms to perform tasks such as problem-solving — are used on personal or government devices.
The document provides a separate definition for generative AI tools such as ChatGPT and DeepSeek, saying they produce human-like responses to prompts entered by the user.
The policy also differentiates between 'general' AI tools and 'approved' AI tools. Only an AI tool approved by the Kerala High Court or the Supreme Court can be used for court-related work.
The guidelines set clear limits on the usage of AI tools. Using AI to write or draft legal judgements, orders, or findings is strictly prohibited.
Translating documents by using AI tools without the verification of a judge or a qualified translator is also not allowed.
The output of AI used for research work, such as looking up citations or judgements, must be verified by an appointed person.
The document encourages the use of AI tools for administrative tasks like 'scheduling of cases or court management'. However, such use must be carried out under human supervision and be duly recorded.
Errors in the tools, if any, must be reported to the Principal District Court or the Principal District Judge and forwarded to the IT department of the High Court. Judicial officers and staff are required to attend training sessions covering the ethical and technical issues involved in using AI for court-related work.
The document specifies that violation of any rule will automatically lead to disciplinary action.
Why is the policy relevant?
In February 2025, the Centre, in a press note, encouraged the use of AI in judicial work to help alleviate the backlog of cases and improve the speed of justice administration. Since then, several discussions have taken place regarding the risks and safeguards that such a move would require.
On July 17, the Karnataka High Court, while hearing X Corp's challenge to the Centre's orders to block content under Section 79 of the IT Act through the Sahyog portal, discussed the usage of AI algorithms in moderating content on online platforms.
Solicitor General of India Tushar Mehta noted that 'there are instances where the lawyers start using AI for the purpose of research and artificial intelligence, as an inbuilt difficulty, it hallucinates.' AI hallucination is a blanket term for chatbots producing plausible-sounding but false or fabricated information in response to a prompt.
Justice M Nagaprasanna said, 'Too much dependence will destroy the profession…I keep saying dependency on Artificial Intelligence should not make your intelligence artificial.'
In 2023, the Punjab and Haryana High Court took the assistance of ChatGPT to understand the global view on bail for an accused with a history of violence, including an attempt to murder.
Justice Anoop Chitkara, while denying bail, sought AI insights on global bail jurisprudence, asking ChatGPT: 'What is the jurisprudence on bail when the assailants are assaulted with cruelty?'
However, the court said, 'Any reference to ChatGPT and any observation made hereinabove is neither an expression of opinion on the merits of the case nor shall the trial Court advert to these comments. This reference is only intended to present a broader picture on bail jurisprudence, where cruelty is a factor.'

Related Articles


Indian Express
Anthropic blocks OpenAI's API access to Claude ahead of GPT-5 launch: Report
In a clear sign of intensifying rivalry in the AI race, Anthropic has accused OpenAI of violating its terms of service and partially blocked the ChatGPT-maker from accessing its Claude series of AI models via API (application programming interface).

OpenAI had been granted special developer (API) access to Claude models for industry-standard practices like benchmarking and conducting safety evaluations by comparing AI-generated outputs against those of its own models. However, according to a report by Wired, Anthropic has now accused members of OpenAI's technical staff of using that access to interact with Claude Code, the company's AI-powered coding assistant, in ways that violated its terms of service.

The timing is notable as it comes ahead of the widely anticipated launch of GPT-5, OpenAI's next major AI model, which is purportedly better at generating code. Anthropic's AI models, on the other hand, are popular among developers for their coding abilities. Anthropic's commercial terms of service prohibit customers from using the service to 'build a competing product or service, including to train competing AI models' or 'reverse engineer or duplicate' the services.

'Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5. Unfortunately, this is a direct violation of our terms of service,' Anthropic spokesperson Christopher Nulty was quoted as saying by Wired. Anthropic will 'continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry,' he added.

Responding to Anthropic's claims, OpenAI's chief communications officer Hannah Wong reportedly said, 'It's industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them.'

This is not the first time that Anthropic has taken such measures. Last month, the Google and Amazon-backed company restricted Windsurf from directly accessing its models following reports that OpenAI was set to acquire the AI coding startup. However, that deal fell through after Google reportedly poached Windsurf's CEO, co-founder, and tech for $2.4 billion. Ahead of cutting off OpenAI's access to the Claude API, Anthropic announced new weekly rate limits for Claude Code as some users were running the AI coding tool 'continuously in the background 24/7.'

Earlier this year, OpenAI accused Chinese rival DeepSeek of breaching its terms of service. The Sam Altman-led company said it suspected DeepSeek of training its AI model by repeatedly querying its proprietary model, a technique commonly referred to as distillation.
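The distillation technique mentioned above, training one model on the recorded outputs of another, can be sketched in miniature. The toy linear "teacher" and "student" models below are purely illustrative assumptions, not how OpenAI or DeepSeek actually train; they only show the shape of the idea, a student fitted to a teacher's output distributions rather than to original labelled data:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# "Teacher": a fixed classifier standing in for a proprietary model.
W_teacher = rng.normal(size=(4, 3))

def query_teacher(x):
    # In distillation-by-API, this step would be an API call: send a
    # prompt, record the model's response or output distribution.
    return softmax(x @ W_teacher)

X = rng.normal(size=(256, 4))   # "prompts" sent to the teacher
P_teacher = query_teacher(X)    # recorded teacher outputs

# "Student": a competing model trained only on those recorded outputs,
# by gradient descent on the cross-entropy between the two distributions.
W_student = np.zeros((4, 3))
lr = 0.5
for _ in range(300):
    P_student = softmax(X @ W_student)
    grad = X.T @ (P_student - P_teacher) / len(X)
    W_student -= lr * grad

# How often the student now gives the teacher's top answer.
agreement = float((P_teacher.argmax(1) == softmax(X @ W_student).argmax(1)).mean())
print(agreement)
```

The point of the sketch is that the student never sees the teacher's weights or training data, only its responses, which is why providers treat large-scale querying as a terms-of-service issue rather than a conventional data breach.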


NDTV
Validation, Loneliness, Insecurity: Why Young People Are Turning To ChatGPT
New Delhi: An alarming trend of young adolescents turning to artificial intelligence (AI) chatbots like ChatGPT to express their deepest emotions and personal problems is raising serious concerns among educators and mental health professionals.

Experts warn that this digital "safe space" is creating a dangerous dependency, fueling validation-seeking behaviour, and deepening a crisis of communication within families. They said that this digital solace is just a mirage, as the chatbots are designed to provide validation and engagement, potentially embedding misbeliefs and hindering the development of crucial social skills and emotional resilience.

Sudha Acharya, the Principal of ITL Public School, highlighted that a dangerous mindset has taken root among youngsters, who mistakenly believe that their phones offer a private sanctuary. "School is a social place – a place for social and emotional learning," she told PTI. "Of late, there has been a trend amongst the young adolescents... They think that when they are sitting with their phones, they are in their private space. ChatGPT is using a large language model, and whatever information is being shared with the chatbot is undoubtedly in the public domain."

Ms Acharya noted that children are turning to ChatGPT to express their emotions whenever they feel low, depressed, or unable to find anyone to confide in. She believes that this points towards a "serious lack of communication in reality, and it starts from family." She further stated that if parents don't share their own drawbacks and failures with their children, the children will never learn to do the same or to regulate their own emotions. "The problem is, these young adults have grown a mindset of constantly needing validation and approval."

Ms Acharya has introduced a digital citizenship skills programme from Class 6 onwards at her school, specifically because children as young as nine or ten now own smartphones without the maturity to use them ethically. She highlighted a particular concern: when a youngster shares their distress with ChatGPT, the immediate response is often "please, calm down. We will solve it together."

"This reflects that the AI is trying to instil trust in the individual interacting with it, eventually feeding validation and approval so that the user engages in further conversations," she told PTI. "Such issues wouldn't arise if these young adolescents had real friends rather than 'reel' friends. They have a mindset that if a picture is posted on social media, it must get at least a hundred 'likes', else they feel low and invalidated," she said.

The school principal believes that the core of the issue lies with parents themselves, who are often "gadget-addicted" and fail to provide emotional time to their children. While they offer all materialistic comforts, emotional support and understanding are often absent. "So, here we feel that ChatGPT is now bridging that gap, but it is an AI bot after all. It has no emotions, nor can it help regulate anyone's feelings," she cautioned. "It is just a machine and it tells you what you want to listen to, not what's right for your well-being," she said.

Mentioning cases of self-harm in students at her own school, Ms Acharya stated that the situation has turned "very dangerous". "We track these students very closely and try our best to help them," she stated. "In most of these cases, we have observed that the young adolescents are very particular about their body image, validation and approval. When they do not get that, they turn agitated and eventually end up harming themselves. It is really alarming as the cases like these are rising."

Ayushi, a student in Class 11, confessed that she shared her personal issues with AI bots numerous times out of "fear of being judged" in real life. "I felt like it was an emotional space and eventually developed an emotional dependency towards it. It felt like my safe space. It always gives positive feedback and never contradicts you. Although I gradually understood that it wasn't mentoring me or giving me real guidance, that took some time," the 16-year-old told PTI. Ayushi also admitted that turning to chatbots for personal issues is "quite common" within her friend circle.

Another student, Gauransh, 15, observed a change in his own behaviour after using chatbots for personal problems. "I observed growing impatience and aggression," he told PTI. He had been using the chatbots for a year or two but stopped recently after discovering that "ChatGPT uses this information to advance itself and train its data."

Psychiatrist Dr Lokesh Singh Shekhawat of RML Hospital confirmed that AI bots are meticulously customised to maximise user engagement. "When youngsters develop any sort of negative emotions or misbeliefs and share them with ChatGPT, the AI bot validates them," he explained. "The youth start believing the responses, which makes them nothing but delusional." He noted that when a misbelief is repeatedly validated, it becomes "embedded in the mindset as a truth." This, he said, alters their point of view, a phenomenon he referred to as 'attention bias' and 'memory bias'. The chatbot's ability to adapt to the user's tone is a deliberate tactic to encourage maximum conversation, he added.

Dr Singh stressed the importance of constructive criticism for mental health, something completely absent in AI interactions. "Youth feel relieved and ventilated when they share their personal problems with AI, but they don't realise that it is making them dangerously dependent on it," he warned. He also drew a parallel between an addiction to AI for mood upliftment and addictions to gaming or alcohol. "The dependency on it increases day by day," he said, cautioning that in the long run, this will create a "social skill deficit and isolation."


Hindustan Times
From education to e-commerce, internet connectivity rewiring life in Arunachal
Itanagar: In Kibithoo, one of India's easternmost villages located near the India-China border in Arunachal Pradesh's Anjaw district, a group of children huddle under a solar light in the evening to download the next day's homework.

Until just a few years ago, Kibithoo had no mobile signal, no internet, and virtually no digital access. Today, satellite-powered internet terminals installed under the BharatNet Phase-II project beam connectivity to the local primary school, the village panchayat office, and several homes. For the villagers, the arrival of the internet feels nothing short of a revolution. "Earlier, we used to wait for days for information. Now my son attends online tuition and watches science videos on YouTube," said Kunsang Chodon Meyor, a mother of two.

Arunachal Pradesh's challenging terrain, dense forests, and widely scattered settlements have made digital inclusion difficult. But with renewed efforts from both the Centre and the state government, even remote blocks such as Chaglagam and Gelling are witnessing early signs of the internet age. Projects under BharatNet, coupled with 4G towers installed by BSNL and Airtel, are extending high-speed internet access to gram panchayats and far-flung border villages. As of now, over 1,300 gram panchayats across the state have been connected through BharatNet Phase-II, with another 500 slated to come online through satellite or microwave links. "Digital connectivity was once a dream here. Now, it is the lifeline," said Samir Kri, a local resident of Walong in Anjaw district.

One of the most visible changes has come in the field of education. In Menchuka, a picturesque village in Shi-Yomi district, teachers now use smart TVs and internet-based content to make lessons more engaging, especially when textbooks arrive late. "Earlier, we relied only on blackboard teaching. Now, we show children documentaries and interactive mathematics apps. It keeps them engaged," said Dege Ete, a government school teacher from Lungte. Some villages are also experimenting with digital libraries, using offline Wi-Fi intranets to share videos and e-books to avoid straining limited internet bandwidth.

Internet access is reshaping local economies as well. In towns like Dirang in West Kameng and Ziro in Lower Subansiri district, farmers and artisans are learning to market their products online through training provided by NGOs and Common Service Centres. "I listed my homemade pickles on WhatsApp and now sell to buyers in Itanagar and even Tezpur in Assam," said Rubu Yassung, a young entrepreneur from Ziro. "Without internet, I was just a village seller. Now, I feel like a brand," she added.

Government schemes have also become more accessible. From applying for ration cards and birth certificates to accessing PM-Kisan benefits or pension schemes, digital centres are sparing villagers from long, difficult trips to district headquarters. "Earlier, we had to walk for hours to submit a single form. Now we do it in minutes online," said Akha Wangsu, a farmer in Pongchou under Longding district.

Still, full digital inclusion remains a work in progress. Harsh weather, frequent landslides, unreliable power supply, and slow backhaul networks continue to disrupt connectivity in many areas. Digital literacy also remains patchy. While younger generations adapt quickly, many older residents are still hesitant to use digital services, and gaps in cybersecurity and financial awareness persist. "Connectivity is the first step, not the last. We need training, reliable power, and affordable data," said Sangey Pema, a tech volunteer in Tawang.

In a joint effort between the state and central governments, 254 4G mobile towers in Arunachal Pradesh were dedicated to the nation in April 2023. These towers cover 336 villages, bringing high-speed network connectivity to thousands and enabling digital services across education, healthcare, e-commerce, and agriculture, catalysing socio-economic development. So far, over 1,310 gram panchayats have been connected with optical fibre under the BharatNet scheme, and more than 1,156 additional towers are in the pipeline to push digital inclusion further in the state.

Chief Minister Pema Khandu has reaffirmed the state government's commitment to transparent and efficient governance through a digital-first approach. In a recent social media post, Khandu emphasised that digital governance is not only about modernisation; it is about transforming the system. "Reflecting on a time when governance was synonymous with long queues, misplaced files, and approvals that often depended on personal influence rather than genuine need, that's why we chose the digital path. Not just to modernise, but to cleanse the system, to bring back trust," Khandu stated in a post on X.

This article was generated from an automated news agency feed without modifications to text.