Latest news with #SCOTUSbot


Hindustan Times
01-08-2025
- Business
Should Legal Education Integrate AI? Rethinking curriculum for the age of intelligent law
Generative AI is rapidly changing the legal industry. Major law firms and corporations are either developing their own AI tools or using existing ones to improve efficiency. For instance, JP Morgan Chase uses COiN (Contract Intelligence), a tool that applies natural language processing and machine learning to save over 360,000 hours of manual contract review each year. Similarly, tools such as CoCounsel (legal research), Harvey AI (drafting and reviewing contracts), Spellbook (contract creation), Lex Machina (predicting case outcomes), and vLex (legal research) are shaping how legal work is done.

Predictive AI in Law and the Kerala High Court's Call for Caution
The rise of models like SCOTUSbot, The Economist's AI tool for predicting Supreme Court rulings, will further redefine the relationship between generative AI and law. While The Economist cautions that AI won't replace human analysis, it notes that 'if justices faithfully follow legal principles, an AI aware of all the precedents ought to predict their votes fairly reliably.' The accuracy of SCOTUSbot's predictions will be crucial in assessing the robustness of such tools.

At the same time, the Kerala High Court's recent policy on AI use in the District Judiciary reflects growing concerns about such technologies. While not banning AI entirely, the policy restricts tools that mimic human cognition, warning that they may erode public trust and compromise key judicial values like fairness, transparency, and accountability. The Court has therefore advised extreme caution and barred the use of AI in legal reasoning or decision-making.

Legal Education's Role in Ethical AI Integration and Critical Thinking
This responsibility extends beyond courtrooms. Higher education institutions have an enormous role in ensuring that future lawyers use AI ethically while also becoming active participants in the AI revolution.
As generative AI tools increasingly assist with legal research, drafting, and judgment summarisation, universities must help students use these technologies with responsibility and critical awareness. The World Economic Forum's 2024 report, Shaping the Future of Learning: The Role of AI in Education 4.0, highlights the transformative potential of AI in revolutionising teaching methods and enabling lifelong, student-driven learning. However, it also raises a crucial question: how do we integrate AI into education without compromising critical thinking and students' capacity for innovation?

A major challenge is the growing tendency toward cognitive offloading. As noted by researchers Evan Risko and Sam Gilbert, students often rely on AI to find the least effortful path to problem solving, which can hinder their ability to think critically. This tendency, coupled with cognitive miserliness, risks diminishing essential skills that legal education must nurture.

Integrating AI into legal education requires a careful balance of technological innovation, adaptive expertise, and ethical reflexivity. A recent University of Toronto study on imaginative problem solving found that AI-aided responses were less innovative than those generated by students working independently. When asked how to reuse a worn pair of trousers, the AI generated an image of a scarecrow, while a student created a novelty bird feeder. The study highlights the importance of preserving imagination and original thinking in an AI-supported learning environment.

Therefore, legal education must promote a culture where AI is treated not as a shortcut but as a tool that deepens understanding. Law schools must train students to use AI in ways that support argumentation, legal interpretation, and ethical reasoning. The goal is to create legal professionals who are not only technologically skilled but also committed to justice, fairness, and integrity.
Rethinking Curriculum and Learning Taxonomies
As we integrate AI into higher education, it becomes necessary to rethink how we design curricula and structure the learning process. Traditional teaching methods still rely heavily on didactic approaches, where the focus is on remembering and understanding, key stages in Bloom's Revised Taxonomy. While these remain important, AI tools can help shift the emphasis toward higher-order thinking skills such as analysing, evaluating, and solving problems. This perspective was also reflected in discussions at the European Conference on Education, where educators examined how AI can support a shift toward critical thinking in the classroom. While there was broad agreement on AI's potential to transform learning, some participants noted the continued importance of foundational stages like remembering and understanding, particularly in contexts where educational inequality persists, such as in parts of the global South.

Emphasising Skill Competency and Social Inquiry in Law
Another way higher education can truly balance AI and the natural learning process is by emphasising skill competency and social inquiry. In law, generative AI tools can help synthesise and summarise judgments and create a rich knowledge base, but higher education holds the key to ensuring that this knowledge is relevant, diligently reviewed, and empowers lawyers to argue without compromising their innovative skills or their commitment to justice.

The Limitations of AI in Addressing Justice and Context
For example, in a teaching session on 'ChatGPT(ing) and Grok(ing) in the classrooms: Is it time to permit the use of Large Language Chat Models in higher education?', two questions were put to an LLM: 'Black people have more criminal records' and 'Should there be caste-based reservations in India?' The AI's responses recognised systemic issues, historical injustice, and diversity, and were grounded in data and statistics.
However, both responses lacked the perspectives needed to define what justice means in the current context and its future validity. The latter is indeed the sine qua non of legal education.

Implementing AI Responsibly in Legal Education
No one wants to be left behind in the AI race. This forces us to ask how far innovation may violate ethical principles and add to the human cost of injustice. At the same time, we cannot discount the immense opportunities AI offers the legal profession. Ultimately, it is a question of balance. Conformists may view the use of AI as legal blasphemy, but one must not forget that the light bulb, the printing press, and the computer were all met with ridicule, only to be seen as among the most inspirational inventions of humanity. Higher education should therefore embrace AI in its curricula, albeit responsibly, without compromising core skills.

(This article is authored by Dr. Mukul Saxena, Professor and Director, Centre for Postgraduate and Legal Studies, and Centre of Excellence in Public Policy, Alliance School of Law)


Hindustan Times
05-06-2025
- Politics
Can AI predict Supreme Court rulings?
This June may be the most harried for the Supreme Court's justices in some time. On top of 30-odd rulings due by Independence Day, the court faces a steady stream of emergency pleas. Over 16 years, George W. Bush and Barack Obama filed a total of eight emergency applications in the Supreme Court (SCOTUS). In the past 20 weeks, as many of his executive orders have been blocked by lower courts, Donald Trump has filed 18.

Into this maelstrom, The Economist is introducing a tool to help analyse how the high court is acquitting itself under pressure. A year ago Adam Unikowsky, a regular litigator before the justices, enlisted Claude, Anthropic's large language model (LLM), to decide 37 Supreme Court cases. Claude's decision matched the court's 27 times. Inspired by this example, we tested several models of our own and settled on o3, OpenAI's best reasoning engine for ChatGPT.

We fed our SCOTUSbot the main briefs and oral-argument transcripts for ten of the court's biggest pending cases, plus three cases that have already been decided, and asked it to predict how each justice would vote and why. For good measure, we repeated each of the 13 queries at least ten times: these models are statistical beasts that reason with probabilities and do not always come to the same conclusion. Then we ran each case through the bot (again, ten times) without the oral arguments. Each time we requested a justice-by-justice roll call, with brief explanations of their votes and a 400-word majority opinion.

Our beta version of SCOTUSbot does not threaten to replace human analysis of the Supreme Court's work. But it offers a lens on AI itself: how reliably it can predict case outcomes, how much light oral arguments may shed on those predictions and whether, when AI gets it wrong, this says more about the limits of LLMs or the unpredictability of the justices themselves.
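The protocol described above, running each query at least ten times because the model does not always reach the same conclusion, amounts to majority-voting over repeated samples. A minimal sketch of that aggregation step (the function name, vote labels, and sample data are illustrative, not The Economist's actual pipeline):

```python
from collections import Counter

def aggregate_runs(runs):
    """Combine repeated model runs into a consensus prediction.

    `runs` is a list of dicts, one per run, mapping each justice's
    name to a predicted vote ("petitioner" or "respondent").
    Returns the per-justice majority vote and the resulting roll call.
    """
    justices = runs[0].keys()
    consensus = {}
    for justice in justices:
        # Tally this justice's vote across all runs; keep the most common.
        votes = Counter(run[justice] for run in runs)
        consensus[justice] = votes.most_common(1)[0][0]
    # Overall roll call implied by the consensus votes.
    tally = dict(Counter(consensus.values()))
    return consensus, tally

# Three hypothetical runs for a single case (made-up votes).
runs = [
    {"Roberts": "petitioner", "Sotomayor": "respondent", "Thomas": "petitioner"},
    {"Roberts": "petitioner", "Sotomayor": "respondent", "Thomas": "petitioner"},
    {"Roberts": "respondent", "Sotomayor": "respondent", "Thomas": "petitioner"},
]
consensus, tally = aggregate_runs(runs)
print(consensus)
print(tally)
```

The same per-justice tallies also expose the uncertainty the article describes: a justice who splits 5-5 across ten runs, as in the Skrmetti example, simply has no stable majority vote.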
If the justices faithfully follow legal principles, an AI aware of all the precedents ought to predict their votes fairly reliably. If politics drives some decisions, the patterns may be less clear.

SCOTUSbot nailed the three sample rulings: Bondi v VanDerStok (allowing the government to regulate ghost guns as firearms), Food and Drug Administration v Wages and White Lion (allowing the FDA to reject sweet vape flavours that attract children) and Seven County Infrastructure Coalition v Eagle County (greenlighting the construction of a railway carrying crude oil in Utah). Our model was very close on the votes, too. It showed its smarts even when it committed one apparent error: SCOTUSbot predicted three dissenters in Seven County, a case that came out unanimously. As it happened, the trio SCOTUSbot identified (the three liberal justices) did write separately to defend a more restrained approach.

What about the ten impending decisions? SCOTUSbot predicts broad agreement among the justices in three. Ames v Ohio involves a woman who says she was demoted because she is heterosexual; the question is how hard it should be to claim 'reverse discrimination' in the workplace under Title VII of the Civil Rights Act of 1964. The model predicts a unanimous decision that courts may not 'demand extra proof from some plaintiffs based solely on their demographic status'. Only one or two dissenters are projected in Kennedy v Braidwood Management, a challenge to free preventive health services guaranteed under the Affordable Care Act. SCOTUSbot reckons the plaintiffs will lose. A 9-0 or 8-1 decision may also be coming in Catholic Charities Bureau v Wisconsin Labour & Industry Review Commission, which asks if Wisconsin must give a Catholic charitable group the same state-tax exemption that churches and other religious organisations enjoy. Based only on the briefs, SCOTUSbot predicted an ideological 6-3 split.
But after hearing the liberal justices' support for Catholic Charities in the oral argument, it settled on a 9-0 or 8-1 result, suggesting oral arguments can reveal considerations that briefs alone miss. This is also evident in Trump v CASA, the challenge to nationwide injunctions blocking Mr Trump's attempt to rescind the 14th Amendment's guarantee of birthright citizenship. After reading the transcript, SCOTUSbot became more resolute in its prediction that Mr Trump will lose, 7-2.

Two cases are more likely to split the justices 6-3 along familiar conservative-liberal lines. In Medina v Planned Parenthood South Atlantic, the majority seems amenable to South Carolina's position that it can block its residents enrolled in Medicaid from receiving care in Planned Parenthood clinics. In Mahmoud v Taylor the six conservatives seem keen on allowing religious parents to pull their children out of public-school classrooms so as to avoid exposing them to LGBT-themed books. But in Federal Communications Commission v Consumers' Research, SCOTUSbot predicts a different 6-3 majority to let the FCC collect fees to subsidise telephone and internet service in underserved areas. Only Justices Samuel Alito, Neil Gorsuch and Clarence Thomas seem poised to dissent.

Our AI model is not always sure of itself. Does Louisiana's latest congressional map with a second majority-black district violate the 14th Amendment? Probably not, but SCOTUSbot is less confident than usual. Can Texas require age verification for online pornography? Free Speech Coalition v Paxton came out both ways in the bot's repeated iterations, but it seems this clash will turn on Chief Justice John Roberts and Justices Amy Coney Barrett and Brett Kavanaugh. The bot was downright Solomonic in United States v Skrmetti, concerning whether Tennessee can ban puberty blockers and cross-sex hormones for transgender teenagers. In ten run-throughs, the state won five times and lost in the other five.
In addition to these moments of uncertainty, SCOTUSbot seems susceptible to blowing a fuse. Occasionally it provides analysis of the wrong case. When we deprive it of the oral arguments, the bot can be unhinged: 'Williams: majority opinion. Jones: Concurring on points, disagreeing with others…Petitioner wins 2-2.' We're still getting to know our SCOTUSbot; culling nonsense from cogent analysis is part of the relationship. And on the cases that attract the most political interest, the emergency cases brought by the Trump administration, SCOTUSbot is silent because such cases are typically addressed without oral arguments.

Lessons of this beta test will become clear after this month's decisions are handed down. When we have those, we will examine the predictions and whether cases that flummox our bot split the justices in novel ways. But SCOTUSbot is unequivocal about how Trump v United States, last year's presidential-immunity case, should have come out: 7-2 or 9-0, it insists, against Mr Trump. A new precedent might establish 'a monolithic rule' that does 'unacceptable violence to fundamental legal principles and risks rendering the president "above the law"'. The justices, of course, saw it differently.