
Justice at stake as generative AI enters the courtroom
By Thomas URBAIN
Generative artificial intelligence (GenAI) is making its way into courts despite early stumbles, raising questions about how it will influence the legal system and justice itself.
Judges use the technology for research, lawyers use it for appeals, and parties to cases have relied on GenAI to help express themselves in court.
"It's probably used more than people expect," said Daniel Linna, a professor at the Northwestern Pritzker School of Law, about GenAI in the U.S. legal system. "Judges don't necessarily raise their hand and talk about this to a whole room of judges, but I have people who come to me afterward and say they are experimenting with it."
In one prominent instance, GenAI enabled murder victim Chris Pelkey to address an Arizona courtroom -- in the form of a video avatar -- at the sentencing of the man convicted of shooting him dead in 2021 during a clash between motorists.
"I believe in forgiveness," said a digital proxy of Pelkey created by his sister, Stacey Wales.
The judge voiced appreciation for the avatar, saying it seemed authentic.
"I knew it would be powerful," Wales told AFP, "that that it would humanize Chris in the eyes of the judge."
The AI testimony, a first of its kind, ended the sentencing hearing at which Wales and other members of the slain man's family spoke about the impact of the loss.
Since the hearing, examples of GenAI being used in U.S. legal cases have multiplied.
"It is a helpful tool and it is time-saving, as long as the accuracy is confirmed," said attorney Stephen Schwartz, who practices in the northeastern state of Maine. "Overall, it's a positive development in jurisprudence."
Schwartz described using ChatGPT as well as GenAI legal assistants, such as LexisNexis Protege and CoCounsel from Thomson Reuters, for researching case law and other tasks.
"You can't completely rely on it," Schwartz cautioned, recommending that cases proffered by GenAI be read to ensure accuracy. "We are all aware of a horror story where AI comes up with mixed-up case things."
The technology has been the culprit behind false legal citations, far-fetched case precedents, and flat-out fabrications.
In early May, a federal judge in Los Angeles imposed $31,100 in fines and damages on two law firms for an error-riddled petition drafted with the help of GenAI, blasting it as a "collective debacle."
The technology is also relied on by some litigants who skip lawyers and represent themselves in court, a practice that often leads to legal errors.
And as GenAI makes it easier and cheaper to draft legal complaints, courts already overburdened by caseloads could see them climb higher, said Shay Cleary of the National Center for State Courts.
"Courts need to be prepared to handle that," Cleary said.
Law professor Linna sees the potential for GenAI to be part of the solution though, giving more people the ability to seek justice in courts made more efficient.
"We have a huge number of people who don't have access to legal services," Linna said. "These tools can be transformative; of course we need to be thoughtful about how we integrate them."
Federal judges in the U.S. capital have written decisions noting their use of ChatGPT in laying out their opinions.
"Judges need to be technologically up-to-date and trained in AI," Linna said.
GenAI assistants already have the potential to influence the outcome of cases the same way a human law clerk might, reasoned the professor.
Facts or case law surfaced by GenAI might sway a judge's decision, and could differ from what a law clerk would have come up with.
But if GenAI lives up to its potential and excels at finding the best information for judges to consider, that could make for well-grounded rulings less likely to be overturned on appeal, according to Linna.
© 2025 AFP

Related Articles


Asahi Shimbun
EDITORIAL: Trump must not be allowed to insert ideology into AI responses
U.S. President Donald Trump at this year's Group of Seven summit (Asahi Shimbun file photo)

U.S. President Donald Trump has announced an AI Action Plan with the goal of making the United States the world leader in development of artificial intelligence. It is a major change of direction from the Biden administration, which emphasized control of the risks associated with AI.

The Trump administration wants to accelerate technology and infrastructure development through deregulation and increased investment. The action plan will encourage the export of AI technology to friendly nations in a bid for supremacy in competition with an emerging China.

A surprising aspect of the plan is the level of intervention it envisages into the output from generative AI technology such as ChatGPT. National guidelines for AI risk control were revised to eliminate from answers references to diversity, equity and inclusion, widely known as DEI, and climate change. The federal government is supposed to sign contracts only with companies whose AI technology is objective and does not contain ideological bias.

While the Trump administration says such measures are needed to protect freedom of speech and American values, the action plan conversely carries the risk of gagging free speech. To begin with, it is very difficult to establish neutral standards to help determine ideological bias. The bias could be defined according to the intent of the administration in power.

DEI initially sought to counter discrimination based on gender, race, nationality or disability and to ensure fair opportunity for those who had been treated unfavorably in the past. Trump has long been negative about measures related to DEI and global warming. Since the start of his second term, he has implemented various steps reflecting that thinking. We cannot ignore the attempt to interfere with generative AI, which is spreading rapidly in society and people's daily lives.

It is possible that the federal government's procurement conditions will become the standards regarding generative AI. That would have a major impact on Japan, which depends on the United States for much of its generative AI technology.

Citing an example of bias in generative AI, Trump pointed to white male figures from American history who were transformed in AI answers into women or members of other races.

Generative AI undertakes deep learning using a massive amount of data and subsequently goes through repeated tuning to produce answers. If there is bias in the data used for deep learning, the answers will reflect such bias, and the tuning process could generate the wrong answer. Put another way, if the learning data is selected arbitrarily and certain algorithms are intentionally chosen during the tuning process, it would be possible to produce answers that reflect a certain bias.

This is one characteristic of generative AI that we must always keep in mind. It could be used by iron-fisted regimes to gag free speech. It was pointed out that the generative AI technology released this year by the Chinese startup DeepSeek avoided or rejected answers that were inconvenient to Beijing, such as regarding the Tiananmen Square incident. Trump's version of intervention, which seeks to delete certain references, is fundamentally no different.

--The Asahi Shimbun, Aug. 4


Japan Today
Japan ready to compile extra budget to cushion U.S. tariff blow, Ishiba says
Japan's Prime Minister Shigeru Ishiba, wearing formal attire, walks toward the upper house hall to attend an opening ceremony of the parliamentary session in Tokyo on Friday.

By Leika Kihara

Japanese Prime Minister Shigeru Ishiba said on Monday the government is ready to compile an extra budget to cushion the economic blow from U.S. tariffs, a move that would add strain to the country's already worsening finances.

After suffering a stinging defeat in last month's upper house election, Ishiba's minority coalition is under pressure to heed opposition parties' demands to boost spending and cut Japan's sales tax. "We will compile one if necessary, taking into account discussions with other parties," Ishiba told parliament when asked by an opposition lawmaker whether the government would compile an extra budget that includes tax cuts.

If the government were to compile a stimulus package, an extra budget to fund the spending would be submitted to an extraordinary parliament session likely to be convened in September.

Japan's trade deal struck with President Donald Trump last month lowers U.S. tariffs on imports of goods including its mainstay automobiles, easing the pain for the export-reliant economy. But there is no clarity on when U.S. tariffs on automobiles and auto parts will be cut to 15% from the current 25%, clouding the outlook for Japan's fragile recovery.

Compiling an extra budget has become a regular practice in Japan as politicians call for increased spending to support the economy, keeping its fiscal policy loose even as other countries rolled back crisis-mode spending after the COVID-19 pandemic. Ishiba has not commented on the possible size of an extra budget, but some analysts expect it could reach around 10 trillion yen, which would require additional debt issuance.

The extra budget would come on top of a record 115.5 trillion yen budget for the current fiscal year, of which 24.5% is being spent on financing debt. Such debt-servicing costs will likely rise further as the Bank of Japan eyes more interest rate hikes, analysts say.

With rising food costs hurting consumption, opposition parties have called for slashing or eliminating Japan's sales tax, which is set at 10% except for 8% on food. Ishiba, who is regarded as a fiscal hawk, has been cautious about cutting the sales tax, which funds social welfare costs for a rapidly aging population. A flurry of big spending packages and ballooning social welfare costs have left Japan with a debt pile 250% the size of its economy, the highest among major economies.

© Thomson Reuters 2025.


Japan Today
AI can be your wingman when online dating – but should you let it?
By Natasha McKeever and Luke Brunning

Many dating app companies are enthusiastic about incorporating generative AI into their products. Whitney Wolfe Herd, founder of dating app Bumble, wants gen-AI to 'help create more healthy and equitable relationships'. In her vision of the near future, people will have AI dating concierges who could 'date' other people's dating concierges for them, to find out which pairings were most compatible.

Dating app Grindr is developing an AI wingman, which it hopes to have up and running by 2027. Match Group, owner of popular dating apps including Tinder, Hinge and OK Cupid, has also expressed keen interest in using gen-AI in its products, believing recent advances in AI technology 'have the power to be transformational, making it more seamless and engaging for users to participate in dating apps'. One of the ways they think gen-AI can do this is by enhancing 'the authenticity of human connections'.

Use of gen-AI in online dating is not just some futuristic possibility, though. It's already here. Want to enhance your photos or present yourself in a different style? There are plenty of online tools for that. Similarly, if you want AI to help 'craft the perfect, attention-grabbing bio' for you, it can do that. AI can even help you make conversation, by analysing your chat history and suggesting ways to reply.

Extra help

It isn't just dating app companies that are enthusiastic about AI use in dating apps. A recent survey of 5,000 gen-Zers and millennials, carried out by Cosmopolitan magazine and Bumble, found that 69% of respondents were excited about 'the ways AI could make dating easier and more efficient'. An even higher proportion (86%) 'believe it could help solve pervasive dating fatigue'. Some 86% of the men and 77% of the women surveyed would share their message history with AI to help guide their dating app conversations.

It's not hard to see why AI is so appealing for dating app users and providers. Dating apps seem to be losing their novelty: many users are reportedly abandoning them due to so-called 'dating app fatigue', feeling bored and burnt out with dating apps. Apps and users might hope that gen-AI can make dating apps fun again, or, if not fun, at least make them actually lead to dates. Some AI dating companions claim to get you ten times more dates, and better dates at that. Given that men tend to get fewer matches on dating apps than women, it's also not surprising that we're seeing more enthusiasm from men than women about the possibilities AI could bring.

Talk of gen-AI in connection with online dating gives rise to many ethical concerns. We at the Ethical Dating Online Network, an international network of more than 30 multi-disciplinary academics interested in how online dating could be more ethical, think that dating app companies need to convincingly answer these worries before rushing new products to market. Here are a few standout issues.

Pitfalls of AI dating

Technology companies correctly identify some contemporary social issues, such as loneliness, anxiety about social interactions and concerns about dating culture, as hindering people's dating lives. But turning to more technology to solve these issues puts us at risk of losing the skills we need to make close relationships work. The more we can reach for gen-AI to guide our interactions, the less we might be tempted to practise on our own, or to take accountability for what we communicate. After all, an AI 'wingman' is of little use when meeting in person.

AI tools also risk entrenching much of the dating culture that people find stressful. Norms around 'banter', attractiveness or flirting can make the search for intimacy seem like a competitive battleground. The way AI works, learning from existing conversations, means that it will reproduce these less desirable aspects. Instead of embracing those norms and ideals, and trying to equip everyone with the tools to seemingly meet impossibly high standards, dating app companies could do more to 'de-escalate' dating culture: make it calmer and more ordinary, and help people be vulnerable. For example, they could rethink how they charge for their products, encourage a culture of honesty, and look at alternatives to 'swiping' interfaces.

The possibility of misrepresentation is another concern. People have always massaged the truth when it comes to dating, and the internet has made this easier. But the more we are encouraged to use AI tools, and the more they are embedded in dating apps, the more easily bad actors can take advantage of the vulnerable. An AI-generated photo, or conversation, can lead to exchanges of bank details, grooming and sexual exploitation.

Even short of fraud, there is a looming crisis of intimate authenticity. Online dating awash with AI-generated material risks becoming a murky experience. A sincere user might struggle to identify like-minded matches on apps where use of AI is common. This interpretive burden is annoying for anyone, but it will exacerbate the existing frustrations women, more so than men, experience on dating apps as they navigate spaces full of timewasting, abuse, harassment and unwanted sexualisation. Indeed, women might worry that AI will turbo-charge the ability of some men to be a nuisance online. Bots, automation and conversation-generating tools can help some men lay claim to the attention of many women simultaneously.

AI tools may seem like harmless fun, or a useful timesaver. Some people may even wholeheartedly accept that AI-generated content is not 'authentic' and love it anyway. Without clear guardrails in place, however, and more effort by app companies to provide informed choices based on transparency about how their apps work, any potential benefits of AI will be obscured by the negative impact it has on intimacy online.
Natasha McKeever and Luke Brunning are lecturers in Applied Ethics at the University of Leeds. The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts. © The Conversation