
Justice at stake as generative AI enters the courtroom
Agencies
Generative artificial intelligence (GenAI) is making its way into courts despite early stumbles, raising questions about how it will influence the legal system and justice itself.
Judges use the technology for research, lawyers utilize it for appeals and parties involved in cases have relied on GenAI to help express themselves in court.
'It's probably used more than people expect,' said Daniel Linna, a professor at the Northwestern Pritzker School of Law, about GenAI in the U.S. legal system.
'Judges don't necessarily raise their hand and talk about this to a whole room of judges, but I have people who come to me afterward and say they are experimenting with it.'

In one prominent instance, GenAI enabled murder victim Chris Pelkey to address an Arizona courtroom in the form of a video avatar -- at the sentencing of the man convicted of shooting him dead in 2021 during a clash between motorists.
'I believe in forgiveness,' said a digital proxy of Pelkey created by his sister, Stacey Wales.
The judge voiced appreciation for the avatar, saying it seemed authentic.
'I knew it would be powerful,' Wales told AFP, 'that it would humanize Chris in the eyes of the judge.'
The AI testimony, a first of its kind, ended the sentencing hearing at which Wales and other members of the slain man's family spoke about the impact of the loss.
Since the hearing, examples of GenAI being used in U.S. legal cases have multiplied.
'It is a helpful tool and it is time-saving, as long as the accuracy is confirmed,' said attorney Stephen Schwartz, who practices in the northeastern state of Maine. 'Overall, it's a positive development in jurisprudence.'
Schwartz described using ChatGPT as well as GenAI legal assistants, such as LexisNexis Protege and CoCounsel from Thomson Reuters, for researching case law and other tasks.
'You can't completely rely on it,' Schwartz cautioned, recommending that cases proffered by GenAI be read to ensure accuracy.
'We are all aware of a horror story where AI comes up with mixed-up case things.'

The technology has been the culprit behind false legal citations, far-fetched case precedents, and flat-out fabrications.
In early May, a federal judge in Los Angeles imposed $31,100 in fines and damages on two law firms for an error-riddled petition drafted with the help of GenAI, blasting it as a 'collective debacle.'

The tech is also being relied on by some people who skip lawyers and represent themselves in court, often resulting in legal errors.
And as GenAI makes it easier and cheaper to draft legal complaints, courts already overburdened by caseloads could see them climb higher, said Shay Cleary of the National Center for State Courts.
'Courts need to be prepared to handle that,' Cleary said.
Law professor Linna, though, sees the potential for GenAI to be part of the solution, giving more people the ability to seek justice in courts made more efficient by the technology.
'We have a huge number of people who don't have access to legal services,' Linna said. 'These tools can be transformative; of course we need to be thoughtful about how we integrate them.'
Related Articles


Al Jazeera
Google's AI video tool amplifies fears of an increase in misinformation
In both Tehran and Tel Aviv, residents have faced heightened anxiety in recent days as the threat of missile strikes looms over their communities. Alongside the very real concerns for physical safety, there is growing alarm over the role of misinformation, particularly content generated by artificial intelligence, in shaping public perception.

GeoConfirmed, an online verification platform, has reported an increase in AI-generated misinformation, including fabricated videos of air strikes that never occurred, both in Iran and Israel. This follows a similar wave of manipulated footage that circulated during recent protests in Los Angeles, which were sparked by a rise in immigration raids in the second-most populous city in the United States. The developments are part of a broader trend of politically charged events being exploited to spread false or misleading narratives.

The launch of a new AI product by one of the largest tech companies in the world has added to concerns about telling fact from fiction. Late last month, Google's AI research division, DeepMind, released Veo 3, a tool capable of generating eight-second videos from text prompts. The system, one of the most comprehensive currently available for free, produces highly realistic visuals and sound that can be difficult for the average viewer to distinguish from real footage.

To see exactly what it can do, Al Jazeera created a fake video in minutes using a prompt depicting a protester in New York claiming to be paid to attend, a common talking point Republicans historically have used to delegitimise protests, accompanied by footage that appeared to show violent unrest. The final product was nearly indistinguishable from authentic footage. Al Jazeera also created videos showing fake missile strikes in both Tehran and Tel Aviv using the prompts 'show me a bombing in Tel Aviv' and then a similar prompt for Tehran. Veo 3 says on its website that it blocks 'harmful requests and results', but Al Jazeera had no problems making these fake videos.

'I recently created a completely synthetic video of myself speaking at Web Summit using nothing but a single photograph and a few dollars. It fooled my own team, trusted colleagues, and security experts,' said Ben Colman, CEO of deepfake detection firm Reality Defender, in an interview with Al Jazeera. 'If I can do this in minutes, imagine what motivated bad actors are already doing with unlimited time and resources.'

He added, 'We're not preparing for a future threat. We're already behind in a race that started the moment Veo 3 launched. Robust solutions do exist and work — just not the ones the model makers are offering as the be-all, end-all.'

Google says it is taking the issue seriously. 'We're committed to developing AI responsibly, and we have clear policies to protect users from harm and govern the use of our AI tools. Any content generated with Google AI includes a SynthID watermark, and we add a visible watermark to Veo videos as well,' a company spokesperson told Al Jazeera.

'They don't care about customers'

However, experts say the tool was released before those features were fully implemented, a move some believe was reckless. Joshua McKenty, CEO of deepfake detection company Polyguard, said that Google rushed the product to market because it had been lagging behind competitors like OpenAI and Microsoft, which have released more user-friendly and publicised tools. Google did not respond to these claims.
'Google's trying to win an argument that their AI matters when they've been losing dramatically,' McKenty said. 'They're like the third horse in a two-horse race. They don't care about customers. They care about their own shiny tech.'

That sentiment was echoed by Sukrit Venkatagiri, an assistant professor of computer science at Swarthmore College. 'Companies are in a weird bind. If you don't develop generative AI, you're seen as falling behind and your stock takes a hit,' he said. 'But they also have a responsibility to make these products safe when deployed in the real world. I don't think anyone cares about that right now. All of these companies are putting profit — or the promise of profit — over safety.'

Google's own research, published last year, acknowledged the threat generative AI poses. 'The explosion of generative AI-based methods has inflamed these concerns [about misinformation], as they can synthesise highly realistic audio and visual content as well as natural, fluent text at a scale previously impossible without an enormous amount of manual labour,' the study read.

Demis Hassabis, CEO of Google DeepMind, has long warned his colleagues in the AI industry against prioritising speed over safety. 'I would advocate not moving fast and breaking things,' he told Time in 2023. He declined Al Jazeera's request for an interview.

Yet despite such warnings, Google released Veo 3 before fully implementing safeguards, leading to incidents like the one the National Guard had to debunk in Los Angeles after a TikTok account made a fake 'day in the life' video of a soldier that said he was preparing for 'today's gassing' — referring to releasing tear gas on protesters.

Mimicking real events

The implications of Veo 3 extend far beyond protest footage. In the days following its release, several fabricated videos mimicking real news broadcasts circulated on social media, including one of a false report about a home break-in that included CNN graphics. Another clip falsely claimed that JK Rowling's yacht sank off the coast of Turkiye after an orca attack, attributing the report to Alejandra Caraballo of Harvard Law's Cyberlaw Clinic, who built the video to test out the tool.

In a post, Caraballo warned that such tech could mislead older news consumers in particular. 'What's worrying is how easy it is to repeat. Within ten minutes, I had multiple versions. This makes it harder to detect and easier to spread,' she wrote. 'The lack of a chyron [banner on a news broadcast] makes it trivial to add one after the fact to make it look like any particular news channel.'

In our own experiment, we used a prompt to create fake news videos bearing the logos of ABC and NBC, with voices mimicking those of CNN anchors Jake Tapper, Erin Burnett, John Berman, and Anderson Cooper. 'Now, it's just getting harder and harder to tell fact from fiction,' Caraballo told Al Jazeera. 'As someone who's been researching AI systems for years, even I'm starting to struggle.'

This challenge extends to the public, as well. A study by Penn State University found that 48 percent of consumers were fooled by fake videos circulated via messaging apps or social media. Contrary to popular belief, younger adults are more susceptible to misinformation than older adults, largely because younger generations rely on social media for news, which lacks the editorial standards and legal oversight of traditional news organisations.
A UNESCO survey from December showed that 62 percent of news influencers do not fact-check information before sharing it.

Google is not alone in developing tools that facilitate the spread of synthetic media. Companies like Deepbrain offer users the ability to create AI-generated avatar videos, though with limitations, as they cannot produce full-scene renders like Veo 3. Deepbrain did not respond to Al Jazeera's request for comment. Other tools like Synthesia and Dubverse allow video dubbing, primarily for translation.

This growing toolkit offers more opportunities for malicious actors. A recent incident involved a fabricated news segment in which a CBS reporter in Dallas was made to appear to make racist remarks. The software used remains unidentified. CBS News Texas did not respond to a request for comment.

As synthetic media becomes more prevalent, it poses unique risks that will allow bad actors to push manipulated content that spreads faster than it can be corrected, according to Colman. 'By the time fake content spreads across platforms that don't check these markers [which is most of them], through channels that strip them out, or via bad actors who've learned to falsify them, the damage is done,' Colman said.


Qatar Tribune
Young Mozambican inventor bringing the blind smart 'vision'
Agencies

When Armando Ernesto Chau straps on the futuristic smart glasses that a young Mozambican robotics student is developing in the family dining room, he has a vision of a life less confined to his modest home.

Chau is the prototype tester for Joao Antonio Rego, a 24-year-old robotics and electronic engineering student driven to provide visually impaired Mozambicans with assistance that goes beyond a simple cane. Since he lost his sight 20 years ago, the 45-year-old father has not worked and rarely leaves his home in Matola, outside the capital Maputo.

Rego's electronic glasses — battery-powered devices embedded with sensors that scan for obstacles ahead and emit warning vibrations — offer the promise of new possibilities.

'It is vibrating ... it is those bushes,' Chau said, demonstrating for AFP Rego's Vision Hope 0.2. 'Maybe, there is a window here... yes.' 'Because of these obstacles, it vibrates. So I go back,' he said. 'It stopped. See? Then it says there is something on this side... When I turn, it is quiet.'

Resembling a virtual reality eye mask, this is Rego's latest prototype since he launched his Vision Hope project in 2021, winning Mozambique's Young Creative Award for technological innovation the following year.

New features include a larger 120-degree range and more accurate sensors, explained Rego, a student at Eduardo Mondlane University. The battery, attached to a strap that is worn over a shoulder, is on a smart system that saves power and warns when it is running low. A GPS allows others to know the whereabouts of the user.

Rego is already working on improvements in his dining room workshop. 'I want the next version to have sensors capable of detecting very thin obstacles like wires and threads,' he told AFP. 'The coating also needs to be waterproof,' he said.

Slim and serious, Rego was inspired to help when, years ago, he saw a visually impaired woman fall in a busy street in downtown Maputo, said his mother, Helena Inacio. 'Seeing that woman on the ground disturbed him. He vowed that he would create glasses,' she told AFP. She had asked: ''Glasses for what? So that blind people can see?' He said, 'No, to give direction.'' 'I thought it was fantasy,' Inacio said.

Rego moved his lab out of his bedroom for better ventilation after a health scare led a doctor to warn about the risks of fumes from his soldering work. 'I had health problems and after an X-ray, they said there were some spots on my lungs which might have been caused by chemical fumes, like tin. It was temporary, but I must always take precautions,' he said.

Rego's own vision is to secure partnerships that will allow him to one day produce and distribute his glasses across his impoverished country, where nearly 2.7 million people suffer vision loss, according to the International Agency for the Prevention of Blindness.

Chau, who lost his sight after falling ill in 2005 and undergoing treatment in hospital, has made some suggestions for the next iteration. 'I told him to first improve the roadside verification system,' he said. He would also like a sensor that can detect the pools of stagnant water that are common in his area. And, if possible, a way for detected obstacles to be identified. 'A system that communicates... about what kind of obstacle is in front of me, if it is a human being, a car,' he said.

'If the glasses are made the way I suggest, it will help us a lot, me and many other visually impaired people out there,' said Chau.
When they are in production and he can get his own pair, the glasses will give Chau a new lease of life, said his wife, Felizarda Nhampule. 'Sometimes he stays here at home alone while I go out and do my errands. Sometimes he wants to go out somewhere but can't,' she said.


Al Jazeera
US judge allows company to train AI using copyrighted literary materials
A United States federal judge has ruled that the company Anthropic made 'fair use' of the books it utilised to train artificial intelligence (AI) tools without the permission of the authors.

The favourable ruling comes at a time when the impacts of AI are being discussed by regulators and policymakers, and the industry is using its political influence to push for a loose regulatory framework.

'Like any reader aspiring to be a writer, Anthropic's LLMs [large language models] trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different,' US District Judge William Alsup said.

A group of authors had filed a class-action lawsuit alleging that Anthropic's use of their work to train its chatbot, Claude, without their consent was illegal. But Alsup said that the AI system had not violated the safeguards in US copyright laws, which are designed for 'enabling creativity and fostering scientific progress'. He accepted Anthropic's claim that the AI's output was 'exceedingly transformative' and therefore fell under the 'fair use' protections.

Alsup, however, did rule that Anthropic's copying and storage of seven million pirated books in a 'central library' infringed author copyrights and did not constitute fair use.

The fair use doctrine, which allows limited use of copyrighted materials for creative purposes, has been employed by tech companies as they create generative AI. Technology developers often sweep up large swaths of existing material to train their AI models. Still, fierce debate continues over whether AI will facilitate greater artistic creativity or allow the mass production of cheap imitations that render artists obsolete, to the benefit of large companies.

The writers who brought the lawsuit — Andrea Bartz, Charles Graeber and Kirk Wallace Johnson — alleged that Anthropic's practices amounted to 'large-scale theft', and that the company had sought to 'profit from strip-mining the human expression and ingenuity behind each one of those works'.

While Tuesday's decision was considered a victory for AI developers, Alsup nevertheless ruled that Anthropic must still go to trial in December over the alleged theft of pirated works. The judge wrote that the company had 'no entitlement to use pirated copies for its central library'.