
Latest news with #ChrisPelkey

Justice at Stake as Generative AI Enters the Courtroom

Asharq Al-Awsat

21 hours ago



Generative artificial intelligence (GenAI) is making its way into courts despite early stumbles, raising questions about how it will influence the legal system and justice itself. Judges use the technology for research, lawyers utilize it for appeals and parties involved in cases have relied on GenAI to help express themselves in court. "It's probably used more than people expect," said Daniel Linna, a professor at the Northwestern Pritzker School of Law, of GenAI in the US legal system. "Judges don't necessarily raise their hand and talk about this to a whole room of judges, but I have people who come to me afterward and say they are experimenting with it."

In one prominent instance, GenAI enabled murder victim Chris Pelkey to address an Arizona courtroom -- in the form of a video avatar -- at the sentencing of the man convicted of shooting him dead in 2021 during a clash between motorists. "I believe in forgiveness," said a digital proxy of Pelkey created by his sister, Stacey Wales. The judge voiced appreciation for the avatar, saying it seemed authentic. "I knew it would be powerful," Wales said, "that it would humanize Chris in the eyes of the judge." The AI testimony, a first of its kind, ended the sentencing hearing at which Wales and other members of the slain man's family spoke about the impact of the loss.

Since the hearing, examples of GenAI being used in US legal cases have multiplied. "It is a helpful tool and it is time-saving, as long as the accuracy is confirmed," said attorney Stephen Schwartz, who practices in the northeastern state of Maine. "Overall, it's a positive development in jurisprudence." Schwartz described using ChatGPT as well as GenAI legal assistants, such as LexisNexis Protege and CoCounsel from Thomson Reuters, for researching case law and other tasks. "You can't completely rely on it," Schwartz cautioned, recommending that cases proffered by GenAI be read to ensure accuracy. "We are all aware of a horror story where AI comes up with mixed-up case things."

The technology has been the culprit behind false legal citations, far-fetched case precedents and flat-out fabrications. In early May, a federal judge in Los Angeles imposed $31,100 in fines and damages on two law firms for an error-riddled petition drafted with the help of GenAI, blasting it as a "collective debacle." The tech is also being relied on by some who skip lawyers and represent themselves in court, often causing legal errors. And as GenAI makes it easier and cheaper to draft legal complaints, courts already overburdened by caseloads could see them climb higher, said Shay Cleary of the National Center for State Courts. "Courts need to be prepared to handle that," Cleary said.

Transformation

Law professor Linna sees the potential for GenAI to be part of the solution, though, giving more people the ability to seek justice in courts made more efficient. "We have a huge number of people who don't have access to legal services," Linna said. "These tools can be transformative; of course we need to be thoughtful about how we integrate them." Federal judges in the US capital have written decisions noting their use of ChatGPT in laying out their opinions. "Judges need to be technologically up-to-date and trained in AI," Linna said. GenAI assistants already have the potential to influence the outcome of cases the same way a human law clerk might, the professor reasoned. Facts or case law pointed out by GenAI might sway a judge's decision, and could differ from what a human clerk would have come up with. But if GenAI lives up to its potential and excels at finding the best information for judges to consider, that could make for well-grounded rulings less likely to be overturned on appeal, according to Linna.

AI-generated victim shouldn't have been allowed to speak at killer's sentencing

Yahoo

14-05-2025



Earlier this month in Maricopa County Superior Court, a dead man named Chris Pelkey testified at the sentencing hearing of the man who killed him. Except, of course, he didn't. Except, it looked like he did. Sort of.

Gabriel Horcasitas, 54, was convicted of manslaughter and endangerment in the shooting death of 37-year-old Pelkey. The killing was the end result of a road rage incident that occurred on Nov. 13, 2021. Judge Todd Lang allowed Pelkey's family to play an AI-generated version of him making a statement at the sentencing hearing. It begins with Pelkey's digital twin telling viewers that he is an AI image. It then shows some video of the actual Pelkey, then goes back to the AI version, who thanks those who spoke on his behalf, then says to the defendant: 'To Gabriel Horcasitas, the man who shot me: It is a shame we encountered each other that day in those circumstances. In another life we probably could have been friends. I believe in forgiveness and in God, who forgives. I always have. And I still do.'

Horcasitas received 12½ years in prison for both charges. Shortly after the sentencing, his attorney, Jason Lamm, said he would appeal. Lamm said, 'While victims have a right to address the court, reincarnating Chris Pelkey through AI, and, frankly, putting words in his mouth because nobody would know what he was actually going to say, it just felt wrong on many levels.'

That's because, as sincere, moving, heartfelt and even conciliatory as the AI video was, it was … wrong. On many levels.

Advances in AI allow the dead to 'speak' to us. But only with words that someone else puts in their mouths. In this case, Pelkey's sister, Stacey Wales, wrote her brother's victim impact statement. She told CNN, 'The only thing that kept entering my head that I kept hearing was Chris and what he would say. I had to very carefully detach myself in order to write this on behalf of Chris because what he was saying is not necessarily what I believe, but I know it's what he would think.'

For her family and for anyone who loved Pelkey, that is undoubtedly true. And I'd guess that at the funerals or memorial services for some people, an AI visit from the great beyond may afford grieving loved ones a sense of comfort. But a courtroom can't be a place where what someone thinks a deceased person would say is offered up by an AI avatar. Or, as Gary Marchant, an ASU professor and member of the Arizona Supreme Court's committee on AI, put it, 'Even though in this case it was very well-meaning and honest, it can easily cross over to much more dishonest and much more strategic, much more self-serving, so I think we can't [set] that precedent to allow the fake videos into court.'

Chris Pelkey clearly was much loved, and his loss was deeply felt. That message was conveyed at the sentencing by others. In life, he could speak for himself. In death, he did not need to.

This article originally appeared on Arizona Republic: Chris Pelkey spoke kindly via AI to his killer. It was wrong | Opinion

Chris Pelkey's AI-generated victim impact statement draws criticism

Yahoo

13-05-2025



The Brief

An AI-generated version of road rage victim Chris Pelkey speaking to his killer in court has gone viral. We're hearing from the lawyer of Gabriel Horcasitas, the man who killed Chris, who says it was "inauthentic."

PHOENIX - An AI-generated version of a road rage victim speaking at his killer's sentencing is turning heads.

Big picture view

It was the first time in Arizona history, and possibly nationwide, that AI has been used for a victim's own impact statement. AI is rapidly changing our world and industries, and the law is no different. The law has strict rules on what can come into legal proceedings, and sanctions for having AI cite fake cases in briefs. That's why some are urging caution before allowing this kind of technology into courtrooms.

The backstory

The unthinkable happened in an Arizona courtroom in the road rage death case of Chris Pelkey. "To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances," the AI-generated video of Chris said. He spoke directly to the man convicted of killing him at his sentencing. "I am a version of Chris Pelkey recreated through AI that uses my picture and my voice profile," Chris said. Chris's sister, Stacey, helped create the video. "I had my own thoughts and feelings about how much time I wanted the sentence to be," she said. That's why she said she had the video saying what she believes Chris would have said. "I believe in forgiveness and in God, who forgives," Chris said.

The other side

Others fear using this technology is stretching beyond ethical and moral boundaries. "Human beings have thoughts, feelings and emotions. It doesn't matter how much we try to simulate that with AI. It's simply inauthentic," Jason Lamm, the lawyer representing Horcasitas, said. Horcasitas was convicted of manslaughter and endangerment in Chris's death. He's now filed an appeal. "It's just simply inauthentic to put the words in the mouth of the likeness. It's much like Geppetto putting words in Pinocchio's mouth. Those words were a stark contrast from the reality that numerous witnesses testified to, those being Chris Pelkey's last words: challenging my client to a fight, violently getting out of his car in a crowded intersection, waving his arms in the air," Lamm said. It's not unusual for victim impact statements to involve photo galleries, PowerPoints or videos. But, Lamm says, AI is very different. "It's one thing to show a video or a photo where we have some indicia of reliability and authenticity. But when it comes to AI, you can make a likeness that's presented, in this case to a court, to say absolutely anything you want, no matter how untethered it is to the facts and reality," Lamm explained. He's not alone in his fears either. There are dozens of law review articles that contemplate the ethical dilemma of this technology.

What's next

The Arizona Supreme Court has created a steering committee for AI's role in the courts.

Indian-origin scientist in UK receives ultra-rare, 'world's first' moon dust from China

Hindustan Times

09-05-2025



In a groundbreaking case in Arizona, an AI avatar of Chris Pelkey, a man killed in a 2021 road rage incident, addressed his killer at sentencing. Pelkey's sister created the avatar to convey forgiveness, reflecting his character. The judge appreciated the emotional impact, sentencing the shooter, Gabriel Horcasitas, to 10.5 years for manslaughter. The family found solace in the AI representation.

AI deepfakes make their way into courtrooms

Yahoo

09-05-2025



Despite pushback from judges, the use of generative AI tools in legal proceedings continues to grow. It first drew attention in courtrooms through fabricated case citations, but it is now evolving with advanced video and audio technology. In a recent Arizona case, the family of a murder victim presented a video featuring an AI version of Chris Pelkey, who was killed in 2021. This AI-generated "clone" addressed his killer in court, marking the first known use of a deepfake in a victim impact statement.

In the video, the generated version of Pelkey spoke directly to the accused, expressing regret over their encounter. The judge sentenced the accused to 10.5 years in prison, noting that the AI-generated statement influenced his decision. The Pelkey family created the video by training an AI model on clips of him and applying an "old age" filter to show what he might look like today.

Gary Marchant, a law professor at Arizona State University who studies ethics and new technologies like AI, praised Pelkey's family for making a statement that seemed to go against their own goal of getting the toughest punishment for Horcasitas. However, he expressed concern about the example it sets. While prosecutors and defense attorneys have traditionally used visual aids, charts and other illustrations to support their arguments, Marchant noted that artificial intelligence introduces new ethical challenges. The situation, he remarked, is quite complicated: viewers see someone appearing to speak who is not actually doing so; in reality, that person is deceased and not speaking at all. He believes this creates an additional layer of complexity that could lead to risky situations.

In another instance, a man in New York, Jerome Dewald, used a deepfake video to support his legal defense in a contract dispute. The judge was confused, thinking the computer-generated figure was Dewald's attorney. Dewald later clarified that he created the video to help explain his case more clearly, not to mislead the court.

These examples highlight the growing use of generative AI in courtrooms, which began gaining traction with the popularity of chatbots like ChatGPT. Lawyers have used AI to draft legal documents, but this has led to problems, including the submission of fake case names generated by AI. Some lawyers have faced sanctions for using AI inappropriately, raising questions about the rules surrounding AI in legal settings.

The main ethical concern with using artificial intelligence in legal cases is the risk of bias and unfair results stemming from biased training data. Because an AI system learns from the information it is given, if that information reflects past biases, the system is likely to perpetuate and even amplify them, producing unfair outcomes. A lack of clarity around how an AI reaches its conclusions could also erode trust in the legal system and make it hard for lawyers to explain their arguments. There are worries, too, about data privacy and security, since AI often needs access to sensitive client information.

While courts have punished the misuse of AI, the guidelines for acceptable use remain unclear. Recently, a federal panel voted to seek public input on rules to ensure AI-assisted evidence meets the same standards as evidence presented by humans. Supreme Court Chief Justice John Roberts has acknowledged both the potential benefits and risks of AI in the courtroom, emphasising the need for careful consideration as this technology becomes more prevalent.
One thing is clear: AI deepfakes are likely to continue appearing in legal settings.

"AI deepfakes make their way into courtrooms" was originally created and published by Verdict, a GlobalData owned brand.
