This man was killed four years ago. His AI clone just spoke in court.
People just can't stop using generative AI tools in legal proceedings, despite repeated pushback from frustrated judges. While AI initially appeared in courtrooms in the form of bogus 'hallucinated' case citations, the trend has taken a turn, driven by increasingly sophisticated AI video and audio tools. In some instances, AI is even being used to seemingly bring victims back from the dead.
This week, a crime victim's family presented a brief video in an Arizona courtroom depicting an AI version of 37-year-old Chris Pelkey. Pelkey was shot and killed in 2021 in a road rage incident. Now, four years later, the AI-generated 'clone' appeared in court to address his convicted killer at sentencing. The video, first reported by local outlet ABC15, appears to be the first known example of a generative AI deepfake used in a victim impact statement.
'To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances,' the AI replica of Pelkey says in the video. 'In another life, we probably could have been friends.'
The video shows the AI version of Pelkey, a burly, bearded Army veteran, wearing a green hoodie and gray baseball cap. Pelkey's family reportedly created the video by training an AI model on various clips of Pelkey, then applying an 'old age' filter to simulate what he might look like today. In the end, the judge sentenced Horcasitas to 10.5 years in prison for manslaughter and said the decision was at least partly influenced by the AI-generated impact statement.
'This is the best I can ever give you of what I would have looked like if I got the chance to grow old,' the Pelkey deepfake said. 'Remember, getting old is a gift that not everybody has, so embrace it and stop worrying about those wrinkles.'
A New York man used an AI deepfake to help argue his case
The AI-generated impact statement comes just a month after a litigant in New York State court, 74-year-old Jerome Dewald, used a deepfake video to help argue his own case. When Dewald appeared in court over a contract dispute with a former employer, he presented a video showing a man in a sweater and blue dress shirt speaking directly to the camera. The judge, confused by the video, asked Dewald if the person on screen was his attorney. In reality, it was an AI-generated deepfake.
'I generated that,' Dewald said, according to The New York Times. 'That is not a real person.'
The judge wasn't pleased and reprimanded Dewald for failing to disclose that he had used AI software in his presentation. Speaking with the NYT after the hearing, Dewald claimed he hadn't intended to mislead the court but had used the AI tool to articulate his arguments more clearly. He said he initially planned to have the deepfake resemble himself but switched to the version shown in court after running into technical difficulties.
'My intent was never to deceive but rather to present my arguments in the most efficient manner possible,' Dewald reportedly said in a letter to the judges.
Related: [This AI chatbot will be playing attorney in a real US court]
AI models have 'hallucinated' fake legal cases
The two cases represent the latest examples of generative AI seeping into courtrooms, a trend that began gaining traction several years ago with the surge of public interest in chatbots like OpenAI's ChatGPT. Lawyers across the country have reportedly used these large language models to help draft legal filings and gather information. That has led to some embarrassing instances in which models 'hallucinated' entirely fabricated case names and facts that then made their way into legal proceedings.
In 2023, two New York-based lawyers were sanctioned by a judge after they submitted a brief containing six fake case citations generated by ChatGPT. Michael Cohen, the former personal lawyer of President Donald Trump, reportedly sent his attorney fake AI-generated case citations, which ended up in a motion submitted to a federal judge. Another lawyer, in Colorado, was suspended after reportedly submitting AI-generated cases in a filing. OpenAI has even been sued by a Georgia radio host who claimed a ChatGPT response falsely accused him of involvement in a real embezzlement case he had nothing to do with.
Get ready for more AI in courtrooms
Though courts have punished attorneys and defendants for using AI in ways that appear deceptive, the rules around whether it's ever acceptable to use these tools remain murky. Just last week, a federal judicial panel voted 8–1 to seek public comment on a draft rule aimed at ensuring that AI-assisted evidence meets the same standards as evidence presented by human expert witnesses. Supreme Court Chief Justice John Roberts also addressed the issue in his 2023 annual report, noting both the potential benefits and drawbacks of allowing more generative AI in the courtroom. On one hand, he observed, AI could make it easier for people with limited financial resources to defend themselves. At the same time, he warned that the technology risks 'invading privacy interests and dehumanizing the law.'
One thing seems certain: We haven't seen the last of AI deepfakes in courtrooms.
