How AI Allowed Road Rage Shooting Victim to Address His Killer from Beyond the Grave


Yahoo, 07-05-2025

The family of Christopher Pelkey, a 2021 road rage victim, used artificial intelligence to recreate his image and voice for a victim impact statement during the sentencing of his killer.
The AI-generated video featured Pelkey saying he forgave the man who shot him.
Pelkey's family described the experience as healing, saying the AI representation captured his true spirit and provided them with emotional closure by allowing them to see him one last time.
Christopher Pelkey was shot and killed in a road rage incident in Arizona in 2021 — but last month, his family members saw him again thanks to artificial intelligence.
This marked the first time in Arizona judicial history that AI had been used to create a victim impact statement for someone who had already died, the Guardian and ABC15 reported.
Pelkey's sister, Stacey Wales, and his brother-in-law used AI technology to recreate his image and voice to talk about his life and the day he met the man who shot and killed him during a confrontation in Chandler.
'To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances,' the AI video recording of Pelkey said. 'In another life, we probably could have been friends.'
'I believe in forgiveness, and a God who forgives. I always have, and I still do,' Pelkey's AI recording added.
The AI video included real clips of Pelkey showcasing his personality and humor, along with a real photo he once took with an 'old age' filter. 'Remember, getting old is a gift that not everybody has, so embrace it and stop worrying about those wrinkles,' the AI version of Pelkey said.
Authorities say that Pelkey was 37 years old when he was killed in November 2021 while he and Horcasitas were stopped at a red light.
When both vehicles were stopped at a red light, Horcasitas repeatedly honked at Pelkey, who got out of his truck, waved his arms and approached Horcasitas' car, according to an Arizona court memorandum.
Horcasitas fired twice, killing Pelkey with one bullet, and the state charged him with murder, the court memorandum said. Ultimately, Horcasitas was sentenced to 10 and a half years for manslaughter.
The judge overseeing the case, Todd Lang, appeared to become emotional after the AI presentation.
"As justifiably angry as the family is, I heard the forgiveness, and I know Mr. Horcasitas appreciated it, but so did I," Lang said. "I feel that that was genuine, that his obvious forgiveness of Mr. Horcasitas reflects the character I heard about today."
Pelkey was a veteran of the United States Army and devoutly religious, according to his obituary. He was involved in many church missions across the world.
He was survived by various family members and friends and his 'beloved' cat Sausage. 'The list of names of people that were affected by his life will never fully be known on this side of heaven,' his obituary read.
Wales said everyone who knew her brother agreed that the AI video 'was a true representation of the spirit and soul of how Chris would have thought about his own sentencing as a murder victim,' she told ABC15.
Pelkey's family told the outlet that they found peace in the process because they got to see him one last time. Pelkey's brother, John, said he felt 'waves of healing' from seeing his brother's face and believes Pelkey would have forgiven his killer.
'That was the man I knew,' John said.
Read the original article on People


Related Articles

UK judge warns of risk to justice after lawyers cited fake AI-generated cases in court

San Francisco Chronicle

36 minutes ago



LONDON (AP) — Lawyers have cited fake cases generated by artificial intelligence in court proceedings in England, a judge has said — warning that attorneys could be prosecuted if they don't check the accuracy of their research. High Court justice Victoria Sharp said the misuse of AI has 'serious implications for the administration of justice and public confidence in the justice system.'

In the latest example of how judicial systems around the world are grappling with the increasing presence of artificial intelligence in court, Sharp and fellow judge Jeremy Johnson chastised lawyers in two recent cases in a ruling on Friday. They were asked to rule after lower court judges raised concerns about 'suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked,' leading to false information being put before the court.

In a ruling written by Sharp, the judges said that in a 90 million pound ($120 million) lawsuit over an alleged breach of a financing agreement involving the Qatar National Bank, a lawyer cited 18 cases that did not exist. The client in the case, Hamad Al-Haroun, apologized for unintentionally misleading the court with false information produced by publicly available AI tools, and said he was responsible, rather than his solicitor Abid Hussain. But Sharp said it was 'extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around.'

In the other incident, a lawyer cited five fake cases in a tenant's housing claim against the London Borough of Haringey. Barrister Sarah Forey denied using AI, but Sharp said she had 'not provided to the court a coherent explanation for what happened.' The judges referred the lawyers in both cases to their professional regulators, but did not take more serious action.
Sharp said providing false material as if it were genuine could be considered contempt of court or, in the 'most egregious cases,' perverting the course of justice, which carries a maximum sentence of life in prison. She said in the judgment that AI is a 'powerful technology' and a 'useful tool' for the law. 'Artificial intelligence is a tool that carries with it risks as well as opportunities,' the judge said. 'Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained.'

