The Judge's Reaction to an AI-Generated Victim Impact Statement Was Not What We Expected
A slain Arizona man's family used AI to bring him back from the dead for his killer's sentencing hearing — and the judge presiding over the case apparently "loved" it.
As 404 Media reports, Judge Todd Lang was flabbergasted when he saw the AI-generated video of victim Chris Pelkey that named and "forgave" the man who killed him in 2021.
"To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances," the video, which Peskey's sister Stacey Wales generated, intoned. "In another life we probably could have been friends. I believe in forgiveness, in God who forgives, I always have. And I still do."
Horcasitas was found guilty earlier this year, and his sentence hinged, as in many cases, on various factors, including impact statements from the victim's family.
As Wales told 404 Media, her husband Tim was initially freaked out when she introduced the idea of creating a digital clone of her brother for the hearing and told her she was "asking a lot."
Ultimately, the video was accepted in the sentencing hearing, the first known instance of an AI clone of a deceased person being used in such a way.
And the gambit appears to have paid off.
"I loved that AI, and thank you for that," Lang said, per a video of his pre-sentencing speech. "As angry as you are, and as justifiably angry as the family is, I heard the forgiveness, and I know Mr. Horcasitas could appreciate it, but so did I."
"I feel like calling him Christopher as we've gotten to know him today," Lang continued. "I feel that that was genuine, because obviously the forgiveness of Mr. Horcasitas reflects the character I heard about today."
Lang acknowledged that although the family itself "demanded the maximum sentence," the AI Pelkey "spoke from his heart" and didn't call for such punishment.
"I didn't hear him asking for the maximum sentence," the judge said.
Horcasitas' lawyer also referenced the Pelkey avatar when defending his client, saying that he, too, believes his client and the man he killed could have been friends had circumstances been different.
That entreaty didn't seem to sway Lang, however. He ended up sentencing Horcasitas to 10.5 years for manslaughter, a year more than prosecutors were seeking.
It's a striking reaction, suggesting that some people are not only open to AI being used this way but actively in favor of it, and a sign that the gulf between AI skeptics and adopters may be widening.
More on AI fakery: Slop Farmer Boasts About How He Uses AI to Flood Social Media With Garbage to Trick Older Women