Officials plan to seek the death penalty for a Tennessee man charged with killing 4

CTV News, 3 days ago
This photo provided by Tennessee Bureau of Investigation shows Austin Robert Drummond. (Tennessee Bureau of Investigation via AP)
TIPTONVILLE, Tenn. — Prosecutors said Thursday they plan to seek the death penalty against a Tennessee man charged with killing the parents, grandmother and uncle of an infant found abandoned in a home's front yard.
Austin Robert Drummond, 28, appeared by a video feed from jail before a judge in Tiptonville, two days after he was arrested in the killings that set the mostly rural region of western Tennessee on edge.
At the hearing, District Attorney Danny Goodman told the judge that the state intends to seek the death penalty. Lake County General Sessions Judge Andrew T. Cook ordered Drummond held without bond because it is a capital case.
Drummond, wearing a black and white striped jumpsuit, was seated in a chair. He told the judge he operated a business and said he wants a speedy trial in the case. The judge entered a not guilty plea on his behalf because a lawyer had not yet been appointed.
The judge then ruled him indigent and granted a court-appointed attorney.
Drummond is charged with four counts of first-degree murder, aggravated kidnapping and weapons offenses. A weeklong search for Drummond ended when he was taken into custody in Jackson, about 70 miles (110 kilometres) southeast of the site of the July 29 slayings.
The ordeal began after an infant in a car seat was found in a front yard in the Tigrett area, roughly 40 miles (65 kilometres) from Tiptonville. The Dyer County Sheriff's Office said in a statement that a caller reported a minivan or midsize SUV had dropped the infant at a 'random individual's front yard.'
The sheriff's office later said it was working with investigators in neighboring Lake County, where four people had been found dead from gunshot wounds. Officials determined those people were the child's parents, grandmother and uncle.
Authorities later identified them as James M. Wilson, 21; Adrianna Williams, 20; Cortney Rose, 38; and Braydon Williams, 15. Wilson and Adrianna Williams were the infant's parents, while Rose was Adrianna and Braydon Williams' mother.
Investigators determined the four relatives had not been seen since the night before, according to Goodman, the district attorney. A relative had called 911 after finding two vehicles in a remote area, and the four bodies were found in nearby woods, Goodman said.
Tennessee Bureau of Investigation Director David Rausch has said it's believed the killings were a targeted attack by Drummond. The baby is safe and being cared for by other relatives.
Hours before Tuesday's arrest, Jackson Police posted a roughly 20-second surveillance video clip showing someone they said was Drummond wearing camouflage and carrying a firearm, trying to enter an unspecified building.
Jackson Police Chief Thom Corley said the calls from the public helped officers arrest Drummond, who had been staying in a vacant building near the woods where he was taken into custody.
Drummond's criminal history includes prison time for robbing a convenience store and threatening to go after jurors. He was also charged with attempted murder while behind bars.
The TBI has charged three other people it said helped Drummond after the killings.
Adrian Sainz, The Associated Press

Related Articles

‘No safety rules’: Concerns grow as AI-generated videos spread hate online

CTV News, 25 minutes ago

At first it appears to be a quirky video clip generated by artificial intelligence to make people laugh. In it, a hairy Bigfoot wearing a cowboy hat and a vest emblazoned with the American flag sits behind the wheel of a pickup truck. 'We are going today to the LGBT parade,' the apelike creature says with a laugh. 'You are going to love it.'

Things then take a violent and disturbing turn as Bigfoot drives through a crowd of screaming people, some of them holding rainbow flags. The clip posted in June on the AmericanBigfoot TikTok page has garnered more than 360,000 views and hundreds of comments, most of them applauding the video.

In recent months similar AI-generated content has flooded social media platforms, openly promoting violence and spreading hate against members of LGBTQ2S+, Jewish, Muslim and other minority groups. While the origin of most of those videos is unclear, their spread on social media is sparking outrage and concern among experts and advocates who say Canadian regulations cannot keep up with the pace of hateful AI-generated content, nor adequately address the risks it poses to public safety.

Egale Canada, an LGBTQ2S+ advocacy organization, says the community is worried about the rise of transphobic and homophobic misinformation content on social media. 'These AI tools are being weaponized to dehumanize and discredit trans and gender diverse people and existing digital safety laws are failing to address the scale and speed of this new threat,' executive director Helen Kennedy said in a statement.

Rapidly evolving technology has given bad actors a powerful tool to spread misinformation and hate, with transgender individuals being targeted disproportionately, Kennedy said. 'From deepfake videos to algorithm-driven amplification of hate, the harms aren't artificial – they're real.'

The LGBTQ2S+ community isn't the only target, said Evan Balgord, executive director of the Canadian Anti-Hate Network. Islamophobic, antisemitic and anti-South Asian content made with generative AI tools is also widely circulating on social media, he said.

'When they create the environment where there's a lot of celebration of violence towards those groups, it does make violence towards those groups happening in person or on the streets more likely,' Balgord warned in a phone interview.

Canada's digital safety laws were already lagging behind and advancements in AI have made things even more complicated, he said. 'We have no safety rules at all when it comes to social media companies, we have no way of holding them accountable whatsoever.'

Bills aimed at addressing harmful online content and establishing a regulatory AI framework died when Parliament was prorogued in January, said Andrea Slane, a legal studies professor at Ontario Tech University who has done extensive research on online safety. Slane said the government needs to take another look at online harms legislation and reintroduce the bill 'urgently.' 'I think Canada is in a situation where they really just need to move,' she said.

Justice Minister Sean Fraser told The Canadian Press in June that the federal government will take a 'fresh' look at the Online Harms Act but it hasn't decided whether to rewrite or simply reintroduce it. Among other things, the bill aimed to hold social media platforms accountable for reducing exposure to harmful content.
A spokesperson for the newly created Ministry of Artificial Intelligence and Digital Innovation said the government is taking the issue of AI-generated hateful content seriously, especially when it targets vulnerable minority groups. Sofia Ouslis said existing laws do provide 'important protections,' but admitted they didn't aim to address the threat of generative AI when they were designed.

'There's a real need to understand how AI tools are being used and misused — and how we can strengthen the guardrails,' she said in a statement. 'That work is ongoing.' The work involves reviewing existing frameworks, monitoring court decisions 'and listening closely to both legal and technological experts,' Ouslis said.

She added that Prime Minister Mark Carney's government has also committed to making the distribution of non-consensual sexual deepfakes a criminal offence. 'In this fast-moving space, we believe it's better to get regulation right than to move too quickly and get it wrong,' she said, noting that Ottawa is looking to learn from the European Union and the United Kingdom.

Slane said the European Union has been ahead of others in regulating AI and ensuring digital safety, but despite being at the 'forefront,' there is a feeling there that more needs to be done.

Experts say regulating content distributed by social media giants is particularly difficult because those companies aren't Canadian. Another complicating factor is the current political climate south of the border, where U.S. tech companies are seeing reduced regulations and restrictions, making them 'more powerful and feeling less responsible,' said Slane.

Although generative AI has been around for a few years, there's been a 'breakthrough' in recent months making it easier to produce good quality videos using tools that are mostly available for free or at a very low price, said Peter Lewis, Canada Research Chair in trustworthy artificial intelligence. 'I've got to say it's really accessible to almost anybody with a little bit of technical knowledge and access to the right tools right now,' he said.

Lewis, who is also an assistant professor at Ontario Tech University, said that large language models such as ChatGPT have implemented safeguards in an effort to filter out harmful or illegal content. But more needs to be done in the video space to create such guardrails, he said. 'You and I could watch the video and probably be horrified,' he said, adding 'it's not clear necessarily that the AI system has the ability to sort of reflect on what it has created.'

Lewis said that while he isn't a legal expert, he believes existing laws can be used to combat the online glorification of hate and violence in the AmericanBigfoot videos. But he added the rapid development of generative AI and widespread availability of new tools 'does call for new technological solution' and collaboration between governments, consumers, advocates, social platforms and AI app developers to address the problem.

'If these things are being uploaded…we need really robust responsive flagging mechanisms to be able to get these things off the internet as quickly as possible,' he said. Lewis said using AI tools to detect and flag such videos helps, but it won't resolve the issue. 'Due to the nature of the way these AI systems work, they're probabilistic, so they don't catch everything.'

This report by The Canadian Press was first published Aug. 10, 2025.

Sharif Hassan, The Canadian Press
