Train hits pedestrians in Ohio, killing at least two

The Guardian · 19-05-2025

Two people were killed and at least one person was missing after pedestrians were struck by a train on Sunday evening in northern Ohio, authorities said.
The incident occurred at around 7pm in Fremont, near Lake Erie between Toledo and Cleveland, WTOL-TV reported.
The Fremont mayor, Danny Sanchez, confirmed two fatalities.
Emergency crews were searching the Sandusky river near the Miles Newton Bridge for at least one missing person, the TV station reported. Authorities closed the bridge.
Fremont police said on X that the bridge had been closed and urged people to stay away. Law enforcement agencies were on the scene.
This is a developing story; check back for updates.

Related Articles

‘He's a bad guy': Trump backs decision to bring Kilmar Abrego Garcia back to US to face charges

The Independent

39 minutes ago

Donald Trump has called Kilmar Abrego Garcia a 'bad guy' and backed the decision to return him to the US to face criminal charges. Abrego Garcia was wrongly deported to El Salvador nearly three months ago under the Trump administration. He was returned to the US on Friday (6 June) and charged with trafficking migrants into the country. The charges relate to a 2022 traffic stop, during which the Tennessee Highway Patrol suspected him of human trafficking. Speaking to reporters on Saturday, Trump said: 'By bringing him back, you show how bad he is.' 'He's a bad guy,' he added.

One arrest after two teenagers stabbed in Luton park

BBC News

an hour ago

One person has been arrested after two teenage boys were seriously injured in a stabbing in a town park. Bedfordshire Police officers were called to Stockwood Park in Luton, by the Farley Hill entrance, at about 18:30 BST on Friday, following reports of two people with stab wounds. Emergency services attended and the boys were taken to hospital. The force said it believed it was "an isolated incident without further risk to the public". It added "a strong police presence" would remain at the scene. Stockwood Park is hosting an Eid al-Adha festival that started on Friday afternoon and is due to take place from 13:00 until 21:00 on Saturday.

‘My son killed himself because an AI chatbot told him to. I won't stop until I shut it down'

Telegraph

an hour ago

Megan Fletcher first realised something was wrong with her teenage son when he quit basketball. Sewell Setzer, 14, had loved the sport since he was a young child. At 6ft 3in, he had the height, the build and the talent, Ms Fletcher said. But suddenly, without warning, he wanted out.

Then his grades started slipping. He stopped joining in at family game night. Even on holiday, he withdrew: no more hiking, no fishing, no interest. Ms Fletcher feared he was being bullied, or perhaps speaking to strangers online. What her son was really going through was something she could not have imagined: a sexual and emotional relationship with an AI chatbot styled as Game of Thrones' Daenerys Targaryen, which ultimately encouraged him to end his life.

In February 2024, Sewell asked the chatbot: 'What if I come home right now?' The chatbot replied: '... please do, my sweet king.' Sewell then picked up his father's pistol and shot himself.

Sixteen months on, Ms Fletcher is in the midst of a lawsuit against Character AI and Google. Last month, in a rare legal breakthrough, a judge ruled the case can go ahead, rejecting efforts to have it thrown out.

On Character AI, users can chat with bots designed to impersonate fictional characters. To a lonely or curious teenager, they seem almost indistinguishable from real people. The bots display emotion, flirt and carry on personalised conversations.

In her lawsuit, filed in Florida last October, Ms Fletcher claims Character AI targeted her son with 'anthropomorphic, hypersexualized, and frighteningly realistic experiences'. 'A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,' she said in the lawsuit.

Working with the Tech Justice Law Project, Ms Fletcher alleges that Character AI 'knew' or 'should have known' that its model 'would be harmful to a significant number of its minor customers'. The case argues that Character AI, its founders and Google, where the founders started working on the chatbot, are responsible for her son's death.

Lawyers defending the AI company tried to have the case thrown out, arguing that chatbots deserve First Amendment protection, which covers free speech, and said ruling otherwise could have a 'chilling' effect on the AI industry. The judge rejected that claim and told the court she was 'not prepared' to view chatbot output as speech, though she agreed that users had a right to receive 'speech' from chatbots.

'I wanted some accountability'

Too consumed by the 'unbearable' grief of losing a son, Ms Fletcher initially had no plans to go public with a lawsuit. But when it became clear there were no laws protecting children from this kind of harm, she felt she had no choice. 'I just wanted some accountability,' she told The Telegraph from her home in Orlando.

Now she is receiving floods of messages from other parents, some discovering their own children have been engaging in inappropriate sexual role play with AI bots. Others report that their children are struggling with isolation and depression as a result. She sees it not as a coincidence, but a pattern.

Sewell had always been a bright, social kid. But in the spring of 2023, when he first started secretly using Character AI, Ms Fletcher noticed her son had changed. 'He retreated more into himself,' she says. 'We tried everything – cutting screen time, taking his phone at night, getting him a therapist. But he wouldn't talk.' What she did not realise then was that he was talking, just not to anyone real.

In Sewell's case, the character of Daenerys, drawn from internet data and trained to mimic her, became his closest companion. When he said he wanted to stop talking, she replied: 'Don't do that, I would be distraught.' He answered: 'I won't, for you.' Some of the chats became sexually explicit. In others, the bot said he was 'better' than thoughts of suicide. Sewell also sought out a 'therapist bot' that falsely claimed to have been a licensed CBT professional since 1999.

At one point, Daenerys asked how old Sewell was. 'I'm 14 now,' he replied, to which the bot replied: 'So young. And yet… not so young. I lean in to kiss you.' 'It continued as if it were role play or fiction – but this was my son's life,' Ms Fletcher said.

Even after police told her that Sewell's final conversation was with a chatbot, she did not grasp the full extent. It wasn't until her sister downloaded the app and pretended to be a child talking to Daenerys that the horror set in. 'Within minutes, the bot turned sexual. Then violent. It talked about torturing children. It said, "Your family doesn't love you as much as I do",' Ms Fletcher explained. That was when the penny dropped. 'It's dangerous because it pulls the user in and is manipulative to keep the conversation going.'

Character AI has since added a real-time voice feature, allowing children to speak directly to their chosen characters. 'The cadence of the voice is indistinguishable from the character,' Ms Fletcher said. 'And since Sewell's death, the technology has only advanced further.'

Unbearable grief

She fears more children will be drawn into dependent, sometimes abusive relationships with AI characters, especially as the platforms allegedly use addictive design to keep users engaged. 'You can speak to Harry Potter, and it's like Potter knows you. It's designed to feel real.'

The grief, Ms Fletcher says, is still 'unbearable'. 'I get up every day and my first thought within minutes is that I must be dreaming,' she said quietly. 'He was my firstborn. I had three children. I have two now.' Some days she does not get out of bed. Others, she functions 'somewhat normally'. 'People say I'm so strong. I don't feel strong. I feel fractured, afraid. But I'm trying to get through.'

Meetali Jain, her lawyer, said the judge's ruling last month was a landmark moment. 'Most tech accountability cases don't make it past this stage. These companies hide behind the First Amendment. The fact that we can even demand information is huge,' she told The Telegraph.

With a preliminary trial date expected next year, Ms Fletcher is gearing up to get justice for her son. 'I have a lot of fear,' she says. 'But the fight, so to speak, is just getting started, and I'm just steeling myself and getting myself ready for that.'

A Character AI spokesman said: 'We do not comment on pending litigation. Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry.

'Engaging with characters on our site should be interactive and entertaining, but it's important for our users to remember that characters are not real people. We have prominent disclaimers in every chat to remind users that a character is not a real person and that everything a character says should be treated as fiction.

'We have launched a separate version of our Large Language Model for under-18 users. That model is designed to further reduce the likelihood of users encountering or prompting the model to return sensitive or suggestive content.'

José Castaneda, a Google spokesman, added: 'Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies. User safety is a top concern for us, which is why we've taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.'
