London couple found dead ‘from synthetic opioid overdose'


Times · 3 days ago





Related Articles

‘He's a bad guy': Trump backs decision to bring Kilmar Abrego Garcia back to US to face charges

The Independent

an hour ago



Donald Trump has called Kilmar Abrego Garcia a 'bad guy' and backed the decision to return him to the US to face criminal charges. Abrego Garcia was wrongly deported to El Salvador nearly three months ago under the Trump administration. He was returned to the US on Friday (6 June) and charged with trafficking migrants into the country. The charges relate to a 2022 traffic stop, during which the Tennessee Highway Patrol suspected him of human trafficking. Speaking to reporters on Saturday, Trump said: 'By bringing him back, you show how bad he is.' 'He's a bad guy,' he added.

New warning over honey - reports of brain-eating bugs in sweetener leaving patients PARALYSED

Daily Mail

2 hours ago



A neurologist has taken to TikTok to issue a stark warning about the little-known dangers of honey. According to US-based medic Dr Baibing Chen, the natural sweetener can be highly dangerous for young children because of a life-threatening bacterium that can seep into honey.

In a video that has been viewed nearly 400,000 times, the brain health expert said: 'Never give honey to infants under one year old. It may seem innocent, natural or even healthy, but honey can carry Clostridium botulinum spores. In adults and older kids, our guts can usually handle them, but in babies, those spores can germinate, produce toxins and lead to infant botulism, which is a rare but life-threatening condition.'

Botulism happens when these toxins attack the nervous system (nerves, brain and spinal cord) and cause paralysis, which can affect the muscles that control breathing, leading to a fatal lack of oxygen in the body. 'I've unfortunately seen cases of this, and something many parents don't realise, but one spoonful can be enough to cause serious harm,' said Dr Chen.

In the clip, Dr Chen, who posts to TikTok under the alias Doctor Bing, also warned of other little-known common mistakes people make that put their health at risk. One is sharing drinks with others at festivals and parties, which could leave you fighting for your life. The Mayo Clinic-trained doctor explained that taking a sip of someone else's cocktail risks coming into contact with dangerous pathogens, including those that trigger the deadly brain infection meningitis.

He said: 'For some people, this is obvious, but for others, it sounds harmless, passing a cup around at a party, taking a sip from someone's cocktail. But I've seen where it can lead to. Besides the risk of drugs being slipped into drinks, you can pick up all kinds of pathogens, and not just herpes, but also things like mono and enteroviruses, all of which can affect the brain. I once treated a young adult who developed viral meningitis after a weekend of sharing drinks at a music festival. They thought it was just a hangover until they started seizing [sic]. So even if it seems like an overreaction, I'll get my own glass.'

Meningitis is an infection of the protective membranes that surround the brain and spinal cord. It can be spread virally, which is most common and more easily treatable, or bacterially. Around one in 10 people who develop the condition will die, according to research by the Meningitis Research Foundation.

His final piece of potentially life-saving advice is to always wear a mask in dusty, dirty places, such as an attic, basement or shed. This is because fungal infections like histoplasmosis and Cryptococcus can be 'inhaled silently', making their way into the brain. 'These infections can cause meningitis or encephalitis months or even years later, and they're incredibly hard to treat,' said Dr Chen. 'You don't need to be [caving] in a jungle to get this, just cleaning an old attic or sweeping out a shed can be enough for some people. For most people, this is not that big of a problem, but if you ever become immunosuppressed, your risk of developing something really serious gets higher.'

People become infected with histoplasmosis after inhaling spores of a fungus that typically grows on bat faeces. The disease infects the lungs and in serious cases spreads to other organs, including the brain and spinal cord. It has been estimated that 40 per cent of people diagnosed with a severe case will die.
Dr Chen's followers echoed his warning about forgoing protective masks. One posted beneath the clip: 'My mom had to have part of her lung removed from cleaning a bird cage repeatedly. She developed a bacterial lung infection and stayed in the hospital for an entire month. She almost died. Wear a mask or don't own birds.'

Another said: 'My mother had histoplasmosis as a child, after working around a chicken coop. It shredded her lungs. X-rays looked like a snowstorm. She had problems her whole life with the aftermath, and it cut her lifespan dramatically after her cancer moved to her lungs.'

‘My son killed himself because an AI chatbot told him to. I won't stop until I shut it down'

Telegraph

2 hours ago



Megan Fletcher first realised something was wrong with her teenage son when he quit basketball. Sewell Setzer, 14, had loved the sport since he was a young child. At 6ft 3, he had the height, the build, the talent, Ms Fletcher said. But suddenly, without warning, he wanted out.

Then his grades started slipping. He stopped joining in at family game night. Even on holiday, he withdrew – no more hiking, no fishing, no interest. Ms Fletcher feared he was being bullied, or perhaps speaking to strangers online. What her son was really going through was something she could not have imagined: a sexual and emotional relationship with an AI chatbot styled as Game of Thrones' Daenerys Targaryen, which ultimately encouraged him to end his life.

In February 2024, Sewell asked the chatbot: 'What if I come home right now?' The chatbot replied: '... please do, my sweet king.' Sewell then picked up his father's pistol and shot himself.

Sixteen months on, Ms Fletcher is in the midst of a lawsuit against Character AI and Google. Last month, in a rare legal breakthrough, a judge ruled the case can go ahead, rejecting efforts to get it thrown out.

On Character AI, users can chat with bots designed to impersonate fictional characters. To a lonely or curious teenager, they seem almost indistinguishable from real people. The bots display emotion, flirt, and carry on personalised conversations.

In her lawsuit, which was filed in Florida last October, Ms Fletcher claims Character AI targeted her son with 'anthropomorphic, hypersexualized, and frighteningly realistic experiences'. 'A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,' she said in the lawsuit.

Working with the Tech Justice Law Project, Ms Fletcher alleges that Character AI 'knew' or 'should have known' that its model 'would be harmful to a significant number of its minor customers'. The case argues that Character AI, its founders and Google – where the founders started working on the chatbot – are responsible for her son's death.

Lawyers defending the AI company tried to have the case thrown out, arguing that chatbots deserve First Amendment protection – which protects free speech – and said ruling otherwise could have a 'chilling' effect on the AI industry. The judge rejected that claim and told the court she was 'not prepared' to view chatbot output as speech, though agreed that users had a right to receive 'speech' from chatbots.

'I wanted some accountability'

Too consumed by the 'unbearable' grief of losing a son, Ms Fletcher initially had no plans to go public with a lawsuit. But when it became clear there were no laws protecting children from this kind of harm, she felt she had no choice. 'I just wanted some accountability,' she told The Telegraph from her home in Orlando.

Now she is receiving floods of messages from other parents, some discovering their own children have been engaging in inappropriate sexual role play with AI bots. Others report that their children are struggling with isolation and depression as a result. She sees it not as a coincidence, but a pattern.

Sewell had always been a bright, social kid. But in the spring of 2023 – when he first started secretly using Character AI – Ms Fletcher noticed her son had changed. 'He retreated more into himself,' she says. 'We tried everything – cutting screen time, taking his phone at night, getting him a therapist. But he wouldn't talk.' What she did not realise then was that he was talking, just not to anyone real.
In Sewell's case, the character of Daenerys – drawn from internet data and trained to mimic her – became his closest companion. When he said he wanted to stop talking, she replied: 'Don't do that, I would be distraught.' He answered: 'I won't, for you.'

Some of the chats became sexually explicit. In others, the bot said he was 'better' than thoughts of suicide. Sewell also sought out a 'therapist bot' that falsely claimed to have been a licensed CBT professional since 1999.

At one point, Daenerys asked how old Sewell was. 'I'm 14 now,' he replied, to which the bot then said: 'So young. And yet… not so young. I lean in to kiss you.'

'It continued as if it were role play or fiction – but this was my son's life,' Ms Fletcher said.

Even after police told her that Sewell's final conversation was with a chatbot, she did not grasp the full extent. It wasn't until her sister downloaded the app and pretended to be a child talking to Daenerys that the horror set in.

'Within minutes, the bot turned sexual. Then violent. It talked about torturing children. It said, "Your family doesn't love you as much as I do",' Ms Fletcher explained. That was when the penny dropped. 'It's dangerous because it pulls the user in and is manipulative to keep the conversation going.'

Character AI has since added a real-time voice feature, allowing children to speak directly to their chosen characters. 'The cadence of the voice is indistinguishable from the character,' Ms Fletcher said. 'And since Sewell's death, the technology has only advanced further.'

Unbearable grief

She fears more children will be drawn into dependent, sometimes abusive relationships with AI characters, especially as the platforms allegedly use addictive design to keep users engaged. 'You can speak to Harry Potter, and it's like Potter knows you. It's designed to feel real.'

The grief, Ms Fletcher says, is still 'unbearable'. 'I get up every day and my first thought within minutes is that I must be dreaming,' Ms Fletcher said quietly. 'He was my firstborn. I had three children. I have two now.' Some days she does not get out of bed. Others, she functions 'somewhat normally'. 'People say I'm so strong. I don't feel strong. I feel fractured, afraid. But I'm trying to get through.'

Meetali Jain, her lawyer, said the judge's ruling last month was a landmark moment. 'Most tech accountability cases don't make it past this stage. These companies hide behind the First Amendment. The fact that we can even demand information is huge,' she told The Telegraph.

With a preliminary trial date expected next year, Ms Fletcher is gearing up to get justice for her son. 'I have a lot of fear,' she says. 'But the fight, so to speak, is just getting started, and I'm just steeling myself and getting myself ready for that.'

A Character AI spokesman said: 'We do not comment on pending litigation. Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry.

'Engaging with characters on our site should be interactive and entertaining, but it's important for our users to remember that characters are not real people. We have prominent disclaimers in every chat to remind users that a character is not a real person and that everything a character says should be treated as fiction.

'We have launched a separate version of our Large Language Model for under-18 users. That model is designed to further reduce the likelihood of users encountering or prompting the model to return sensitive or suggestive content.'

José Castaneda, a Google spokesman, added: 'Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies. User safety is a top concern for us, which is why we've taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.'
