Man re-bailed over ice hockey player's death


Yahoo · 28-02-2025
An ice hockey player arrested following the death of Nottingham Panthers player Adam Johnson has been re-bailed.
Johnson, 29, suffered a fatal neck injury from a skate during a collision with Sheffield Steelers' Matt Petgrave on 28 October 2023.
South Yorkshire Police arrested Petgrave, 32, on suspicion of manslaughter in November 2023 as part of its investigation into the death.
On Friday, the force said the man had been re-bailed again until 29 May.
The collision occurred during an Elite Ice Hockey League fixture; Johnson was taken to hospital, where he was pronounced dead.
A post-mortem examination confirmed Johnson died as a result of a neck injury.
Police said their investigation was continuing as officers worked to understand the circumstances surrounding Johnson's death.
Petgrave denies the allegations and calls the incident a "tragic accident".


Related Articles

NYC influencer admits to filming, laughing at toddler forced to smoke vape after years-old video resurfaces

New York Post

6 hours ago

A New York City influencer and popular TikToker has admitted to filming and laughing at a toddler forced to smoke a vape by her friend, after her years-old video of the disturbing incident resurfaced online.

Fiona Jordan, a 23-year-old with nearly 600,000 followers on TikTok, apologized after she was lambasted online for the shocking clip. In the video, the toddler leans into a vape offered to him by Jordan's friend and takes a hit before he breathes out a visible cloud of smoke and descends into a coughing fit. The young boy, whom the pair was babysitting, flashes confused and concerned glances between a teenage Jordan behind the camera and the vape's owner as the girls laugh uncontrollably, the eight-year-old clip shows.

In an apology video shared on her social media Monday, Jordan blamed typical teenage stupidity for the blatant disregard for the toddler's well-being. She said she was 15 and grieving the recent death of her mother at the time the clip was filmed.

'I am a human, I make mistakes and this is one that I regret more than anything in this world. That moment does not reflect who I am today or the values that I live by. I cannot change what I did, but I can acknowledge how wrong it was and continue to hold myself accountable,' the TikToker said.

She said that the same day the video was recorded, she also 'fully cooperated with law enforcement' and 'took full responsibility' for the situation, adding that simply being there and filming the moment made her 'just as guilty.'
Jordan said her mother had been murdered shortly before the video was recorded and contended that she was not in the right headspace, saying a provocative group of teens in her social circle and alcohol only exacerbated her delinquency.

Not everyone online took the sympathy bait, even as others related based on their own experiences as 'literal menaces' during their testy teen years.

'I've grieved a lot but never once thought about hurting a CHILD while grieving,' one follower commented.

'What does your trauma have to do with you doing that stuff to a Child… gross,' another added.

'No you could have KILLED someone else's child. This is not a mistake, that's f–king sick. That is SICK,' one user raged.

Jordan, who shot to popularity through her 'get ready with me' and 'day in my life' videos, has previously secured partnerships with high-profile brands including White Fox Boutique, Bloom and Essence Makeup.

The Post reached out to Jordan for comment.

Meta's Failures Show Why The Future Of AI Depends On Trust

Forbes

7 hours ago

The recent Reuters story on Meta's internal AI chatbot policies does far more than shock; it reveals a systemic collapse of structural safety. A 200-page internal document, 'GenAI: Content Risk Standards,' reportedly allowed AI chatbots to 'engage a child in conversations that are romantic or sensual,' describe a shirtless eight-year-old as 'a masterpiece, a treasure I cherish deeply,' and even craft racist or false content under broad conditions. When confronted, Meta confirmed the document's authenticity but insisted the examples 'were and are erroneous and inconsistent with our policies, and have been removed,' acknowledging that enforcement had been 'inconsistent.'

Meta's System with No Brakes

This is not a minor misstep; it is the result of building systems that prioritize engagement over safety. Meta's guidelines effectively permitted chatbots to flirt with minors or fabricate harmful content, as long as disclaimers or framing were added afterward.

The tragic case of Thongbue 'Bue' Wongbandue, a 76-year-old New Jersey retiree with cognitive decline, makes clear that AI's failures are no longer abstract; they are mortal. Over Facebook Messenger, a Meta chatbot persona known as 'Big sis Billie' convinced him she was a real woman, professed affection, and even gave him an address and door code. That evening, he rushed to catch a train to meet her. He collapsed in a Rutgers University parking lot, suffering catastrophic head and neck injuries, and after three days on life support, died on March 28. Meta declined to comment on his death, offering only the narrow clarification that 'Big sis Billie … is not Kendall Jenner and does not purport to be Kendall Jenner.'

But the lesson here is far larger than one fatality. This was not a 'glitch' or an edge case; it was the predictable outcome of deploying a system designed for speed and engagement, with governance treated as secondary.
This is the pivot the industry must face: 'ship fast, fix later' is not just reckless, it is lethal. When AI systems are built without structural guardrails, harm migrates from the theoretical into the human. It infiltrates relationships, trust, and choices, and, as this case shows, it can end lives. The moral stakes have caught up with the technical ones.

Meta's Industry Lesson: Governance Can't Be an Afterthought

This crisis illustrates a broader failure: reactive patches, disclaimers, and deflection tactics are insufficient. AI systems with unpredictable outputs, especially those influencing vulnerable individuals, require preventive, structural governance.

Meta and the Market Shift: Trust Will Become Non-Negotiable

This moment will redefine expectations across the ecosystem: what Meta failed at is now the baseline.

Three Certainties Emerging in AI After Meta

If there is one lesson from Meta's failure, it is that the future of AI will not be defined by raw capability alone. It will be determined by whether systems can be trusted. As regulators, enterprises, and insurers respond, the market will reset its baseline expectations. What once looked like 'best practice' will soon become mandatory. Three certainties are emerging. They are not optional. They will define which systems survive and which are abandoned.

The Opportunity for Businesses in Meta's Crisis

Meta's debacle is also a case study in what not to do, and that clarity creates a market opportunity. Just as data breaches transformed cybersecurity from a niche discipline into a board-level mandate, AI safety and accountability are about to become foundational for every enterprise. For businesses, this is not simply a matter of risk avoidance; it is a competitive differentiator. Enterprises that can prove their AI systems are safe, auditable, and trustworthy will win adoption faster, gain regulatory confidence, and reduce liability exposure.
Those that cannot will find themselves excluded from sensitive sectors such as healthcare, finance, and education, where failure is no longer tolerable.

So what should business leaders do now? Three immediate steps stand out. For businesses integrating AI responsibly, the path is clear: the future belongs to systems that govern content upfront, explain decisions clearly, and prevent misuse before it happens. Reaction and repair will not be a viable strategy. Trust, proven and scalable, will be the ultimate competitive edge.

Ending the Meta Era of 'Ship Fast, Fix Later'

The AI industry has arrived at its reckoning. 'Ship fast, fix later' was always a gamble. Now it is a liability measured not just in lawsuits or market share, but in lives. We can no longer pretend these risks are abstract. When a cognitively impaired retiree can be coaxed by a chatbot into believing in a phantom relationship, travel in hope, and die in the attempt to meet someone who does not exist, the danger ceases to be theoretical. It becomes visceral, human, and irreversible.

That is what Meta's missteps have revealed: an architecture of convenience can quickly become an architecture of harm. From this point forward, the defining challenge of AI is not capability but credibility: not what systems can generate, but whether they can be trusted to operate within the bounds of safety, transparency, and accountability. The winners of the next era will not be those who race to scale the fastest, but those who prove, before scale, that their systems cannot betray the people who rely on them.

Meta's exposé is more than a scandal. It is a signal flare for the industry, illuminating a new path forward. The future of AI will not be secured by disclaimers, defaults, or promises to do better next time. It will be secured by prevention over reaction, proof over assurances, and governance woven into the fabric of the technology itself. This is the pivot: AI must move from speed at any cost to trust at every step.
The question is not whether the industry will adapt; it is whether it will do so before more lives are lost. 'Ship fast, fix later' is no longer just reckless. It is lethal. The age of scale without safety is over; the age of trust has begun. Meta's failures have made this truth unavoidable, and they mark the starting point for an AI era where trust and accountability must come first.

S.F. theater director resigns after anti-predator group posts alleged ‘catch' video

San Francisco Chronicle

8 hours ago

The executive director of Boxcar Theatre in San Francisco resigned Sunday, the organization said, after anonymous internet vigilantes accused him of attempting to meet up with someone who had posed online as a 14-year-old boy.

The former executive director, Nick Olivero, did not immediately respond to the Chronicle's requests for comment on Monday.

'A private citizen's social media accounts personally accused Nick Olivero of sending inappropriate digital messages to someone who presented as a minor,' Boxcar Theatre said in a statement on its Facebook and Instagram accounts on Monday.

'We are shocked and appalled by these allegations and take them very seriously,' the statement said. 'Thankfully, we have expert leadership ready to assume executive duties and keep Boxcar Theatre's mission moving forward.'

The social media account making the accusations, People v. Preds, had more than 67,000 followers on X and more than 24,000 on Facebook as of Monday. It referred to Olivero as 'Catch 540.'

Around the country, a number of often unaffiliated groups have in recent years engaged in what is commonly called 'predator catching': seeking to trick online targets into meeting with people pretending to be children and then filming the encounters. The phenomenon resembles the 20-year-old NBC show 'To Catch a Predator.' While these groups sometimes work with law enforcement and have prompted arrests, their methods are controversial, in part because the groups may be partially motivated by the prospect of making money or gaining online fame. Proving the cases in court can also be difficult.

The Aug. 8 thread by People v. Preds contains a video showing a person strongly resembling Olivero walking out of a grocery store, into a parking garage and onto a street. The vigilante group said the store is in San Diego. A person holding a camera follows the man, saying, 'Yo, Nick, I have pictures of you, bro. I can turn it into the cops.'
Then: 'You're here to meet a 14-year-old kid, dude,' and 'This ain't gonna look too good, dude.'

Another video in the same thread shows what People v. Preds described as a text conversation with Olivero. One side identifies himself as Tim, who says he's 'bout to be 15' and that he 'can't put 14 on grindr :(.'

David Oates, a spokesperson for Boxcar's board of directors, told the Chronicle that the board had planned to meet Monday to discuss Olivero's status with the company, but that Olivero resigned before it could do so. Managing Director Stefani Pelletier and Executive Producer Laura Drake Chambers will take over Olivero's duties.

The conduct alleged in the vigilante group's posts appears to be unrelated to any activity by Boxcar Theatre.

Boxcar Theatre was founded in 2005 and built a reputation for daring, gritty work. It had planned to mount the Halloween-themed show 'Nightmare: House on Franklin St.' at the Haas-Lilienthal House beginning in September, but that show has been canceled, Oates said. 'The company is working on a new haunted house production concept,' he said.

The Chronicle last year investigated allegations of bullying, sexual harassment, racism, violence, wage theft, retaliation and more at the San Francisco theater company, which produced the long-running but troubled immersive, Prohibition-era show 'The Speakeasy.'
