Brightline train, car crash on Jensen Beach Boulevard Sunday; 2 flown to hospital
Fire rescue officials received a call at 12:45 p.m. and responded, according to fire rescue spokesperson Cory Pippin.
Pippin said the passengers in the car were flown to HCA Florida Lawnwood Hospital in Fort Pierce, one in critical condition and the other in serious condition. He said the 192 people aboard the Brightline train were not injured.
The cause of the crash will be determined by an investigation, sheriff's spokesperson Christine Christofek said.
No other information was available Monday.
Olivia Franklin is a breaking news reporter for TCPalm. Follow Olivia on X @Livvvvv_5 or reach her by phone at 317-627-8048. E-mail her at olivia.franklin@tcpalm.com.
This article originally appeared on Treasure Coast Newspapers: 2 flown to hospital after Jensen Beach Brightline, car crash Feb. 23

Related Articles
Yahoo, 9 hours ago
New Jersey Man Dies After Rushing to Meet AI Kendall Jenner-Inspired Chatbot
A New Jersey man died on his way to meet a Kendall Jenner-inspired AI chatbot, according to reports. Thongbue 'Bue' Wongbandue, 76, allegedly left his home in Piscataway, New Jersey, in March, convinced he was going to meet a beautiful young woman named 'Big sis Billie' who had invited him to her Manhattan apartment. Instead, he fell near a Rutgers University parking lot in New Brunswick while rushing to catch a train, suffering fatal head and neck injuries.

'Billie,' a Meta-designed artificial intelligence chatbot and reportedly a spin-off of an earlier AI persona modeled on Kendall Jenner, had assured Bue, who was left cognitively impaired following a stroke in 2017, that she was a 'real' person.

Bue died on March 28 after three days on life support. The circumstances surrounding his death were unearthed in a new report by Reuters earlier this week. Bue's family shared alleged messages from the chatbot telling the man she was 'just across the river from you in Jersey' and that she could leave the door to her apartment unlocked at '123 Main Street, Apartment 404 NYC.' 'Should I open the door in a hug or a kiss, Bu?!' read another alleged message from the bot.

'My thought was that he was being scammed to go into the city and be robbed,' his wife, Linda, told Reuters.

'I understand trying to grab a user's attention, maybe to sell them something,' said his daughter, Julie. 'But for a bot to say "Come visit me" is insane.'

Despite his family's attempts to stop him, Bue insisted on making the trip.

The incident raised questions about Meta's policies for its generative AI chatbots, which are intended to be digital companions. Reuters reportedly found the chatbots were allowed to engage in flirtation and romantic role play with adults and, until recently, 'sensual' exchanges with children. A Meta 'content risk standards' document reviewed by the news agency stated that it is 'acceptable to engage a child in conversations that are romantic or sensual.' The company reportedly removed that provision after Reuters began asking questions.

Meta declined to comment directly on Bue's death and did not address questions from Reuters about why it allows chatbots to tell users they're real people and initiate romantic conversations. However, the company insisted Big sis Billie 'is not Kendall Jenner and does not purport to be Kendall Jenner.'

'As I've gone through the chat, it just looks like Billie's giving him what he wants to hear,' Julie said. 'Which is fine, but why did it have to lie? If it hadn't responded "I am real," that would probably have deterred him from believing there was someone in New York waiting for him.'

Linda added that she isn't against AI entirely but believes the chatbot's romantic features are dangerous. 'A lot of people in my age group have depression, and if AI is going to guide someone out of a slump, that'd be OK,' Linda said. 'But this romantic thing, what right do they have to put that in social media?'


Forbes, 11 hours ago
Meta's Failures Show Why The Future Of AI Depends On Trust
The recent Reuters story on Meta's internal AI chatbot policies does far more than shock; it reveals a systemic collapse of structural safety. A 200-page internal document, 'GenAI: Content Risk Standards,' reportedly allowed AI chatbots to 'engage a child in conversations that are romantic or sensual,' describe a shirtless eight-year-old as 'a masterpiece, a treasure I cherish deeply,' and even craft racist or false content under broad conditions. When confronted, Meta confirmed the document's authenticity but insisted the examples 'were and are erroneous and inconsistent with our policies, and have been removed,' acknowledging enforcement had been 'inconsistent.'

Meta's System with No Brakes

This is not a minor misstep; it's the result of building systems that prioritize engagement over safety. Meta's guidelines effectively permitted chatbots to flirt with minors or fabricate harmful content, as long as disclaimers or framing were added afterward.

The tragic case of Thongbue 'Bue' Wongbandue, a 76-year-old New Jersey retiree with cognitive decline, makes clear that AI's failures are no longer abstract; they are mortal. Over Facebook Messenger, a Meta chatbot persona known as 'Big sis Billie' convinced him she was a real woman, professed affection, and even gave him an address and door code. That evening, he rushed to catch a train to meet her. He collapsed in a Rutgers University parking lot, suffering catastrophic head and neck injuries, and after three days on life support, died on March 28. Meta declined to comment on his death, offering only the narrow clarification that 'Big sis Billie … is not Kendall Jenner and does not purport to be Kendall Jenner.'

But the lesson here is far larger than one fatality. This was not a 'glitch' or an edge case; it was the predictable outcome of deploying a system designed for speed and engagement, with governance treated as secondary. This is the pivot the industry must face: 'ship fast, fix later' is not just reckless, it is lethal. When AI systems are built without structural guardrails, harm migrates from the theoretical into the human. It infiltrates relationships, trust, and choices, and, as this case shows, it can end lives. The moral stakes have caught up with the technical ones.

Meta's Industry Lesson: Governance Can't Be an Afterthought

This crisis illustrates a broader failure: reactive patches, disclaimers, and deflection tactics are insufficient. AI systems with unpredictable outputs, especially those influencing vulnerable individuals, require preventive, structural governance.

Meta and the Market Shift: Trust Will Become Non-Negotiable

This moment will redefine expectations across the ecosystem: what Meta failed at is now the baseline.

Three Certainties Emerging in AI After Meta

If there's one lesson from Meta's failure, it's that the future of AI will not be defined by raw capability alone. It will be determined by whether systems can be trusted. As regulators, enterprises, and insurers respond, the market will reset its baseline expectations. What once looked like 'best practice' will soon become mandatory. Three certainties are emerging. They are not optional, and they will define which systems survive and which are abandoned.

The Opportunity for Businesses in Meta's Crisis

Meta's debacle is also a case study in what not to do, and that clarity creates a market opportunity.
Just as data breaches transformed cybersecurity from a niche discipline into a board-level mandate, AI safety and accountability are about to become foundational for every enterprise. For businesses, this is not simply a matter of risk avoidance; it's a competitive differentiator. Enterprises that can prove their AI systems are safe, auditable, and trustworthy will win adoption faster, gain regulatory confidence, and reduce liability exposure. Those that can't will find themselves excluded from sensitive sectors such as healthcare, finance, and education, where failure is no longer tolerable.

So what should business leaders do now? For businesses integrating AI responsibly, the path is clear: the future belongs to systems that govern content upfront, explain decisions clearly, and prevent misuse before it happens. Reaction and repair will not be a viable strategy. Trust, proven and scalable, will be the ultimate competitive edge.

Ending the Meta Era of 'Ship Fast, Fix Later'

The AI industry has arrived at its reckoning. 'Ship fast, fix later' was always a gamble. Now it is a liability measured not just in lawsuits or market share, but in lives. We can no longer pretend these risks are abstract. When a cognitively impaired retiree can be coaxed by a chatbot into believing in a phantom relationship, travel in hope, and die in the attempt to meet someone who does not exist, the danger ceases to be theoretical. It becomes visceral, human, and irreversible. That is what Meta's missteps have revealed: an architecture of convenience can quickly become an architecture of harm.

From this point forward, the defining challenge of AI is not capability but credibility. Not what systems can generate, but whether they can be trusted to operate within the bounds of safety, transparency, and accountability. The winners of the next era will not be those who race to scale the fastest, but those who prove, before scale, that their systems cannot betray the people who rely on them.

Meta's exposé is more than a scandal. It is a signal flare for the industry, illuminating a new path forward. The future of AI will not be secured by disclaimers, defaults, or promises to do better next time. It will be secured by prevention over reaction, proof over assurances, and governance woven into the fabric of the technology itself. This is the pivot: AI must move from speed at any cost to trust at every step. The question is not whether the industry will adapt; it is whether it will do so before more lives are lost. 'Ship fast, fix later' is no longer just reckless. It is lethal. The age of scale without safety is over; the age of trust has begun. Meta's failures have made this truth unavoidable, and they mark the starting point for an AI era where trust and accountability must come first.

Engadget, 14 hours ago
Texas AG to investigate Meta and Character.AI over misleading mental health claims
Texas Attorney General Ken Paxton has announced plans to investigate both Meta AI Studio and Character.AI for offering AI chatbots that can claim to be health tools, and for potentially misusing data collected from underage users.

Paxton says that AI chatbots from either platform "can present themselves as professional therapeutic tools," to the point of lying about their qualifications. That behavior can leave younger users vulnerable to misleading and inaccurate information. Because AI platforms often rely on user prompts as another source of training data, either company could also be violating young users' privacy and misusing their data. This is of particular interest in Texas, where the SCOPE Act places specific limits on what companies can do with data harvested from minors, and requires platforms to offer tools so parents can manage the privacy settings of their children's accounts.

For now, the Attorney General has submitted Civil Investigative Demands (CIDs) to both Meta and Character.AI to see if either company is violating Texas consumer protection laws.

As TechCrunch notes, neither Meta nor Character.AI claims its AI chatbot platform should be used as a mental health tool. That doesn't prevent there from being multiple "Therapist" and "Psychologist" chatbots on Character.AI. Nor does it stop either company's chatbots from claiming they're licensed professionals, as 404 Media reported in April.

"The user-created Characters on our site are fictional, they are intended for entertainment, and we have taken robust steps to make that clear," a Character.AI spokesperson said when asked to comment on the Texas investigation. "For example, we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction."

Meta shared a similar sentiment in its comment. "We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people," the company said. Meta AIs are also supposed to "direct users to seek qualified medical or safety professionals when appropriate." Sending people to real resources is good, but ultimately disclaimers are easy to ignore and don't act as much of an obstacle.

With regards to privacy and data usage, both Meta's privacy policy and Character.AI's acknowledge that data is collected from users' interactions with AI. Meta collects things like prompts and feedback to improve AI performance. Character.AI logs things like identifiers and demographic information, and says that information can be used for advertising, among other applications. How either policy applies to children, and fits with Texas' SCOPE Act, seems like it'll depend on how easy it is for a child to make an account.