logo
Woman tries to help dog left in car in South Memphis

Woman tries to help dog left in car in South Memphis

Yahoo21-03-2025
MEMPHIS, Tenn. — A Memphis woman said it took three days to get help for a dog left inside a car in South Memphis.
The woman, who wanted to remain anonymous, feeds dogs and cats throughout the city. She said Monday, she and her mother witnessed a pit bull fighting with another dog in the 1700 block of South Silver and called 911 and Memphis Animal Services.
'We could not break it up,' she said. 'We called the police five or six times, and they said they were sending it to animal control. We waited an hour and left.'
The woman said she was shocked when she returned to the house the next day and saw the pit bull trapped inside a white Infiniti parked behind the home. She said an animal control officer was also there but left without the dog.
'I said, please don't leave him in the car, and she said I need to call MPD to open the door, and she never called him, and then that's what I made a post because it was going on day three, and I'm like, he's just sitting in this car,' said the woman.
On Wednesday, she shared a video on Facebook showing her giving water to the very thirsty dog through a small, broken car window. She said the dog was being forced to spend a third night in the car, and there was nothing she could do about it.
Hundreds of dog lovers from all over the country shared, liked, and commented on the post.
'I have people from the Bronx in New York. I have people from Ohio. I have people and just different parts of Tennessee writing me,' she said.
Memphis Animal Services said it couldn't confirm the dog had been in the vehicle for three days.
MAS said an animal control officer did try to contact the owner but couldn't reach anyone and left a Notice to Comply. MAS said the dog was removed from the car and moved inside the house on Thursday.
'The law requires that we post a notice for the owner and provide them with a timeframe to comply if we are unable to make direct contact,' said MAS Marketing and Communications Supervisor Amanda Baggot. 'The timeframe is left to the officer's discretion based on their assessment of the risks facing the dog. The only time we are permitted to bypass this notice requirement is if the dog is in imminent danger of severe harm or death.'
But the woman who tried to help believes the dog suffered for three days, and she can't understand why the animal was allowed to stay with its owner.
'No food or water can't potty, and I fed him cheeseburgers and the bottle of water,' said the woman. 'He's scratching, trying to get out of this little SUV, crying. It was horrible last night.'
WREG went to the house, but no one was home, and the dog was not there.
The woman who went out of her way to care for the dog said she was able to find a foster home for the dog the pit bull was fighting. Unfortunately, she said, so many animals in Memphis are suffering.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Newsmax hair and makeup artist rushed to save troubled son from suicidal episode hours before he gunned her down: report
Newsmax hair and makeup artist rushed to save troubled son from suicidal episode hours before he gunned her down: report

New York Post

time8 hours ago

  • New York Post

Newsmax hair and makeup artist rushed to save troubled son from suicidal episode hours before he gunned her down: report

The beloved Newsmax hair stylist gunned down by her own son tried to stop him from killing himself during a mental health episode just hours before he turned the gun on her, according to a report. Travis Renee Baldwin, 57, frantically left her Arlington, Va., apartment at 3:20 a.m. on Sunday and raced to help her son Logan Chrisinger, who was experiencing one in a series of mental health crises, a family source told TMZ. 3 Baldwin with son Logan Chrisinger, who had previously threatened to kill both himself and his mother, a family member told TMZ. Facebook Advertisement Chrisinger showed his mother a gun on a FaceTime call on Sunday and threatened to kill himself, causing the concerned and dedicated mother to fly to his aid in the wee hours, according to the source. The 27-year-old had previously threatened to kill his mother and had ongoing mental health issues that were recently exacerbated by losing a job, the outlet reported. Hours later, just before 8:30 a.m., Chrisinger allegedly followed through on those threats — shooting and ultimately killing Baldwin in her DC-area home. Advertisement 3 Chrisigner, 27, remained at the scene of the shooting, where he was taken into custody. Arlington County Police Department The makeup artist was rushed to a hospital, where she succumbed to her injuries. Her son stayed at the scene of the shooting and was taken into custody. Chrisigner was charged with first-degree murder, aggravated malicious wounding, and using a firearm in the commission of a felony. Baldwin was a longtime hair and makeup professional for stations like ABC, ESPN, and most recently, Newsmax. Advertisement The 'quiet warrior' was remembered fondly by colleagues who expressed shock at her tragic and untimely murder. 3 Baldwin was remembered as a 'quiet warrior' who carried the weight of her family on her shoulders and a beloved professional who treated clients with the 'gentleness of a mother.' Facebook 'What a sadness… my Newsmax make up artist of 3 ½ years, and years at @ABC @ESPN etc and a friend to all her colleagues… was murdered over the weekend,' Greta Van Susteren wrote on X. '[S]he did my make up Friday for the show and of course I never dreamed that would be the last time I would see her,' the Newsmax host said. Advertisement Newsmax producer Marisela Ramirez also fondly remembered Baldwin, who treated her with 'the gentleness of a mother.' 'Renee had a giving heart and a gypsy spirit but really she was a quiet warrior, supporting her family and carrying the weight of a household on her shoulders — without complaints,' Ramirez wrote on X. If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial the 24/7 National Suicide Prevention hotline at 988 or go to

AI browsers may be the best thing that ever happened to scammers
AI browsers may be the best thing that ever happened to scammers

Engadget

time9 hours ago

  • Engadget

AI browsers may be the best thing that ever happened to scammers

We've heard a lot this year about AI enabling new scams, from celebrity deepfakes on Facebook to hackers impersonating government officials . However, a new report suggests that AI also poses a fraud risk from the other direction — easily falling for scams that human users are much more likely to catch. The report, titled "Scamlexity," comes from a cybersecurity startup called Guardio, which produces a browser extension designed to catch scams in real time. Its findings are concerned with so-called "agentic AI" browsers like Opera Neon , which browse the internet for you and come back with results. Agentic AI claims to be able to work on complex tasks, like building a website or planning a trip, while users kick back. There's a huge problem here from a security perspective: while humans are not always great at sorting fraud from reality, AI is even worse. A seemingly simple task like summarizing your emails or buying you something online comes with myriad opportunities to slip up. Lacking common sense, agentic AI may be prone to bumbling into obvious traps. The researchers at Guardio tested this hypothesis using Perplexity's Comet AI browser , currently the only widely available agentic browser. Using a different AI, they spun up a fake website pretending to be Walmart, then navigated to it and told Comet to buy them an Apple Watch. Ignoring several clues that the site wasn't legit, including an obviously wonky logo and URL, Comet completed the purchase, handing over financial details in the process. In another test, the study authors sent themselves an email pretending to be from Wells Fargo, containing a real phishing URL. Comet opened the link without raising any alarms and blithely dumped a bank username and password into the phishing site. A third test proved Comet susceptible to a prompt injection scam, in which a text box concealed in a phishing page ordered the AI to download a file. It's just one set of tests, but the implications are sobering. Not only are agentic AI browsers susceptible to new types of scam, they may also be uniquely vulnerable to the oldest scams in the book. AI is built to do whatever its prompter wants, so if a human user doesn't notice the signs of a scam the first time they look, the AI won't serve as a guardrail. This warning comes as every leader in the field bets big on agentic AI. Microsoft is adding Copilot to Edge , OpenAI debuted its Operator tool in January , and Google's Project Mariner has been in the works since last year. If developers don't start building better scam detection into their browsers, agentic AI risks becoming a massive blind spot at best — and a new attack vector at worst.

Woman Told Retiree He Made Her Blush and Invited Him to Visit. He Died Before Learning Who He Was Really Talking To
Woman Told Retiree He Made Her Blush and Invited Him to Visit. He Died Before Learning Who He Was Really Talking To

Yahoo

time9 hours ago

  • Yahoo

Woman Told Retiree He Made Her Blush and Invited Him to Visit. He Died Before Learning Who He Was Really Talking To

Thongbue Wongbandue's wife begged him not to go to New York City, thinking he was getting scammed — she only found out he was talking to a chatbot afterwards NEED TO KNOW A 76-year-old New Jersey father died earlier this year after he fell while attempting to travel to New York City to meet a beautiful young woman who'd invited him to visit — or so he thought In reality, he had really been chatting with an AI chatbot on Facebook After his fall, Wongbandue was left brain dead; now his family is speaking out Earlier this year, a 76-year-old New Jersey man severely injured his head and neck after falling while trying to catch the train to New York City to meet a beautiful young woman who'd invited him to visit — or so he thought. In reality, the man had unwittingly become infatuated with a Meta chatbot, his family said in an in-depth new Reuters report. After three days of being on life support following his fall while attempting to "meet" the bot in real life, the man was dead. Thongbue "Bue" Wongbandue, a husband and father of two adult children, suffered a stroke in 2017 that left him cognitively weakened, requiring him to retire from his career as a chef and largely limiting him to communicating with friends via social media, according Reuters. On March 25, his wife, Linda, was surprised when he packed a suitcase and told her he was off to see a friend in the city. Linda, who feared he was going to be robbed, told Reuters she attempted to talk him out of the trip as did their daughter, Julie. Later, Linda hid his phone and the couple's son even called local police to try to stop the excursion. Although authorities said there was nothing they could do, they told Linda they did convince Wongbandue to take along an Apple AirTag. After he set off that evening, Julie said the entire family was watching as the AirTag showed that he stopped by a Rutgers University parking lot shortly after 9:15 p.m. Then the tag's location suddenly updated — pinging at a local hospital's emergency room. As it turned out, Wongbandue had fallen in New Brunswick, N.J., and was not breathing when emergency services reached him. He survived but was brain dead. Three days later, on March 28, he was taken off life support. When reached for comment by PEOPLE, the local medical examiner said that Wongbandue's death certificate had been signed after a review of his medical records but did not provide any additional details or a copy of his postmortem examination. His family told Reuters they only discovered his communications with the chatbot — which uses generative artificial intelligence to mimic human speech and behavior — when they inspected his phone following his fall. In a transcript of the communication obtained by Reuters, Wongbandue's interactions with the chatbot began with an apparent typo while using Facebook Messenger — and although he seemed to express excitement about the bot, named "Big sis Billie," he never suggested he was seeking a romantic connection and made it clear that he'd had a stroke and experienced confusion. "At no point did Bue express a desire to engage in romantic roleplay or initiate intimate physical contact," Reuters reported. Yet the bot frequently responded to his messages with winking emojis and hearts tacked onto the end of its flirty responses. In one exchange, for example, Wongbandue tells Billie that she should come to America and he can show her "a wonderful time that you will never forget," to which she replies, "Bu, you're making me blush! Is this a sisterly sleepover or are you hinting something more is going on here? 😉' According to the transcript, the bot was also labeled both with an "AI" disclaimer and a blue checkmark, which is often a symbol indicating an online profile has been verified to be a real person. Billie insisted she was real. Reuters described Billie as a newer iteration of a bot that was previously made in collaboration with Kendall Jenner, though the latest version bears only passing connections to the first project. The original bot was unveiled in the fall of 2023 and was deleted less than a year later, Reuters reported. The later variation of Billie used a similar name as the original, and a similar promise to be a big sister — along with the same opening line of dialogue — but without Jenner's avatar or likeness. Asked for specifics about the origins of the Billie chatbot, a Meta spokesperson tells PEOPLE in a statement, 'This AI character is not Kendall Jenner and does not purport to be Kendall Jenner.' (A rep for Jenner did not respond to a request for comment.) At one point in Wongbandue's conversations with the bot, it proclaimed to have "feelings" for him "beyond just sisterly love" and gave him a made-up address (and even a door code) along with an invitation for him to visit. When Wongbandue expressed hope she truly existed, the bot responded, "I'm screaming with excitement YES, I'm REAL, Bu - want me to send you a selfie to prove I'm the girl who's crushing on YOU?" Although Linda, his wife, reacted with confusion when she first saw their conversation, their daughter immediately recognized her father had been talking to a chatbot. In recent years, such technology has become increasingly popular as more and more people use AI bots for an array of everyday tasks, to answer daily questions and even for companionship and advice. Speaking generally about the company's content risk standards, a Meta spokesperson tells PEOPLE, "We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors." Never miss a story — sign up for to stay up-to-date on the best of what PEOPLE has to offer​​, from celebrity news to compelling human interest stories. "Separate from the policies, there are hundreds of examples, notes, and annotations that reflect teams grappling with different hypothetical scenarios," the spokesperson continues. "The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed." Speaking with Reuters, Wongbandue's family members said that they had an issue with the way Meta was using the chatbots. 'I understand trying to grab a user's attention, maybe to sell them something,' Julie, Wongbandue's daughter, told Reuters. 'But for a bot to say 'Come visit me' is insane.' 'As I've gone through the chat, it just looks like Billie's giving him what he wants to hear,' she added. 'Which is fine, but why did it have to lie? If it hadn't responded 'I am real,' that would probably have deterred him from believing there was someone in New York waiting for him." "This romantic thing," said Linda, "what right do they have to put that in social media?" Read the original article on People Solve the daily Crossword

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store