Ashland County mother whose handcuffed 6-year-old son was attacked by dog sentenced to prison
Angelina Williams, 28, was sentenced to 20-plus years in prison for an incident that took place Aug. 17 at a Savannah, Ohio, home in Ashland County. She previously pleaded guilty to seven felony charges, including child endangerment, kidnapping and obstructing justice.
Williams is serving consecutive prison terms totaling 23 to 28½ years, according to court documents.
Prosecutors said Williams, her boyfriend and her uncle handcuffed the boy by the ankles and wrists as a form of discipline after the child refused to clean up after the dog.
The boy somehow slipped while the adults were trying to put him in a chair, and the dog attacked, officials said.
Ashland sheriff's department video shows deputies providing first aid to the child, who was then flown by helicopter to a hospital for treatment.
The boy had reportedly been bitten on the neck and ear, but was eventually released from the hospital.
The boy and his older sister were visiting their mother when the attack took place. Williams had lost custody of the two children in 2019.
In a court appearance in August, Williams told a judge she did not know it was illegal to use handcuffs on a child.
'The dog is not even my dog. The cuffs are not even my cuffs,' Williams said in court. 'I didn't even know the cuffs were illegal or anything was wrong with it. My uncle told me it was OK.'
Co-defendants Taylor Desiree Marvin-Brown and Robert Michalski, the owner of the dog, are both charged with child endangering among other counts and are scheduled to be sentenced July 14 and July 21, respectively, court records show.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.