
Texas health department reports no additional measles cases in the state since June 3
The state, which is the epicenter of the current measles outbreak, has a total of 742 confirmed cases as of Friday.
The number of new cases continues to decrease, from an average of about 12 per day around the peak to fewer than one case per day recently, Chris Van Deusen, director of media relations at the Texas health department, told Reuters in an email.
"The fact that (we) haven't had any new hospitalizations reported in more than two weeks gives us confidence there are not major numbers of unreported cases still occurring out there," said Van Deusen.
The United States is battling one of the worst measles outbreaks it has seen, with over 1,000 reported cases of the highly contagious airborne infection and three confirmed deaths.
Despite the slowing spread in Texas, the country continues to record weekly increases in measles cases elsewhere.
The U.S. Centers for Disease Control and Prevention said a total of 1,168 confirmed measles cases were reported by 34 jurisdictions as of Thursday, an increase of 80 cases since its previous update last week.
The only other time infections surpassed the 1,000 mark was in 2019, when the country reported 1,274 cases.
There have been 17 outbreaks, defined as three or more related cases, reported in 2025, the CDC said.
Experts have urged public health officials to urgently and strongly endorse the highly effective measles vaccines.
The measles vaccine is 97% effective after two doses, according to the CDC.