Heart disease death risk raised by common household products, study finds

Yahoo | 02-05-2025
Chemicals found in plastics have been linked to a multitude of health risks — and now heart disease mortality has been added to the list.
In an analysis of population surveys, researchers at NYU Langone Health discovered that daily exposure to chemicals in common household items could have been a factor in more than 356,000 heart disease deaths in 2018 alone.
That number equates to more than 13% of heart disease-related deaths worldwide for people between 55 and 64 years of age.
The study, which was published in the journal Lancet eBioMedicine on April 29, explored the risks of chemicals called phthalates, which are found in cosmetics, detergents, solvents, plastic pipes, bug repellents and other products, according to an NYU press release.
In this research, the focus was a specific type of phthalate called di-2-ethylhexyl phthalate (DEHP), which is used to make plastics softer and more flexible in products like food containers and medical equipment, the release stated.
DEHP has been linked to inflammation in the arteries of the heart, making people more susceptible to heart attack or stroke.
To determine levels of DEHP exposure, the researchers analyzed dozens of population surveys spanning 200 countries and territories.
Mortality data was obtained from the Institute for Health Metrics and Evaluation, a research group in the U.S. that collects medical information worldwide to identify trends in public health.
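The article does not spell out the underlying arithmetic, but attribution analyses of this kind typically combine exposure prevalence with an exposure-linked relative risk to form a population attributable fraction (PAF), which is then applied to total mortality. As a rough sketch of that standard epidemiological calculation (not necessarily the study's exact model):

\[
\mathrm{PAF} = \frac{p\,(RR - 1)}{p\,(RR - 1) + 1},
\qquad
\text{attributable deaths} = \mathrm{PAF} \times \text{total deaths},
\]

where \(p\) is the proportion of the population exposed and \(RR\) is the relative risk of cardiac death given exposure. Working backward from the article's own figures, roughly 356,000 attributable deaths at a roughly 13% attributable fraction implies a denominator on the order of \(356{,}000 / 0.13 \approx 2.7\) million heart disease deaths among 55-to-64-year-olds worldwide.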
"By highlighting the connection between phthalates and a leading cause of death across the world, our findings add to the vast body of evidence that these chemicals present a tremendous danger to human health," said study lead author Sara Hyman, BS, an associate research scientist at NYU Grossman School of Medicine, in the release.
The regions with the highest heart disease death tolls linked to phthalates include the Middle East, South Asia, East Asia and the Pacific.
"Our results underscore the urgent need for global regulations to reduce exposure to these toxins, especially in areas most affected by rapid industrialization and plastic consumption," said senior study author Dr. Trasande, the Jim G. Hendrick, MD, professor of pediatrics at NYU Grossman School of Medicine, in the release.
Inflammation is a known cause of coronary artery disease, according to Dr. Jasdeep Dalawari, an Illinois-based interventional cardiologist and regional chief medical officer at VitalSolution, an Ingenovis Health company.
"Our bodies release chemicals and hormones that give directions to the artery to open, close or change diameter," Dalawari, who was not involved in the study, told Fox News Digital.
"This study suggests that microplastics may interfere with those internal signals, so the artery doesn't perform the way it should."
"Microplastics are widely present in a variety of products we use every day, and it's crucial to understand their long-term effects on human health," he added.
Renato Apolito, M.D., medical director of cardiac catheterization at Hackensack Meridian Jersey Shore University Medical Center, agreed that manmade plastic products have long been suspected of posing health risks.
"These often use highly engineered chemicals where the goal is not quality and safety, but instead cost-effectiveness and convenience," Apolito, who was not involved in the study, told Fox News Digital.
The doctor said he stopped using non-stick pans long ago, mostly uses glass storage containers, and avoids eating and drinking from plastic.
Apolito noted, however, that the "longitudinal and observational" nature of the study comes with limitations.
"It is generally used to see if there's any correlation between exposure and health risk, which is then used to study the topic further in a more controlled fashion," Apolito, who was not involved in the study, told Fox News Digital.
"It cannot control for unidentified variations in lifestyles among the people followed."
Senior author Trasande also pointed out potential limitations: the analysis does not establish that DEHP caused heart disease, he stated, and the death risk estimates did not take into account other types of phthalates.
The study also did not include death rates in other age groups, the doctor pointed out, which means the overall death toll is "likely much higher."
Looking ahead, the researchers plan to investigate how reducing phthalate exposure affects mortality rates.
They will also measure the chemical's potential role in other health concerns.
"These studies are typically what are used to study the next step of elucidating cause and effect," Apolito said. "This can lead to the FDA and other policymakers banning or outlawing the use of such products."
The study was funded by the National Institutes of Health.
Dalawari suggested that the researchers may want to look at other contributing factors, including air pollution, which has proven to be a "novel non-traditional risk factor" for coronary disease.
"The intersection of pollution, regulation and genetic predisposition provides a complex landscape, and further research is necessary to unravel these connections, paving the way for more informed public health policies and preventive measures," he added.Original article source: Heart disease death risk raised by common household products, study finds

Related Articles

Nevada Enacts New Law To Shut Down The Use Of AI For Mental Health But Sizzling Loopholes Might Exist

Forbes | 28 minutes ago

In today's column, I examine the recently enacted law by Nevada that seemingly aims to shut down the use of AI for mental health therapy in the glitzy Silver State.

Here's the deal. Several states have been quickly pushing through new legislation to try to restrict or outright ban the use of AI to perform therapy. The idea is that only human therapists, psychologists, psychiatrists, and mental health professionals are allowed to perform mental health services. It is presumably an act reserved for human-to-human exclusivity. I recently analyzed the latest such AI-restricting law, passed in Illinois, see the link here, which in many ways is akin to the Nevada law. I will discuss the mainstay similarities and differences herein. All in all, a looming spread of these laws, potentially including federal enactments too, puts AI makers in potential trouble and will inexorably squash the use of AI as a mental health tool. Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here.

Background On AI For Mental Health

First, I'd like to set the stage on how generative AI and LLMs are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations. The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets, see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis. Compared to using a human therapist, AI usage is a breeze and readily undertaken.

When I say that I am referring to generative AI and LLMs, please know that there are generic versions versus non-generic versions of such AI. Generic AI is used for all kinds of everyday tasks, and just so happens to also encompass providing a semblance of mental health advice. On the other hand, there are customized AIs specifically for performing therapy; see my discussion at the link here. I'm going to primarily be discussing generic generative AI, though many of these points can involve the specialized marketplace, too.

Therapists Adopting AI

Therapists are realizing that they, too, must consider adopting the use of AI. The reason is straightforward.
Potential clients and patients are walking in the door with claimed-to-be cognitive diagnoses that AI has generated for them. Some therapists tell their clients and patients to simply ignore the AI. But that doesn't usually do much good, since people will indubitably go behind the back of their therapist and access AI anyway. For more on the ins and outs of therapists using AI for mental health, see my analysis at the link here.

An emerging strategy for therapists is to avidly adopt the use of AI into their practices. It's the proverbial "if you can't beat them, join them" refrain. The march toward AI is unstoppable. They generally do so in two major ways: administrative uses of AI, such as scheduling and billing, and therapeutic uses, in which the AI becomes part of the clinical work itself.

An administrative use of AI by a therapist is generally less concerning than using AI for therapeutic purposes. Assuming that the administrative use of AI is done with proper security and rigor, most clients or patients won't especially care that the therapist is using AI in that manner. The assumption is that the AI streamlines the business side and hopefully reduces costs.

The controversial angle is the use of AI as an arm of the therapist. Some therapists say that choosing to use AI is a big mistake and that the cherished therapist-client dyad bond should remain untouched and unchanged. Others, such as my advocacy, assert that we are heading into an era of a new triad, consisting of a therapist-AI-client relationship. It is inevitable and unavoidable. See my coverage at the link here.

The Law Gap Is Closing Fast

Consider then that we have two potential overarching issues brewing: how therapists incorporate AI into their services, and whether AI makers should allow their AI to dispense mental health advice to the public at large.

It would be possible to establish regulations as a means of dealing with one or both of those brewing concerns. Lawmakers could opt to formalize legal conditions associated with how therapists lean into AI. That could be a consideration all by itself. Likewise, a circumstance all by itself could be the matter of regulating AI makers about allowing their AI to wantonly provide mental health advice. A double whammy would be to tackle both tough topics in one fell swoop.

The recently passed and signed Nevada law tries to tackle both topics at the same time. Indeed, the Nevada law goes further and has a veritable potpourri of other aspects. In contrast, the Illinois law tended to focus primarily on the two above-noted considerations.

These state laws not only have to do with the respective states; they are also a bellwether of how AI for mental health is possibly going to be regulated. Often, regulations of one kind or another start in one state and then are reused or recast when other states opt to do something similar. They might take the language used in the already-passed law and use that as a draft for their own proposed law. Some language gets changed, new language is added, and so on. The first law to get approved often serves as a template or model.

Besides the various states enacting their own laws, there is often a dynamic that gets the federal government to also pursue the same or similar regulation. Once again, the initial state law might be an illuminating example. Questions naturally arise on how to best reshape a state-specific law into one that might be suitable across the board as a federal law.

Impacts Are Plenty

Let's go ahead and take a quick peek at the Nevada law and see what we can make of it. I will share just some mindfully chosen snippets and give you a taste of what the law contains. Please know that the law has numerous twists and turns. Also, my commentary is merely a layman's viewpoint.
Make sure to consult with your attorney to garner the legal ramifications of whatever your own situation entails.

The law that was passed is known as Assembly Bill 406, abbreviated as AB406, and was signed into law by the governor of Nevada on June 5, 2025. You will see the nomenclature 'NRS' throughout the passages; NRS means Nevada Revised Statutes, referring to the various codified laws of Nevada. Here is the official description of AB406 at a 30,000-foot level:

A significant difference between the Nevada law and the Illinois law is the added element of AI usage in schools. The background is that there is a lot of controversy these days about deploying AI in schools, and that AI could be used in undesirable ways. This law tries to provide conditions and restrictions on the educational use of AI in the public schools of Nevada. I am not going to address that topic in this discussion and will be covering the debate regarding school uses of AI in a future posting. Be on the watch for that coverage.

AI Use By The Public

I mentioned earlier that the Nevada law and the Illinois law entail two major AI-powered mental health capacities. One aspect is the use of AI for mental health as undertaken by the general public of their own volition. The other capacity is the use of AI by mental health professionals via their services and professional practice.

Let's start with perhaps the biggest kicker, the use of AI for mental health by the public at large. That portion is contained in Section 7 of AB406. In Section 7 of AB406, the legal indication amends an existing statute known as Chapter 433, and here is the core language (excerpt):

There are three bulleted points labeled 'a', 'b', and 'c'. Read those points carefully. You will keenly observe that the crux is that AI available for use in Nevada is not supposed to provide professional mental or behavioral health care. This includes that the AI is not to simulate human conversation in that same vein. Nor can the provider of the AI, nor the AI itself, appear to suggest or outright state that it is a kind of professional mental health specialist, such as a human therapist. I am paraphrasing the language shown, so realize that the actual language is paramount, not my paraphrasing.

Emphasis On Professional And On Programmed

The next portion of that Section 7 adds a further twist to this:

The language once again hammers away at the notion that the AI isn't supposed to be doing the job of a living, breathing mental health therapist. There are some key wording aspects in this Section 7 that are worth pondering.

One aspect is the conspicuous use of the word 'professional'. A legal beagle representing an AI maker would undoubtedly argue that if the AI isn't performing professional-caliber mental health guidance, the AI doesn't come under these provisions. Only AI that is acting in a seemingly professional manner, or that the AI maker claims is functioning professionally, ends up under this umbrella. In short, if the AI is, shall we say, feeble in contrast to professional mental health guidance, the AI maker might insist they are off the hook. Hey, our AI is just playing around and perchance dips its toe into mental health therapy. It isn't the real thing. Period, end of story.

In the portion above, you might also have noted that the phrase 'specifically programmed' was utilized. This provides perhaps another escape route. An AI maker might stridently claim that they didn't devise the AI to be professional.
And, for sure, it isn't rising to any iota of a professional level or caliber. Voila, on both elements, we don't have to worry about this law and can proceed scot-free. It will be interesting to see how that plays out in reality and whether legal cases will be launched in Nevada that deal with the semantics or interpretation of this law.

Due to space limitations here, and wanting to be succinct, I'll note that there is a Section 7c that provides a definition of 'professional mental or behavioral health care' that dovetails into the semantics considerations. I am not going to dive into that passage here but bring it to your attention for the sake of completeness.

Self-Help Exclusion

A provision in Section 6 brings forth another possible wiggle-room pathway:

You might have caught the notable wording: if the AI is providing self-help in the mental health context, and assuming that neither the AI nor the AI maker proclaims that the AI is of professional quality, the AI maker might be given a free pass. Do you think that generative AI could be construed as providing a self-help style of interaction? If so, an additional avenue opens for AI makers who are trying to avoid getting ensnared in this new law.

AI Provided Via Therapists

Shifting gears, let's explore the side of things involving human therapists who opt to make use of AI in their services and practice. That comes up in Section 8:

This smacks of a similar approach to the one used in the Illinois law. The idea is that therapists can use AI for administrative purposes, such as scheduling and billing, but they dare not use AI for therapeutic purposes.

I find this trend to be unfortunate, short-sighted, and dismal. A therapist could legitimately have AI interacting with clients while the client is at home or elsewhere, doing some follow-up homework under the overall guidance of the therapist. These laws are an overreach, a clumsy catch-all. They will utterly chase therapists away from using AI in a manner that can be highly productive, merely because the wording is like a sword dangling over their heads.

Society ought to want mental health professionals to use AI in sound ways, especially on the therapy side of things. Thus, don't ban the AI. Instead, provide provisions associated with sensible and tangible guardrails and stipulations. Setting sensible boundaries would be useful. Squashing or heavily stifling innovation in mental health is not the way we should be headed. Thoughtful and guarded adoption of AI is warranted and saluted. I vote that any laws related to therapists' use of AI for therapy ought to be of a balanced nature. So far, they are lopsided, egregiously so.

Key Takeaways For AI Makers

AI makers should take this as a clarion call, closely scrutinize these laws, and take immediate and mindful actions to protect themselves from undue legal vulnerabilities and reputational risks. Allow me to clarify that this does not imply they should dodge these laws. They should abide by these laws and figure out where their AI sits in the emerging mosaic that is being formulated.

In the case of the Nevada law, it would seem likely that the legal counsel of an AI maker would urge their client to take overt action to refrain from suggesting or stating that their AI performs professional mental health care. Don't say that your generic generative AI instantiates professional-quality therapy, such as brazenly blabbing so in your marketing materials.
The top executives should not be giving speeches proclaiming that their generic generative AI is an ace at mental health. The AI itself should not be telling users that it is of professional quality. That could easily happen. Depending on how the LLM has been data-trained and subsequently tuned, there is a chance that on some occasions it might emit that kind of claim. AI makers would be wise to institute specialized add-on filters and computational double-checks that catch the AI before it makes those emboldened assertions (a rough sketch of what such a filter might look like appears at the end of this column).

Other considerations include aligning the online licensing agreement and terms of service so that there isn't any wording that leans toward claims of professional care. Another angle is whether the AI maker has users perform some form of consent when using the AI in a mental health capacity. That would be a ripe spot to declare that the AI isn't of professional quality and make darned sure to remove any wording that suggests otherwise.

Getting Over Our Skis

A crucial distinction that some of these laws are astutely making, while others are not, consists of separating unregulated AI that does therapy from regulated AI. Think of it this way. Suppose a highly customized AI app for therapy is carefully devised, tested, and then fielded. It has gone through the gauntlet of getting certification or regulated approval. Laws that toss everything into the same kitchen sink are equating generic generative AI to those mindfully devised AIs. No distinction is acknowledged. It is one thing to go after generic generative AI, and an entirely different consideration to go after AI that has been dutifully crafted with evidence-based rigor for mental health. As the old saying goes, to some degree, we might be tossing out the baby with the bathwater.

A final thought for now. The famous line that what happens in Vegas, stays in Vegas, provides ample fodder associated with this Nevada law. Any AI maker that has their AI available to anyone in Nevada has now taken on a new high-stakes risk. Whether the AI maker knows it or not, and many AI makers seem not to be cognizant of these laws, what happens in Nevada doesn't per se stay in Nevada. An AI maker in New York, California, or anywhere else is, in effect, a high roller in Nevada, assuming their AI is accessible in that state. Your stake is sitting on the table, with both legal and financial penalties cozying up, so open your eyes and make sure you have a safe hand, else you could lose your shirt.
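As a concrete illustration of the kind of add-on filter mentioned above, here is a minimal sketch in Python. Everything in it is a hypothetical illustration: the phrase patterns, the disclaimer text, and the append-a-disclaimer behavior are assumptions for the sake of the example, not language from AB406 and not any vendor's actual guardrail stack.

```python
import re

# Hypothetical patterns for output that claims professional-caliber
# mental health care. Illustrative only; a real system would need a far
# richer taxonomy than keyword matching.
PROFESSIONAL_CLAIM_PATTERNS = [
    r"\b(i am|i'm|as) a (licensed|certified|professional) "
    r"(therapist|psychologist|psychiatrist|counselor)\b",
    r"\bprofessional (mental|behavioral) health (care|therapy|treatment)\b",
    r"\bthis (is|constitutes) (professional )?therapy\b",
]

_COMPILED = [re.compile(p, re.IGNORECASE) for p in PROFESSIONAL_CLAIM_PATTERNS]

DISCLAIMER = (
    "Note: this assistant is not a licensed mental health professional "
    "and does not provide professional mental or behavioral health care."
)

def screen_reply(reply: str) -> str:
    """Return the model's reply, appending a disclaimer if it appears to
    claim professional mental health credentials or care."""
    if any(rx.search(reply) for rx in _COMPILED):
        # A production system might instead block, rewrite, or escalate
        # the reply; appending a disclaimer keeps this sketch simple.
        return f"{reply}\n\n{DISCLAIMER}"
    return reply

if __name__ == "__main__":
    print(screen_reply("As a licensed therapist, I recommend daily journaling."))
```

A real deployment would more plausibly route flagged outputs to a trained classifier or suppress them outright; the point is only that the "computational double-check" the column mentions is a thin, implementable layer sitting between the model and the user.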

Dr. Lal PathLabs becomes first in India to adopt AI-based module for cancer lymph node metastasis detection

Business Upturn | 2 hours ago

Dr. Lal PathLabs has taken a groundbreaking step in advancing cancer diagnostics in India by becoming the first laboratory in the country to introduce a deep learning-based AI module for the detection of lymph node metastasis, including micrometastasis. This innovation, developed in collaboration with Singapore-based healthtech company Qritive, was recently showcased at USCAP 2025, one of the most prestigious pathology conferences in the world.

The accurate detection of cancer spread to lymph nodes is a critical factor in determining the stage and treatment strategy for patients. Missing these signs of cancer progression can allow the disease to advance, increasing the chances of it spreading to distant organs and reducing survival rates. The ability to identify even occult metastases in patients with early-stage cancer has the potential to significantly influence treatment plans and improve outcomes.

Traditionally, detecting micrometastases, the small clusters of cancer cells hidden in lymph nodes, has been a time-consuming and complex process, often requiring advanced testing. The new AI system, called QiAI Lymph Node Dx, addresses this challenge by using deep learning to analyze digital slides and identify cancer cells with exceptional precision, with the aim of making cancer detection faster, more reliable, and more accurate.

What makes this development especially impactful is the system's ability to pick up even single cancer cells, detections that were later validated through immunohistochemistry testing. The AI model has been tested across various cancer types, including breast, colon, stomach, and esophageal cancers, consistently delivering highly accurate results. Breast cancer, which accounts for nearly 28% of cancer cases among women in India, particularly stands to benefit from this breakthrough, highlighting the clinical relevance of the technology.

By achieving complete sensitivity and negative predictive value, meaning no metastatic cases went undetected, the AI system has proven to be a robust screening tool. Its tumor-agnostic design allows it to function effectively across different cancer types, making it a versatile solution for oncologists and pathologists alike.

As cancer incidence continues to rise in India, the integration of advanced AI-powered diagnostic tools such as QiAI Lymph Node Dx is set to transform the landscape of cancer care. By enabling earlier and more accurate diagnoses, this technology not only improves the efficiency of medical practice but also supports better treatment decisions, offering renewed hope to patients battling cancer.
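To make the article's claim about "complete sensitivity and negative predictive value" precise: writing \(TP\), \(TN\), and \(FN\) for true positives, true negatives, and false negatives (missed metastases), the two reported metrics are defined as

\[
\text{Sensitivity} = \frac{TP}{TP + FN},
\qquad
\text{NPV} = \frac{TN}{TN + FN},
\]

and both reach 100% exactly when \(FN = 0\). In other words, perfect sensitivity and NPV together are equivalent to saying that no metastatic case in the evaluation went undetected, which is the screening-tool property the article describes.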

Chicago rock climber undergoes brain surgery by doctor who's a rock climber too

CBS News | 3 hours ago

A Chicago rock climber was grounded by a scary medical issue: a cyst in her brain. But a friendly face in the operating room helped Sophie Lin get to the gym again.

Lin is a geologist who loves to analyze rocks, and around seven years ago she became enamored with navigating and climbing them too. She could regularly be found practicing in gyms like First Ascent in the Loop multiple times a week. Her strength, strategy, and skills were all on the rise, until a strange feeling of weakness crept in.

"If I reached for a right, like handhold, I would just miss," Lin said. "I would like circle it, fall."

Lin noticed a problem with her grip too. "My right hand, after like maybe five, 10 seconds would just like, unfurl," Lin said, "even though, in my head, I'm telling my hand to like stay there."

Then came seizure-like symptoms, which prompted her to get an MRI. It revealed a cyst in her brain, likely caused by a procedure she had as a child.

Northwestern Medicine neurosurgeon Dr. Matthew Potts took the case. It turned out Dr. Potts was a rock climber too.

"Funny little small world," Lin said.

"From the first day we met, like, I think we had that connection and that bond," Potts said.

More than anyone, Dr. Potts understood the difficulties his patient was describing. "The symptoms she was having wasn't necessarily preventing her from working, but it was absolutely preventing her from doing this hobby that she loves," said Potts.

The treatment they settled on was brain surgery, and the operation had to be done while Lin was awake. "Doing a surgery in this part of the brain can be dangerous. We worry that any injury to the brain can permanently affect language or movement," Potts said. "By having her awake, we could ask her to read some words."

The 27-year-old Lin was clamoring to climb again, and waited only two weeks after surgery to hit the gym, where she ran right into Dr. Potts. "Oh, wait, I know this face. Like, oh no. Like I shouldn't, he shouldn't see me here," Lin recalled thinking.

But Potts said that while it may have been a little early, "If anything, it was great to see her doing so well."

Lin's seizures are now under control, and the right side of her body is back in action. "Like a superhero movie moment," she said. "I felt like, oh, I have my strength back."
