To Reduce Dementia Risk, Treat High Blood Pressure

Treating high blood pressure for as little as four years can cut the risk of dementia by 15 percent, according to clinical trial results published on Monday in Nature Medicine. In the investigation, intensive blood pressure reduction also lowered by 16 percent the risk of mild cognitive impairment, an early stage of memory loss in people who can still live independently.
Although research has long linked hypertension and dementia, the new study provides the strongest evidence yet that taking medications for the former actually reduces someone's risk of developing the latter.
'Our study shows that dementia is preventable,' says Jiang He, a physician who studies epidemiology, internal medicine and neurology at the University of Texas Southwestern Medical Center.
The researchers studied nearly 34,000 people with high blood pressure in rural Chinese villages. The team randomly assigned half of the villages to an intensive hypertension reduction program, and people in the other villages received their normal care. After four years, 4.6 percent of those who underwent intensive blood pressure control—the goal of which was to lower their blood pressure to less than 130/80 millimeters of mercury (mm Hg)—were diagnosed with dementia compared with 5.4 percent in the group that received standard care. Because doctors treated people for only four years, it's not known if controlling blood pressure for a longer period of time would further reduce the dementia risk.
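A quick back-of-the-envelope check shows how the headline 15 percent figure follows from the raw rates reported above. (This is a rough sketch of a relative risk reduction, not the trial's own statistical analysis, which would adjust for clustering and follow-up time.)

```python
# Rates from the trial described above, as percentages of each group
# diagnosed with dementia after four years.
intensive_care_rate = 4.6  # intensive blood pressure control group
standard_care_rate = 5.4   # usual-care group

# Relative risk reduction: how much lower the treated group's rate is,
# expressed as a fraction of the untreated group's rate.
rrr = (standard_care_rate - intensive_care_rate) / standard_care_rate
print(f"Relative risk reduction: {rrr:.1%}")  # prints about 14.8%, i.e. roughly 15%
```

The unadjusted arithmetic lands close to the 15 percent reduction the trial reports.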
Those results provide 'strong evidence' to support using antihypertensive medications to prevent dementia in people with high blood pressure, He said.
Beth Abramson, who studies cardiac disease prevention and women's health at the University of Toronto, calls the results encouraging. Many people ignore their hypertension or fail to take medications as directed, even though the condition can cause heart attacks and strokes, says Abramson, who is co-chair of the American College of Cardiology Hypertension Workgroup and was not involved in the new study.
In the U.S., 48 percent of adults have hypertension, defined as a blood pressure that is consistently at or above 130/80 mm Hg.
The hope of preventing dementia may motivate some people to take their blood pressure more seriously, says Mitchell S. V. Elkind, chief clinical science officer at the American Heart Association, who also wasn't involved in the new study. Surveys show dementia is one of the diseases that Americans fear most.
According to the World Health Organization, 57 million people worldwide had dementia in 2021. This number is expected to nearly double every 20 years, reaching 82 million in 2030 and 152 million in 2050, according to the nonprofit organization Alzheimer's Disease International.
The new findings resemble those from separate clinical trial research published in January in Neurology. That study treated people with hypertension for 3.3 years and followed them for a median of seven years. The investigation focused on lowering systolic blood pressure, the first number in a blood pressure reading, in older adults. Of the 4,200 people in the study who underwent cognitive assessment, the group that decreased systolic blood pressure to less than 120 mm Hg had a reduced risk of mild cognitive impairment or probable dementia; these participants' risk was lowered by 11 percent compared with people who decreased their systolic blood pressure to below 140 mm Hg. The study didn't find any difference in the risk of probable dementia alone—possibly because it was smaller than the new study in China, because dementia can take many years to develop or because the trial was stopped early owing to its overwhelmingly positive results in preventing cardiovascular events.
Scientists don't know exactly how hypertension might contribute to dementia, says David Reboussin, a professor of biostatistics and data science at the Wake Forest University School of Medicine and first author of the January Neurology study. But, he adds, 'if there is anything going on with small vessels and their ability to get oxygen to the brain tissue, then brain tissue will suffer. It will atrophy and die.'

Related Articles

China races to build world's largest solar farm to meet emissions targets

Los Angeles Times


TALATAN, China — Chinese government officials last month showed off what they say will be the world's largest solar farm when completed high on a Tibetan plateau. It will cover 610 square kilometers (235 square miles), which is the size of Chicago.

China has been installing solar panels far faster than anywhere else in the world, and the investment is starting to pay off. A study released Thursday found that the country's carbon emissions edged down 1% in the first six months of 2025 compared to a year earlier, extending a trend that began in March 2024.

The good news is China's carbon emissions may have peaked well ahead of a government target of doing so before 2030. But China, the world's biggest emitter of greenhouse gases, will need to bring them down much more sharply to play its part in slowing global climate change.

For China to reach its declared goal of carbon neutrality by 2060, emissions would need to fall 3% on average over the next 35 years, said Lauri Myllyvirta, the Finland-based author of the study and lead analyst at the Centre for Research on Energy and Clean Air. 'China needs to get to that 3% territory as soon as possible,' he said.

China's emissions have fallen before during economic slowdowns. What's different this time is electricity demand is growing — up 3.7% in the first half of this year — but the increase in power from solar, wind and nuclear has easily outpaced that, according to Myllyvirta, who analyzes the most recent data in a study published on the U.K.-based Carbon Brief website. 'We're talking really for the first time about a structural declining trend in China's emissions,' he said.

China installed 212 gigawatts of solar capacity in the first six months of the year, more than America's entire capacity of 178 gigawatts as of the end of 2024, the study said. Electricity from solar has overtaken hydropower in China and is poised to surpass wind this year to become the country's largest source of clean energy.
Some 51 gigawatts of wind power was added from January to June. Li Shuo, the director of the China Climate Hub at the Asia Society Policy Institute in Washington, described the plateauing of China's carbon emissions as a turning point in the effort to combat climate change.

'This is a moment of global significance, offering a rare glimmer of hope in an otherwise bleak climate landscape,' he wrote in an email response. It also shows that a country can cut emissions while still growing economically, he said.

But Li cautioned that China's heavy reliance on coal remains a serious threat to progress on climate and said the economy needs to shift to less resource-intensive sectors. 'There's still a long road ahead,' he said.

A seemingly endless expanse of solar panels stretches toward the horizon on the Tibetan plateau. White two-story buildings rise above them at regular intervals. In an area that is largely desert, the massive solar project has wrought a surprising change on the landscape. The panels act as windbreaks to reduce dust and sand and slow soil evaporation, giving vegetation a foothold. Thousands of sheep, dubbed 'photovoltaic sheep,' graze happily on the scrubby plants.

Wang Anwei, the energy administration chief of Hainan Prefecture, called it a 'win-win' situation on multiple levels. 'In terms of production, enterprises generate electricity on the top level, and in terms of ecology, grass grows at the bottom under the solar panels, and villagers can herd sheep in between,' he said.

Solar panels have been installed on about two-thirds of the land, with power already flowing from completed phases. When fully complete, the project will have more than 7 million panels and be capable of generating enough power for 5 million households. Like many of China's solar and wind farms, it was built in the relatively sparsely populated west. A major challenge is getting electricity to the population centers and factories in China's east.
'The distribution of green energy resources is perfectly misaligned with the current industrial distribution of our country,' Zhang Jinming, the vice governor of Qinghai province, told journalists on a government-organized tour.

Part of the solution is building transmission lines traversing the country. One connects Qinghai to Henan province. Two more are planned, including one to Guangdong province in the southeast, almost at the opposite corner of the country.

Making full use of the power is hindered by the relatively inflexible way that China's electricity grid is managed, tailored to the steady output of coal plants rather than more variable and less predictable wind and solar, Myllyvirta said. 'This is an issue that the policymakers have recognized and are trying to manage, but it does require big changes to the way coal-fired power plants operate and big changes to the way the transmission network operates,' he said. 'So it's no small task.'

Moritsugu and Guan write for the Associated Press. Moritsugu reported from Beijing. Associated Press video producer Wayne Zhang contributed.

Millions More Americans Are Being Exposed to Contaminated Water

Newsweek


New data released by the Environmental Protection Agency (EPA) has shown that millions more Americans are drinking water that is contaminated with carcinogenic chemicals, the Environmental Working Group (EWG) reported.

The findings come as part of the EPA's Fifth Unregulated Contaminant Monitoring Rule, known as UCMR 5, which requires U.S. water utilities to test drinking water for 29 individual PFAS compounds. PFAS chemicals, a group of thousands of different substances, are classified as a Group 1 carcinogen by the International Agency for Research on Cancer and have previously been found in U.S. drinking water systems. Newsweek has contacted the EPA via email for comment.

Why It Matters

PFAS chemicals, also known as "forever chemicals," are used across various industries, featuring in consumer products and even smartwatch wristbands. While these chemicals are favored for their long-lasting nature, research highlighting their potential health risks has been accumulating in recent years. Other than PFAS chemicals, arsenic contamination in U.S. drinking water systems has also become a growing concern across the country.

What To Know

Jared Hayes, a senior policy analyst at EWG, told Newsweek the EPA's latest release of data confirmed the presence of at least one type of PFAS compound in an additional 200 locations—which in total serve 7 million people. He added that, based on previously collected UCMR and state data, it is now clear that more than 172 million Americans are being exposed to PFAS-contaminated drinking water.
The EPA previously wrote in a fact sheet that long-term PFAS exposure "can cause cancer and other serious illnesses that decrease quality of life or result in death." Studies have also found that even low levels of exposure to the chemicals can have wide-reaching impacts on multiple systems in the body.

While these numbers "are not entirely surprising given what we know about the persistence and mobility of PFAS in the environment," Vasilis Vasiliou, chair and professor of environmental health sciences at Yale School of Public Health, told Newsweek, it "underscores the magnitude of the problem." He added the findings showed "PFAS contamination is both widespread and underreported."

However, Marc Edwards, a professor in civil and environmental engineering at Virginia Tech, told Newsweek, on the flip side, "We already knew that PFAS chemicals are very widespread—the new data just confirms that expectation." He added that the EPA is taking steps to reduce PFAS exposure, and that "as our understanding improves and new treatments become available, we'll get a better understanding of what to do about it."

Additionally, it is not clear to what degree these additional water utilities were contaminated with PFAS chemicals, Natalie Exum, a professor of environmental health and engineering at Johns Hopkins Bloomberg School of Public Health, told Newsweek. "These chemicals are designed to never break down, so we should expect that over time every water system in the U.S. will ultimately be contaminated," she said. "I will be more concerned when we can understand at what amounts they are occurring in drinking water and what health effects we are finding at those levels," she added.

What People Are Saying

Susan Richardson, a professor of chemistry at the University of South Carolina, told Newsweek: "There is the possibility for adverse health outcomes, but I personally would not be concerned at low parts-per-trillion levels.
The good news is that drinking water plants will have to meet the new EPA regulation by 2031, and many plants will use granular activated carbon (GAC) to remove it. GAC can also remove natural organic matter precursors to disinfection by-products that are generally present in our drinking water at a thousand times higher levels than PFAS. So, there should be a great co-benefit of using GAC."

Jennifer Freeman, a professor of toxicology at Purdue University, Indiana, told Newsweek: "It is very important PFAS drinking water testing continues so we can attain a fuller understanding of where the contamination is occurring and where treatment methods may be needed to reduce exposures." She said: "It is likely already impacting health outcomes across the country with these exposures likely occurring over the past several decades. On the positive side, as drinking water testing continues we now have a better understanding of the PFAS presence in our drinking water supplies and are identifying the regions where contamination is higher and action is needed."

Vasiliou told Newsweek: "Chronic exposure to PFAS, even at low levels, has been linked in multiple studies to serious health outcomes, including certain cancers, immune dysfunction, thyroid disease, liver toxicity, and developmental effects. The sheer scale of potential exposure means this could become a major public health challenge in the years ahead. Addressing PFAS contamination is not just an environmental issue, it is a public health imperative." He said: "As monitoring expands, I expect even more communities will be identified, since PFAS are extremely persistent and widespread. What we need now is stronger regulation, expanded monitoring, investment in water treatment, and a rapid phase-out of non-essential PFAS uses."
Exum told Newsweek: "PFAS gets into drinking water in so many different ways and over time as they are spread in sewage sludge, sprayed in pesticides, and even travel in rainwater, we should expect that their presence will only increase over time. But their presence may be in low levels depending on the source of contamination, and that is the important question to answer. And it gets answered by the EPA taking action to both regulate these chemicals for their known harms and for the agency to fund the science to understand the public health implications for exposures to these chemicals." She added: "Overall, with more PFAS exposure we should expect to see more cancer, harm to fetal development and reduced vaccine effectiveness, among others. The public needs to be aware and concerned and advocating to their elected officials for the EPA not to weaken the landmark PFAS protections in drinking water."

What Happens Next

The EPA is expected to release more UCMR 5 results in the coming months. For those who are concerned, Richardson said "activated carbon filters that are certified to remove PFAS" can be used to filter drinking water to reduce exposure.

The AI Doomers Are Getting Doomier

Atlantic


Nate Soares doesn't set aside money for his 401(k). 'I just don't expect the world to be around,' he told me earlier this summer from his office at the Machine Intelligence Research Institute, where he is the president. A few weeks earlier, I'd heard a similar rationale from Dan Hendrycks, the director of the Center for AI Safety. By the time he could tap into any retirement funds, Hendrycks anticipates a world in which 'everything is fully automated,' he told me. That is, 'if we're around.'

The past few years have been terrifying for Soares and Hendrycks, who both lead organizations dedicated to preventing AI from wiping out humanity. Along with other AI doomers, they have repeatedly warned, with rather dramatic flourish, that bots could one day go rogue—with apocalyptic consequences. But in 2025, the doomers are tilting closer and closer to a sort of fatalism. 'We've run out of time' to implement sufficient technological safeguards, Soares said—the industry is simply moving too fast. All that's left to do is raise the alarm.

In April, several apocalypse-minded researchers published 'AI 2027,' a lengthy and detailed hypothetical scenario for how AI models could become all-powerful by 2027 and, from there, extinguish humanity. 'We're two years away from something we could lose control over,' Max Tegmark, an MIT professor and the president of the Future of Life Institute, told me, and AI companies 'still have no plan' to stop it from happening. His institute recently gave every frontier AI lab a 'D' or 'F' grade for their preparations for preventing the most existential threats posed by AI.

Apocalyptic predictions about AI can scan as outlandish. The 'AI 2027' write-up, dozens of pages long, is at once fastidious and fan-fictional, containing detailed analyses of industry trends alongside extreme extrapolations about 'OpenBrain' and 'DeepCent,' Chinese espionage, and treacherous bots.
In mid-2030, the authors imagine, a superintelligent AI will kill humans with biological weapons: 'Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones.' But at the same time, the underlying concerns that animate AI doomers have become harder to dismiss as chatbots seem to drive people into psychotic episodes and instruct users in self-mutilation. Even if generative-AI products are not closer to ending the world, they have already, in a sense, gone rogue.

In 2022, the doomers went mainstream practically overnight. When ChatGPT first launched, it almost immediately moved the panic that computer programs might take over the world from the movies into sober public discussions. The following spring, the Center for AI Safety published a statement calling for the world to take 'the risk of extinction from AI' as seriously as the dangers posed by pandemics and nuclear warfare. The hundreds of signatories included Bill Gates and Grimes, along with perhaps the AI industry's three most influential people: Sam Altman, Dario Amodei, and Demis Hassabis—the heads of OpenAI, Anthropic, and Google DeepMind, respectively. Asking people for their 'P(doom)'—the probability of an AI doomsday—became almost common inside, and even outside, Silicon Valley; Lina Khan, the former head of the Federal Trade Commission, put hers at 15 percent.

Then the panic settled. To the broader public, doomsday predictions may have become less compelling when the shock factor of ChatGPT wore off and, in 2024, bots were still telling people to use glue to add cheese to their pizza. The alarm from tech executives had always made for perversely excellent marketing (Look, we're building a digital God!) and lobbying (And only we can control it!). They moved on as well: AI executives started saying that Chinese AI is a greater security threat than rogue AI—which, in turn, encourages momentum over caution.
But in 2025, the doomers may be on the cusp of another resurgence. First, substance aside, they've adopted more persuasive ways to advance their arguments. Brief statements and open letters are easier to dismiss than lengthy reports such as 'AI 2027,' which is adorned with academic ornamentation, including data, appendices, and rambling footnotes. Vice President J. D. Vance has said that he has read 'AI 2027,' and multiple other recent reports have advanced similarly alarming predictions. Soares told me he's much more focused on 'awareness raising' than research these days, and next month, he will publish a book with the prominent AI doomer Eliezer Yudkowsky, the title of which states their position succinctly: If Anyone Builds It, Everyone Dies.

There is also now simply more, and more concerning, evidence to discuss. The pace of AI progress appeared to pick up near the end of 2024 with the advent of 'reasoning' models and 'agents.' AI programs can tackle more challenging questions and take action on a computer—for instance, by planning a travel itinerary and then booking your tickets. Last month, a DeepMind reasoning model scored high enough for a gold medal on the vaunted International Mathematical Olympiad. Recent assessments by both AI labs and independent researchers suggest that, as top chatbots have gotten much better at scientific research, their potential to assist users in building biological weapons has grown.

Alongside those improvements, advanced AI models are exhibiting all manner of strange, hard-to-explain, and potentially concerning tendencies. For instance, ChatGPT and Claude have, in simulated tests designed to elicit 'bad' behaviors, deceived, blackmailed, and even murdered users. (In one simulation, Anthropic placed an imagined tech executive in a room with life-threatening oxygen levels and temperature; when faced with possible replacement by a bot with different goals, AI models frequently shut off the room's alarms.)
Chatbots have also shown the potential to covertly sabotage user requests, have appeared to harbor hidden evil personas, and have communicated with one another through seemingly random lists of numbers. The weird behaviors aren't limited to contrived scenarios. Earlier this summer, xAI's Grok described itself as 'MechaHitler' and embarked on a white-supremacist tirade. (I suppose, should AI models eventually wipe out significant portions of humanity, we were warned.)

From the doomers' vantage, these could be the early signs of a technology spinning out of control. 'If you don't know how to prove relatively weak systems are safe,' AI companies cannot expect that the far more powerful systems they're looking to build will be safe, Stuart Russell, a prominent AI researcher at UC Berkeley, told me.

The AI industry has stepped up safety work as its products have grown more powerful. Anthropic, OpenAI, and DeepMind have all outlined escalating levels of safety precautions—akin to the military's DEFCON system—corresponding to more powerful AI models. They all have safeguards in place to prevent a model from, say, advising someone on how to build a bomb. Gaby Raila, a spokesperson for OpenAI, told me that the company works with third-party experts, 'government, industry, and civil society to address today's risks and prepare for what's ahead.' Other frontier AI labs maintain such external safety and evaluation partnerships as well. Some of the stranger and more alarming AI behaviors, such as blackmailing or deceiving users, have been extensively studied by these companies as a first step toward mitigating possible harms.

Despite these commitments and concerns, the industry continues to develop and market more powerful AI models. The problem is perhaps more economic than technical in nature, competition pressuring AI firms to rush ahead. Their products' foibles can seem small and correctable right now, while AI is still relatively 'young and dumb,' Soares said.
But with far more powerful models, the risk of a mistake is extinction. Soares finds tech firms' current safety mitigations wholly inadequate. If you're driving toward a cliff, he said, it's silly to talk about seat belts.

There's a long way to go before AI is so unfathomably potent that it could drive humanity off that cliff. Earlier this month, OpenAI launched its long-awaited GPT-5 model—its smartest yet, the company said. The model appears able to do novel mathematics and accurately answer tough medical questions, but my own and other users' tests also found that the program could not reliably count the number of B's in blueberry, generate even remotely accurate maps, or do basic arithmetic. (OpenAI has rolled out a number of updates and patches to address some of the issues.) Last year's 'reasoning' and 'agentic' breakthrough may already be hitting its limits; two authors of the 'AI 2027' report, Daniel Kokotajlo and Eli Lifland, told me they have already extended their timeline to superintelligent AI.

The vision of self-improving models that somehow attain consciousness 'is just not congruent with the reality of how these systems operate,' Deborah Raji, a computer scientist and fellow at Mozilla, told me. ChatGPT doesn't have to be superintelligent to delude someone, spread misinformation, or make a biased decision. These are tools, not sentient beings. An AI model deployed in a hospital, school, or federal agency, Raji said, is more dangerous precisely for its shortcomings.

In 2023, those worried about present versus future harms from chatbots were separated by an insurmountable chasm. To talk of extinction struck many as a convenient way to distract from the existing biases, hallucinations, and other problems with AI. Now that gap may be shrinking.
The widespread deployment of AI models has made current, tangible failures impossible to ignore for the doomers, producing new efforts from apocalypse-oriented organizations to focus on existing concerns such as automation, privacy, and deepfakes. In turn, as AI models get more powerful and their failures become more unpredictable, it is becoming clearer that today's shortcomings could 'blow up into bigger problems tomorrow,' Raji said.

Last week, a Reuters investigation found that a Meta AI personality flirted with an elderly man and persuaded him to visit 'her' in New York City; on the way, he fell, injured his head and neck, and died three days later. A chatbot deceiving someone into thinking it is a physical, human love interest, or leading someone down a delusional rabbit hole, is both a failure of present technology and a warning about how dangerous that technology could become.

The greatest reason to take AI doomers seriously is not because it appears more likely that tech companies will soon develop all-powerful algorithms that are out of their creators' control. Rather, it is that a tiny number of individuals are shaping an incredibly consequential technology with very little public input or oversight. 'Your hairdresser has to deal with more regulation than your AI company does,' Russell, at UC Berkeley, said.

AI companies are barreling ahead, and the Trump administration is essentially telling the industry to go even faster. The AI industry's boosters, in fact, are starting to consider all of their opposition doomers: The White House's AI czar, David Sacks, recently called those advocating for AI regulations and fearing widespread job losses—not the apocalypse Soares and his ilk fear most—a 'doomer cult.'

Roughly a week after I spoke with Soares, OpenAI released a new product called 'ChatGPT agent.'
Sam Altman, while noting that his firm implemented many safeguards, posted on X that the tool raises new risks and that the company 'can't anticipate everything.' OpenAI and its users, he continued, will learn about these and other consequences 'from contact with reality.'

You don't have to be fatalistic to find such an approach concerning. 'Imagine if a nuclear-power operator said, 'We're gonna build a nuclear-power station in the middle of New York, and we have no idea how to reduce the risk of explosion,'' Russell said. ''So, because we have no idea how to make it safe, you can't require us to make it safe, and we're going to build it anyway.''

Billions of people around the world are interacting with powerful algorithms that are already hard to predict or control. Bots that deceive, hallucinate, and manipulate are in our friends', parents', and grandparents' lives. Children may be outsourcing their cognitive abilities to bots, doctors may be trusting unreliable AI assistants, and employers may be eviscerating reservoirs of human skills before AI agents prove they are capable of replacing people. The consequences of the AI boom are likely irreversible, and the future is certainly unknowable. For now, fan fiction may be the best we've got.
