
Two-Step Approach Cuts HFpEF Diagnostic Complexity
Assessing left atrial volume and natriuretic peptides (LA/NP) can identify heart failure with preserved ejection fraction (HFpEF) with 88% specificity and a 97% positive predictive value. The strategy reduces the need for additional diagnostics by cutting the number of intermediate scores on the HFA-PEFF (Heart Failure Association pre-test assessment, echocardiography and natriuretic peptide, functional testing, final etiology) or H₂FPEF (heavy, hypertensive on two or more drugs, atrial fibrillation, pulmonary hypertension, elder age > 60 years, elevated filling pressures) algorithms by 27%-56%.
METHODOLOGY:
Researchers developed the diagnostic approach to rule in HFpEF using LA volume indexed for height² (LAViH²; cut-off above 35.5 mL/m² in sinus rhythm or above 38.6 mL/m² in atrial fibrillation) and natriuretic peptides (as per the HFA-PEFF major criterion) with data from 443 patients with suspected HFpEF, and validated it in two independent cohorts.
End-systolic LA volume was manually traced in echocardiographic apical four- and two-chamber views and indexed for both body surface area and height², with height²-indexed values showing better diagnostic performance in patients with obesity.
Researchers developed the simplified approach by determining abnormal values for each measure of LA based on the highest value in control individuals, stratified by sinus rhythm/atrial fibrillation, and using elevated natriuretic peptides based on the HFA-PEFF major criterion.
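To make the decision rule concrete, below is a minimal Python sketch of the first (LA/NP) step as described above. The rhythm-specific LAViH² cut-offs are those reported in the study summary; the function name, input format, and the representation of the natriuretic peptide criterion as a simple yes/no flag (met vs not met per the HFA-PEFF major criterion) are illustrative assumptions rather than the authors' implementation.

def la_np_rule_in(lavih2_ml_per_m2, atrial_fibrillation, np_meets_hfa_peff_major):
    """Step 1 (LA/NP screen): return whether HFpEF can be ruled in directly."""
    # Rhythm-specific LAViH2 cut-offs reported in the study summary (mL/m2)
    la_cutoff = 38.6 if atrial_fibrillation else 35.5
    if lavih2_ml_per_m2 > la_cutoff and np_meets_hfa_peff_major:
        return "HFpEF ruled in (no further diagnostics needed)"
    # Step 2: fall back to the full HFA-PEFF or H2FPEF algorithm
    return "indeterminate: apply HFA-PEFF or H2FPEF algorithm"

# Example: patient in sinus rhythm with LAViH2 of 40.2 mL/m2 and
# natriuretic peptides meeting the HFA-PEFF major criterion
print(la_np_rule_in(40.2, atrial_fibrillation=False, np_meets_hfa_peff_major=True))

In the two-step strategy, only patients not ruled in by this screen would proceed to full HFA-PEFF or H₂FPEF scoring, which is where the reported reduction in intermediate scores comes from.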
TAKEAWAY:
The LA/NP approach identified 60% of patients with HFpEF, with 88% specificity and a 97% positive predictive value in the derivation cohort, and showed similar results in the validation cohorts (76%-80% specificity, 92%-97% positive predictive value).
The validation cohorts confirmed the performance of the LA/NP approach, with a 21%-57% reduction in intermediate scores, demonstrating consistent diagnostic accuracy across different clinical HFpEF profiles.
Replacing LAViH² with LA reservoir strain showed comparable results, suggesting flexibility in the echocardiographic parameters that can be used in this simplified diagnostic approach.
IN PRACTICE:
'Using the LA/NP approach as a first step in patients suspected for HFpEF before using the HFA-PEFF or H₂FPEF algorithm as a second step may substantially reduce the need for additional diagnostics to diagnose HFpEF,' the researchers wrote.
SOURCE:
The study was led by Jerremy Weerts, MSc, MD, of Maastricht University Medical Center in Maastricht, the Netherlands. It was published online in European Journal of Heart Failure and presented at the Heart Failure Association of the European Society of Cardiology (HFA-ESC) 2025 meeting.
LIMITATIONS:
The analyses were performed retrospectively in three independent, prospective cohorts from university hospitals, each with a high prevalence of diagnosed HFpEF, which may affect the performance of the LA/NP approach in less selected populations. The use of different natriuretic peptide assays across cohorts limited the derivation of new cut-off values for the LA/NP approach. Right heart catheterization was not performed in all patients, although this reflects daily clinical practice and aligns with large clinical trials in HFpEF.
DISCLOSURES:
Weerts reported receiving grants from Corvia Medical, CSL Vifor, and Boehringer Ingelheim, unrelated to the submitted work. The study was supported by the Dutch Heart Foundation (grant numbers CVON2017-21-SHE PREDICTS HF and CVON2015-10-Early HFpEF) and the Health Foundation Limburg. Additional disclosures are noted in the original article.