May 30 2025 This Week in Cardiology

Please note that the text below is not a full transcript and has not been copyedited. For more insight and commentary on these stories, subscribe to the This Week in Cardiology podcast, download the Medscape app or subscribe on Apple Podcasts, Spotify, or your preferred podcast provider. This podcast is intended for healthcare professionals only.

In This Week's Podcast

For the week ending May 30, 2025, John Mandrola, MD, comments on the following topics: listener feedback, CRT vs CSP, important clues on the ECG, beta-blocker interruption after myocardial infarction, novel approaches to LDL-C lowering, and ICD decisions in cardiac sarcoidosis.

Dr Steve Dickson, a heart failure cardiologist from Kansas City, writes via email to push back on my skepticism about rapid initiation of guideline-directed medical therapy (GDMT) in heart failure (HF). Dr Dickson says I am right about the STRONG-HF trial, which used cardiologists for the majority of follow-up visits. And while cardiologist-led care may currently be the case in many clinics across the country, he writes, times are changing. Hospitalists, primary care practitioners (PCPs), and advanced practice practitioners are becoming more comfortable with these life-saving medication classes. The future of heart failure care rests not just with cardiologists but also with internal medicine, where there are more reinforcements. Dr Dickson says that in his clinic they run a '4 Weeks 4 Meds' program for patients with heart failure with reduced ejection fraction (HFrEF), mainly through nurse practitioners whom he guides and manages. He says it's 'quite aggressive but I can assure you my CEO loves seeing his readmissions drop.'

My comment: I fully support programs like '4 Weeks 4 Meds.' It's a great idea, but of course it takes administrative support and incentives. And, I would add, this is a great use of advanced practice clinicians. I say congratulations, and I hope more programs like this become a reality. Clearly, HF care would improve if they did. The obvious advantage of such a program is not only getting more patients on max GDMT but also avoiding harm from overzealous GDMT titration.

Cardiac Resynchronization Therapy vs Conduction System Pacing – CONSYST-CRT

As many of you know, because I have said it so many times, the most exciting thing in EP right now is not ablation but pacing, specifically, conduction system pacing (CSP). I saw a tweet this week saying that ablation has become more anatomic and rote, and pacing is becoming more physiologic. It used to be the opposite. In an ablation, we would map and study electrophysiology, and in pacing we would simply flop a catheter in the right ventricle (RV) and screw it in wherever it fell. Now in ablation we put in a pulsed field ablation (PFA) catheter and boom. E-grams are gone. PFA is ruining EP. Now, during a pacemaker we meticulously record and pace in areas of the conduction system. Six or more years ago, we were doing it with the His bundle, and now it's the left bundle. To do CSP is to bathe yourself in beautiful physiology. But beautiful images of CSP on an ECG are not evidence. What we need is evidence that it is superior to biventricular (BiV) pacing for the treatment of patients with heart failure and left bundle branch block (LBBB).
The thing is, for BiV pacing we have strong evidence, from multiple trials, that it reduces hard outcomes, like mortality, compared to standard RV implantable cardioverter-defibrillators (ICDs) or medical therapy. We think, emphasis on think, that because CSP narrows the QRS, and some smaller studies have shown improved hemodynamics with CSP, it is similar to, or superior to, BiV pacing. We need trials. Well, I am a local principal investigator here at my hospital for the US government-funded Left vs Left trial, which will have death and hospitalization for heart failure (HHF) as a primary endpoint. Planned enrollment is 2100 patients and follow-up is for 5 years. That's a long time to wait.

This month, JACC EP has published another attempt to compare CSP to BiV pacing in CRT-eligible patients. Spanish investigators have now published the CONSYST-CRT trial. This was a randomized controlled trial (RCT) of just 134 patients of CSP vs BiV pacing. The design was non-inferiority of CSP. The primary endpoint was a composite of many things: death, transplant, HHF, or a left ventricular ejection fraction (LVEF) improvement of less than 5% at only 12 months. I hope you are seeing the problems; I will go over the issues in the comments.

First finding: 18 of 67 patients crossed over from CSP to BiV pacing. Second finding: 23.9% in the CSP group had a primary outcome event vs 29.8% in the BiV pacing group. The absolute risk difference was –5.9%. The confidence interval went from –21.2%, which is far better with CSP, to 9.2% worse for CSP. The accepted non-inferiority margin was 10% worse, and since 9.2% is less than 10%, this allowed a claim of non-inferiority with a P value of .02. Secondary endpoints also favored CSP. QRS narrowing was better in the CSP arm. Echo response was better, as was NYHA functional class improvement. The combined endpoint of death, transplant, and HHF also showed non-inferiority, though I could not find confidence intervals on the difference. The authors concluded that CSP was non-inferior to BiV pacing in achieving clinical and echocardiographic response, suggesting that CSP could be an alternative to BiV pacing.

I laud trials and clinicians who attempt them. CSP vs BiV pacing is an important question. But if you are going to experiment on people, you need to have a chance of telling a difference. There is no way this trial had a chance, with only 134 patients (and 18 of the 67 CSP patients crossing over), to detect signal from noise. There were 16 vs 20 primary outcome events, respectively. There were 1 vs 3 deaths, 0 vs 1 transplants, and 8 vs 10 HHF. The most common outcome was failing to improve an ejection fraction (EF), which is far from a hard clinical outcome. That is a tiny number of events. Not only was the composite endpoint problematic, but follow-up for a year is totally inadequate. Left vs Left is asking the same question and enrolling 2100 patients, roughly 15 times as many. It would have been fine to compare hemodynamics, but we have oodles of these studies showing similar effects on EF with CSP. I mean no malice to the Spanish team, and to be sure, I was part of the His-SYNC trial, another woefully underpowered trial, but this type of effort tells us little to nothing about the long-term effects of the two strategies. Underpowered trials, I think, should be avoided.
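To make the non-inferiority arithmetic concrete, here is a minimal sketch (in Python) that reconstructs an approximate Wald confidence interval for the risk difference from the published event counts, assuming 67 patients per arm (16 vs 20 events), and checks it against the 10% margin. The trialists' exact statistical method may differ, so treat the output as illustrative only.

```python
from math import sqrt

# Published primary outcome events, assuming 67 patients per arm (illustrative reconstruction)
events_csp, n_csp = 16, 67     # about 23.9% with CSP
events_biv, n_biv = 20, 67     # about 29.8% with BiV pacing

p_csp = events_csp / n_csp
p_biv = events_biv / n_biv
risk_diff = p_csp - p_biv      # negative favors CSP

# Wald standard error for a difference of two proportions
se = sqrt(p_csp * (1 - p_csp) / n_csp + p_biv * (1 - p_biv) / n_biv)
ci_low, ci_high = risk_diff - 1.96 * se, risk_diff + 1.96 * se

margin = 0.10  # prespecified non-inferiority margin (10 percentage points)
print(f"Risk difference: {risk_diff:+.1%} (95% CI {ci_low:+.1%} to {ci_high:+.1%})")
print("Non-inferior" if ci_high < margin else "Non-inferiority not shown")
```

Running this gives roughly –6.0% (–21% to +9%), close to the published interval. The point is visible in the numbers: with only 36 total events, the interval is wide enough to include anything from a large benefit to a harm that nearly reaches the margin.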
Since the beauty of the ECG is what got me interested in cardiology, I am always on the lookout for studies on the ECG.

A UCSF group reports a research letter in Circulation EP on the association between a fragmented QRS, myocardial fibrosis burden, and autopsy-defined arrhythmic causes among presumed sudden cardiac deaths (SCD). QRS complexes should be sharp and narrow. There should be no notches or fragments. This is a report from the ongoing project called the POST SCD, or 'Postmortem Systematic Investigation of Sudden Cardiac Death,' study. The idea is to use autopsy and clinical data to adjudicate arrhythmic (potentially treatable with an ICD) vs nonarrhythmic death (overdose, pulmonary embolism, stroke, etc). So far, the team has had 943 presumed SCDs, of which 402 had ECGs before death. And 98 (or about 1 in 4) of these had histological data. Fragmented QRS was defined as more than 1 notch for a QRS <120 milliseconds (ms) and more than 2 notches if the QRS was ≥120 ms, and the fragmentation had to be present in at least 2 leads.

The findings of this research letter were: Presumed SCDs with a fragmented QRS in more than 1 lead were more likely to have arrhythmic causes compared with presumed SCDs without fragmented QRS (76% vs 55%; odds ratio, 2.6 [95% CI, 1.11–6.5]; P = .036). No other ECG marker was associated with higher odds of arrhythmic death. Coronary artery disease (CAD) was the most common cause of fragmented QRS. Presumed SCDs with fragmented QRS in more than 1 lead had higher mean total and replacement fibrosis, as well as more interstitial fibrosis burden, than presumed SCDs without fragmented QRS (P < .05 for all).

I cover this imperfect study because the value of the ECG is under-recognized. I say imperfect because only half the presumed SCDs had ECGs before death, and only a fraction of these had histological samples. What's more, the differences in associations were statistically significant, but the positive and negative predictive value of fragmented QRS is surely low. But I want to consider this a public service announcement to my colleagues in cardiology. When you are sorting out whether a patient (say with syncope or near syncope) has a worrisome structural heart condition, study the QRS. When someone is telling me the story of syncope, my first question after the story is: what does the QRS look like? Fragmented QRS is an electrical manifestation of disordered ventricular conduction, which is often due to scar, and disordered conduction, aka anisotropy, sets the stage for reentry. Recall one of the fundamentals of cardiology: reentry requires three things: two pathways, delayed conduction, and unidirectional block. Scar sets the stage because there, all three are possible. In the same way that atrial structural disease increases the odds that a PAC fibrillates the atria, ventricular structural disease makes it more likely that a PVC fibrillates the ventricle. No, I am not saying that every notched, fragmented QRS is malignant, but it's a clue; it's one piece of data. Don't overreact, but don't ignore the QRS.

EHJ has a substudy from the French ABYSS trial of beta-blocker (BB) interruption vs continuation after myocardial infarction (MI). The substudy purported to show 'spikes' in HR and blood pressure (BP) after BB discontinuation, and that this explains why the ABYSS study failed to show that stopping beta-blocker therapy could be safely done in stable post-MI patients. None of this is correct. None. And I want to correct the record about ABYSS, again. In my mind, ABYSS is entirely consistent with the REDUCE-AMI trial of not using BB in the post-MI patient.
REDUCE-AMI found that not giving long-term BB resulted in similar major adverse cardiovascular events (MACE) outcomes compared with standard BB in post-MI patients with a normal EF. The main ABYSS trial, published in NEJM in August 2024, compared interruption vs continuation of BB in patients who had a previous MI, an EF > 40%, and an MI at least 6 months before randomization. The median time from MI to randomization was about 3 years. ABYSS enrolled about 3700 patients. It had a non-inferiority (NI) design with a 4-point composite endpoint of MI, stroke, death, or hospitalization for cardiac reasons. The NI margin was a difference of 3 percentage points for the upper bound of the 95% CI.

The results were that interruption was not only not non-inferior to continuation but actually inferior to continuation. The numbers: 23.8% primary outcome events in the interruption group vs 21.1% in the continuation group, an absolute risk increase of 2.8% with a 95% CI that went from 0.1 to 5.5. Since 5.5% was greater than the NI margin of 3%, NI was not met. And when you looked at the CI, the entire 95% CI was above 0 for the interruption group, so interruption was actually inferior.

But, but. The entire composite was driven by hospitalization for cardiac reasons — 349 vs 307. There were no differences in death, no difference in MI, and no difference in stroke. The major MACE outcomes were nearly identical. The only reason ABYSS came out against interruption was a surrogate outcome requiring adjudication of hospitalization. The less biased outcomes of death, stroke, and MI were identical.

Now to the EHJ substudy, which looked at changes in BP and HR from baseline to post-randomization in the two groups. They also assessed the impact of the changes in HR and BP on the primary endpoint for the prespecified subgroups of patients with or without a history of hypertension. With little to no surprise, the BB interruption group sustained an increase in heart rate (about 10 bpm) and systolic blood pressure (SBP) (3.7 mmHg). These deltas remained stable after discontinuation over months. With regard to the hypertension (HTN) and no-HTN groups, which were almost a 50-50 split, there were largely similar effects: an increase in heart rate and SBP. Also not surprising was that the primary outcome occurred more often in patients with HTN than in those without HTN, showing that HTN patients are higher risk. I am not sure we needed that analysis because HTN is a known risk factor. The observed harm (higher rate of the primary outcome) was numerically higher in the HTN group; the difference was not statistically significant. In other words, there was no evidence of a heterogenous treatment effect for HTN or no HTN. But remember, 'harm' as described by the primary outcome in ABYSS is flawed because it was driven only by CV hospitalizations. In Table 2, the authors show the results of the components of the primary outcome in the BB interruption and continuation arms based on the subgroup of HTN — and for MI, stroke, and death, the P values for interaction were close to 1, as in absolutely no difference.

The authors concluded: Interruption of beta-blocker treatment after an uncomplicated MI led to a sustained increase in BP and HR, with potentially deleterious effects on outcomes, especially in patients with history of hypertension.

My friends, the ABYSS trial was a perfectly nice trial. It addressed an important question. The problem was the choice of endpoint.
When you look at MI, stroke, and death, BB interruption was totally fine. This substudy purporting to show why ABYSS found harm is flawed because the main trial chose a flawed endpoint. Further, finding that HR and BP go up slightly when a BB is stopped is like finding out you get wet when walking in the rain. BBs reduce HR and BP. When you stop them, HR and BP go up a bit. Since the main trial found no increase in MACE, we can say that that small rise in HR and BP does not have deleterious effects. The most we can take from this study is that there does not seem to be a heterogenous treatment effect for the presence of HTN. I think we can stop BB in patients with a history of MI if they have a normal EF and no other reason to take a BB.

The one caveat to my argument that death, MI, and stroke were not different is that the trial was a little underpowered for those outcomes. For the composite of death, MI, and stroke, the hazard ratio was 0.96 in the interruption vs continuation arm, but the confidence interval went from 0.74 to 1.24. So, interruption could be 26% better or 24% worse than continuation. And perhaps that is why the authors decided to add CV hospitalizations. Yet when you combine the data from REDUCE-AMI and recent observational data from Denmark, the ABYSS finding of no increase in MACE after BB interruption supports the idea that interruption of BB after MI is reasonable and safe. And, of course, less medicine is better.

Is Lifelong LDL-C Lowering Within Reach?

JACC has published a phase 2 dose-ranging study with another oral PCSK9 inhibitor, called AZD0780. In other words, it doesn't have a brand name yet. The drug has a novel mechanism, which I won't delve into at this point, but it did achieve a dose-dependent drop in LDL cholesterol (LDL-C). The max dose of 30 mg achieved a 50% placebo-adjusted reduction of LDL-C; all patients were on statins but had LDL-C > 70 mg/dL. No adverse effects were noted, but the study was quite short at only 12 weeks.

The accompanying editorial was quite good. I learned a lot from it. First, there are multiple oral PCSK9 inhibitors in development. I think this is a good thing because pills are better than shots. I say that because shots increase the work of being a patient. There is just more inertia and insurance restrictions and effort required to take a shot, however infrequently. The editorialists also discussed the pricing dilemma and the role of pharmacy benefit managers (PBMs), whose rebate-based pricing models often dictate drug availability. To be honest, I don't quite understand PBMs well enough to explain them, but I can say that I don't get the feeling they contribute much to the health of US citizens.

There is also a potential — and I emphasize the potential part — for permanent inhibition of the PCSK9 complex via CRISPR/Cas9-mediated gene editing. Dr Sekar Kathiresan's company Verve Therapeutics is working on that, and it has progressed to early-stage human trials. The one-and-done approach to LDL-C management is obviously a high-reward, high-risk strategy, and there will need to be rigorous safety data before this could become a reality. Two safety issues have arisen: in one trial, a patient developed very high hepatic enzymes, and in another trial, two patients had serious cardiac events — not felt to be due to the intervention. So we shall see.
There would have to be a heck of a lot more data on safety before I signed up to permanently change my genes — especially when you can accomplish the same thing with a tiny daily pill. I know that I sound old saying this, but if you stop and think, it's quite shocking that such an approach could be close to becoming a possibility. Editing genes so as to reduce LDL-C, for life. Wow.

One of the toughest calls in EP is deciding on ICD treatment of patients with suspected cardiac sarcoid. It's hard for many reasons, not least because the diagnosis of cardiac sarcoid, and its risk stratification, is difficult. European Heart Journal has published a nice study from a Dutch center and two American centers. They had about 1500 patients with biopsy-proven sarcoid, mostly non-cardiac, who had not had ventricular tachycardia (VT) and therefore were being considered only for a primary prevention ICD. They then compared outcomes based on multiple ways to risk stratify. I, for instance, did not know that there were multiple professional society recommendations for an ICD. There is the 2014 Heart Rhythm Society (HRS) statement, the 2017 AHA/ACC guideline, and the 2022 European Society of Cardiology (ESC) guideline. And they're all a little different.

What the Dutch and American authors propose is that cardiovascular magnetic resonance imaging (CMR) phenotyping is better. CMR phenotyping sounded tricky, but they made it seem easy. You are either CMR high or low risk. It's based on EF and 'pathology-frequent LGE.' There were four CMR phenotypes: (1) no late gadolinium enhancement (LGE) and normal LVEF, (2) no LGE and abnormal LVEF, (3) pathology-frequent LGE, and (4) pathology-rare LGE. Pathology-frequent LGE included at least one of four features: sub-epicardial, multifocal, septal, or right ventricular free wall involvement. They then dichotomized the CMR phenotypes into high-risk (pathology-frequent LGE) and low-risk (no LGE and normal LVEF, no LGE and abnormal LVEF, or pathology-rare LGE) phenotypes, as in the sketch below.

The next step was to follow the 1500 or so patients over 5-10 years for VT events. Of note, most were young, average age 54 years. Let me stop there and say: when you have a 50-something-year-old with heart block, before putting in a pacemaker, stop and think about sarcoid. Do a CMR first.

The first finding was that when an ICD was indicated based on either the society recommendations or the CMR phenotype, the likelihood of a ventricular arrhythmia (VA) event at 10 years was high, ranging from 20% to 35% for the high-risk CMR group. When an ICD was not indicated by either the society recommendations or the CMR phenotype, the rates of VA were low, ranging from 5% with the HRS statement to 2.6% with CMR low risk. The key findings were seen in Figure 3: the dichotomized CMR method had the best area under the curve (AUC). At 5 years, it was 0.86 for VT events. That's pretty darn good, and statistically better than each of the professional society criteria for ICD.

The problem, of course, is that prediction of ICD need is imperfect. A subgroup of the 1500, about 8% or 119 patients, received a loop recorder. About a third had high-risk CMR and two thirds had low-risk CMR. In the CMR high-risk group, 4 of 38 patients had VT (about 4% per year); 1 of 81 patients in the CMR low-risk group had a VA, for a 0.4% per year risk. This illustrates the problem of prediction: most high-risk patients do not have a VT event, and not every low-risk patient remains free of VT. The same is true for ischemic and non-ischemic HF patients. I do like the CMR approach, though.
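Here is a minimal sketch of the dichotomization rule as I read it from the paper: high risk means pathology-frequent LGE (any of the four LGE features), and everything else, including no LGE with normal or abnormal LVEF and pathology-rare LGE, is low risk. The function and argument names are mine, for illustration only; they are not from the paper.

```python
def cmr_risk_phenotype(lge_present: bool,
                       subepicardial: bool = False,
                       multifocal: bool = False,
                       septal: bool = False,
                       rv_free_wall: bool = False) -> str:
    """Dichotomize the four CMR phenotypes into high vs low risk.

    High risk = pathology-frequent LGE (at least one of the four features).
    Low risk  = no LGE (regardless of LVEF) or pathology-rare LGE.
    """
    if lge_present and any([subepicardial, multifocal, septal, rv_free_wall]):
        return "high risk (pathology-frequent LGE)"
    return "low risk"

# Example: a patient with heart block and septal LGE would be flagged high risk
print(cmr_risk_phenotype(lge_present=True, septal=True))
```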
And this study is quite nice, as there were more than 1000 patients, all with biopsy-proven sarcoid, good follow-up, and centrally adjudicated, blinded CMR. There were — as always — limitations. Counting ICD therapy as a VT event is not always a surrogate for SCD: just because a person has VT therapy (shock or anti-tachycardia pacing) does not mean it was aborted SCD. Second, patients without ICDs in this study may have had VT that went undetected. To me, if we had some replication or confirmation of such a simple CMR high- or low-risk strategy, it would seem best, realizing of course the imperfect nature of prediction of VA events. The things to keep in mind with ICD therapy are that a) it is not an insurance policy; insurance policies confer no risk to the policy holder, and ICDs surely do; and b) you always want to deploy an ICD in a patient with a high risk of the primary outcome (the VA) and a low risk of competing risks, such as severe HF, dementia, chronic kidney disease, or a host of diseases seen in the elderly. The 54-year-old patient with heart block, LGE, and reduced EF seems like an ideal patient for an ICD.

May 23 2025 This Week in Cardiology

Please note that the text below is not a full transcript and has not been copyedited. For more insight and commentary on these stories, subscribe to the This Week in Cardiology podcast, download the Medscape app or subscribe on Apple Podcasts, Spotify, or your preferred podcast provider. This podcast is intended for healthcare professionals only.

In This Week's Podcast

For the week ending May 23, 2025, John Mandrola, MD, comments on the following topics: listener feedback on sports 'disqualification,' big digoxin news, Brugada syndrome, another positive finerenone study, and unblinded transcatheter trials.

Paul Dorian, a senior Canadian academic EP who has written extensively in the area of sports cardiology, writes via email regarding my comments on the Mayo Clinic's 'return-to-play' genetic heart disease study that I covered last week. Dorian first agrees with my comments and the ideas of the authors, who report an extremely favorable prognosis for patients with gene-positive but phenotype-negative disease. Let me quote from his email, because it is so educational.

I take issue with the phrase 'disqualification.' As sports cardiologists, we never ever 'disqualify' any athlete from competing in a sport. Disqualification should be entirely restricted to the team, organization, or governing sport entity for a particular sport. Disqualification is a legal and organizational concept. What physicians, especially sports cardiologists, can and should do is inform the patient of the best estimate of the risk of sport, specifically tailored to the severity of illness, the predicted risk of adverse events, including sudden death, the specific genotype and/or phenotype, and the type of sport and frequency, intensity, and duration of activity. Using the now well-accepted concept of shared decision-making, it is then up to the individual patient/athlete to decide whether they wish to participate in their desired sport or sports, and at what intensity and under what circumstances. Although physicians are sometimes asked to 'clear' an athlete for competition or sometimes indicate that the athlete should be 'disqualified,' this is always inappropriate and should never be done (unless the physician is representing the team or sporting organization as opposed to caring for the athlete). This, of course, does not mean that physicians should refrain from giving clear advice, including recommending against certain activities, if they feel that the risk is high enough. For context, the risk of dying per ascent of Everest is 1%. The risk of dying from hang gliding or parachute jumping is approximately 2 per 1000 participants. If we don't 'disqualify' our patients or friends from attempting Everest, or hang gliding, or parachute jumping, why would we 'disqualify' a patient with much less than 1% annual risk of death from participating in sport?

I want to thank Dr Dorian for writing. I am glad to learn from experts like him.

Digoxin News

On May 19, the European Journal of Heart Failure published the baseline characteristics of the DIGIT-HF trial. This is a placebo-controlled RCT in patients with symptomatic heart failure with reduced ejection fraction (HFrEF), with EF < 30% and Class 3-4 HF symptoms, that compares the safety and efficacy of digitoxin vs placebo in addition to baseline guideline-directed medical therapy. The authors published the rationale paper in 2019. I will link to it. The primary outcome will be death and heart failure hospitalizations (HHF).
The motivation for this trial stemmed from the old DIG trial, one of my favorites to discuss. The DIG trial, circa 1997, randomized just under 7000 patients and found no difference in mortality, which was its primary endpoint. At the time, the trial was — largely — considered a negative trial, as this was the era of angiotensin-converting enzyme (ACE) inhibitors, beta-blockers, and mineralocorticoid receptor antagonists (MRA). However, in today's terms, where almost all HF interventions fail to move mortality and only decrease surrogate endpoints, such as HHF, the DIG trial could easily be recast as a winner. For four big reasons:

One is that DIG shredded HHF (by a statistically significant 28%), on par with SGLT2 inhibitors and angiotensin receptor–neprilysin inhibitors (ARNIs) in heart failure with preserved ejection fraction (HFpEF). Two is that DIG also reduced total hospitalizations — which, in my opinion, is the only hospitalization surrogate that patients care about. Three, subgroup analyses from the DIG trial found a heterogenous treatment effect wherein most of the digoxin benefit came from patients with more advanced HF and lower EFs. Four, trial procedures allowed for open-label use of digoxin in the event of worsening HF. This occurred 8% more often in the placebo arm. Also notable about the DIG trial is that it was highly pragmatic. There was no run-in period and no digoxin levels were mandated.

Some might ask whether the DIGIT-HF trial enrollment of only 1200 patients will have enough power. I think it's a serious concern, but it's also possible that by recruiting only the sickest of the sick, event rates will be higher. Underpowered trials are terrible because it's unethical to experiment on people without hope of having enough power to sort signal from noise.

Then, right after I tweeted this out, ID doctor Todd Lee responded to me on Twitter that there is another ongoing digoxin trial for patients with HF. In August of last year, the European Journal of Heart Failure published the rationale and design paper for the Dutch-led DECISION trial. This is a double-blind, placebo-controlled RCT looking at digoxin in patients with 'chronic' HF and LVEF < 50%. The primary endpoint is cardiovascular death and HF visits, including hospitalizations and urgent visits. The sample size is 1000 patients, all of whom had been enrolled by December 2023. It's powered to find a 22% reduction in the composite endpoint. I am glad there are two trials but worry about the power of these trials.

Make no mistake, digoxin use requires care and knowledge of pharmacology, which is less common in the modern clinician. But I also strongly believe digoxin has been unfairly maligned by biased observational comparisons wherein sicker patients get digoxin, and that is why there is an 'association' with worse outcomes. I will cite a meta-analysis of all digoxin studies, first author Oliver Ziff, in the BMJ, wherein the association with digoxin harm shrinks as the robustness of the statistical methodology rises. And, in fact, there is no association of harm when only digoxin RCTs are combined. Digoxin can be an extremely useful adjunct to help patients with HF.

I don't know about you, but we get a fair number of consults to evaluate patients who are unfortunate enough to get an ECG that the computer reads as possible Brugada syndrome. Some, perhaps most, of these can simply be dismissed as misdiagnosed, because incomplete right bundle branch block is a common normal variant.
But, for patients who likely have Brugada syndrome and are asymptomatic, it's a struggle because you know there is a tiny but asymmetrically terrible risk of sudden death. Everyone agrees that implantable cardioverter-defibrillators (ICDs) should be used for secondary prevention of a second cardiac arrest, but for primary prevention in Brugada syndrome, where nothing has happened, the harms likely outweigh the benefits. If only there was a risk prediction model. You know, like the totally accurate, helpful CHA2DS2-VASc score.

Well, a paper from the group of Dr Rui da Providencia in London has subjected the many risk prediction models of Brugada syndrome to systematic review and risk of bias assessment. The first author is Daniel Gomes and it's in Europace. The first thing to say about this paper is that there are at least 11 multi-parameter risk scores for predicting major arrhythmia events in patients with Brugada syndrome. I did not know there were that many risk scores. The second main finding was that 100% of the models were assessed as having an overall high risk of bias. Third, the pooled c-statistics for each model had a lot of heterogeneity and lower discriminative power than originally reported.

The second paragraph of the authors' discussion outlines the challenge: At present, almost two thirds of patients with Brugada syndrome are asymptomatic at the time of the diagnosis, and up to 0.2%–0.6% per year will eventually develop ventricular arrhythmia or sudden cardiac death as the initial presentation. They then write that clinicians need to balance that tiny risk against an ICD complication rate of 4%–6% per year — many-fold higher. Think about it: how do you predict an event with a less than 1% incidence? We can try, but I think it's best to be super humble, calm, and reassuring in the exam room.

It turns out that there is a good list of 'general preventive measures' to go over with patients with Brugada syndrome. These include aggressive treatment of fever and avoiding dehydration and drugs that may induce ST-segment elevation in the right precordial leads (Class I anti-arrhythmics, some anesthetics, and psychotropic drugs). We can also have these patients avoid recreational substances such as cocaine, cannabis, and excessive alcohol intake. All of these things can exacerbate the type 1 pattern and trigger VF. To be fair, I am no expert in the assessment of models, as it's above my pay grade as a clinician, but I cover the paper for the same reason I covered the genetic heart disease paper last week: technology and testing have increased the number of asymptomatic people harboring Brugada syndrome and its incredibly low risk of a terrible event. The digital health revolution will bring more of these problems, not fewer.

I have seen aggressive marketing (Watchman, CardioMEMS, Entresto), but finerenone may be the champion of marketing. At the European Society of Cardiology HF meeting, and simultaneously published in the Journal of Cardiac Failure, the FINEARTS-HF authors report the results of a substudy of a small subset (~1000) of the total 6000 in the trial. I don't have to tell you the topline result because you already know it: the sky is blue and every finerenone study is positive.

First let's briefly review FINEARTS-HF: NEJM 2024. Finerenone vs placebo in 6000 patients with HFpEF. Mean age 72 years, almost half female, and the mean left ventricular ejection fraction (LVEF) was 52%.
The primary outcome was cardiovascular death (CVD) or a worsening heart failure (HF) event, which included HHF or an urgent visit for HF. The finerenone arm had a 16% lower rate of the composite endpoint, which was statistically significant. The absolute risk reduction was 2.8%, but there was no difference in CVD. Lower rates of HF events drove the positive results. Total death was also not statistically different.

The core problem, of course, was that this finerenone trial, like all finerenone regulatory trials, was compared against placebo rather than the $4 per month spironolactone tablet. Purists will say, John, there is no proof that spironolactone reduces outcomes in HFpEF. They would cite the negative TOPCAT trial, and this would be technically correct. But when TOPCAT was analyzed without the outlier countries Russia and Georgia, it was clear that spironolactone also reduced outcomes in HFpEF. Of course it does. Any doc who treats HFpEF knows that spironolactone is a secret weapon. If we had had robust regulatory authorities at FDA, they would have forced Bayer to design their regulatory trials against spironolactone. If I were at FDA, it is what I would have required. Or at least do a three-arm trial with finerenone, spironolactone, and placebo.

Anyway, the latest substudy took 1000 patients of the total 6000 who had been randomized during a HF hospitalization or shortly after. In FINEARTS, recall that the proportion of patients without a worsening ambulatory or hospitalized HF event within 3 months of randomization was prospectively capped at approximately 50% of total enrollment. The purported idea was to capture a unique cohort at risk for readmissions and to examine the effectiveness of early initiation of finerenone on short-term readmission endpoints. And you guessed it: among these patients, 30-day readmissions for HF were 1.8% vs 3.6% in those randomized to placebo. Similar results were observed when examining 60- and 90-day HF readmissions.

You can see the marketing potential. HF readmissions are an area of focus among the quality people, because they can bear on reimbursement. Now the proponents of finerenone can say, look, we have a drug that reduces HF readmissions. Come on, you all. First, this is a small subgroup of a trial with a mere 16% reduction in a composite primary endpoint of HF events, with no difference in hard outcomes like CVD or all-cause death, and a weak comparator arm. What's more, what do you think happens if you randomize one group of patients sick enough to have a HF hospitalization to an extra diuretic vs no extra diuretic? You get fewer readmissions. I have little doubt that MRA drugs are effective in all patients with HF, regardless of EF. The question is whether the non-steroidal and surely more costly finerenone is better than spironolactone or eplerenone.

I have opined often on this podcast about the use of subjective endpoints in transcatheter trials that are open-label. The problem is that one group gets an intervention, with its huge caring signal, and the other group gets no procedure. Tablets only. Probably bland white tablets. Close your eyes and picture the scene in a valve-clipping procedure from the patient's perspective. The patient meets at least two, perhaps three specialist doctors in the prep area. Then they go in the room — the procedure room — and they see massive booms, huge screens, multiple people. And even if you give them general anesthesia, the impression is one of: holy mackerel, I remember that room — this procedure has got to help me.
The control arm gets nothing but also knows they could have been randomized to the procedure. Huge problem. Well, I am delighted to tell you that Sanjay Kaul, surely one of the best evidence adjudicators in all of medicine, wrote a short editorial in the journal EuroIntervention where he persuasively argues that open-label trials are rarely adequate to support labeling claims based on patient-reported outcomes. I want to thank Sanjay for his incredible generosity. He has taught me so much over the years. He is really excellent.

In the editorial, Kaul cited a great example, one I did not know about. Perhaps you did. He contrasted two trials of hemodynamic monitoring. One was the GUIDE-HF trial of invasive hemodynamic monitors in HF patients, which used blinded assessment of quality of life (QoL). Every patient got a CardioMEMS device and its use was blinded. GUIDE-HF found no difference in the Kansas City Cardiomyopathy Questionnaire (KCCQ). But the MONITOR-HF trial, which was similar, was an open-label trial of hemodynamic monitoring, and guess what happened: it showed an improvement in the KCCQ in the patients who had the device.

The article is a tour de force argument in favor of using proper sham controls for devices. It's an important concept because the cardiology space iterates so fast. I was talking yesterday with a nurse who is my age and we were reminiscing about the old vs new devices. One depressing feature of this story is that it would be super-easy to do blinded trials. For the tricuspid valve interventions, you simply sedate the patient and put a catheter in the femoral vein and you have a sham control procedure. Then we would know.

Whenever you think a placebo procedure is unethical or risky — we can't do that, Mandrola — I ask you to think about two things: first, think about the counterfactual where we may still be doing left internal mammary artery (LIMA) ligation or transmyocardial revascularization (TMR) if not for sham-controlled trials showing that they weren't effective. And the second thing to think about is to go look at the angiograms in the Lancet supplement of the ORBITA trial. Look at those angiograms. Half of these scary lesions were initially treated with nothing. That should reassure you that sham controls are possible.

Finally, perhaps the most depressing part of the unblinded transcatheter trials is that I know, and you know, that the investigators know that blinding is necessary. Medical investigators are supposed to be scientists and clinicians. Scientists should not need regulators to push them to do proper trials. If they're scientists, they should just do proper trials. And as clinicians they use the placebo effect nearly every day in the office, so they know that proper sham controls are needed from the clinical perspective. Yet they did not do it. I'm just sad about this whole thing.

May 16 2025 This Week in Cardiology

Please note that the text below is not a full transcript and has not been copyedited. For more insight and commentary on these stories, subscribe to the This Week in Cardiology podcast, download the Medscape app or subscribe on Apple Podcasts, Spotify, or your preferred podcast provider. This podcast is intended for healthcare professionals only.

In This Week's Podcast

For the week ending May 16, 2025, John Mandrola, MD, comments on the following topics: the BedMed trial of nighttime BP meds, SURMOUNT-5, the troponin URL, gene tests in patients with no disease, and GDMT for heart failure.

JAMA has published what I hope is the last of the trials of timing of blood pressure (BP) meds. Here is a brief rundown of the history. The MAPEC trial, conducted between 2000-2009, found a 61% reduction of MACE favoring bedtime meds. This was published in a journal called Chronobiology International. I know. I had never heard of it either. Then the same group from Spain conducted the Hygia trial of 19,000 patients with hypertension, and they reported a 45% reduction in major adverse cardiovascular events (MACE) with bedtime BP meds. EHJ published this trial in 2020, and it set off a storm of controversy. EHJ editors even issued a formal 'Expression of Concern' letter. Ricky Turgeon and Canadian colleagues, writing in the AHA journal Hypertension, noted that the main concern (of many) with Hygia was that nighttime BP meds also reduced the risk of non-CV death. This, they argued, was a serious outlier, as no BP trial had ever shown a reduction in non-CV death. Further, Turgeon noted that it was unclear whether Hygia was truly randomized or whether allocation was concealed. Then came the TIME trial, which enrolled 21,000 patients with hypertension in the UK and found no difference in MACE depending on timing of meds. The Lancet published this study in 2022. The BedMed trial, published this month, was a study of 3500 Canadian patients with hypertension, recruited from primary care clinics. Half got morning meds, and the other half took their meds at night. There was absolutely no difference in MACE outcomes over a nearly 5-year follow-up.

Comments

I believe the TIME and BedMed trials disprove the findings from the single group in Spain. My reasoning is simple: findings of massive MACE reductions simply by changing the time of day of BP meds are implausible. I especially struggle with Hygia's reduction in non-CV death. Then, in two separate geographies, two independent groups find no benefit. You go with the plausible two trials. Another lesson from this story is the matter of open data. Here, I don't single out the Hygia authors, because I feel that all randomized controlled trials (RCTs) whose results could change practice ought to be submitted with source data so that they can be independently verified. Think of the time and money that it took to disprove implausible findings. TIME enrolled 21,000 patients and BedMed enrolled 3500, and both trials followed patients for approximately 5 years. Trials with massive effect sizes deserve both attention and scrutiny. I hope the new leaders at NIH and FDA emphasize replication of studies. For our patients, it is simple: it does not matter when you take your meds. Just take them todos los días (every day).

NEJM published the SURMOUNT-5 trial of tirzepatide vs semaglutide for weight loss. The manuscript is long. The statistical methods section is 3 paragraphs; the discussion is also many paragraphs. I don't understand why this is so.
Patients were nondiabetic, age 45 years, two-thirds female, and BMI of 39. One group gets tirzepatide, the other semaglutide, and the primary endpoint was change in weight from baseline. Tirzepatide easily won. Over the 72 weeks of the trial, those in the tirzepatide group lost 20% of their body weight vs -13.7% in the semaglutide group. Those in the tirzepatide group were more likely than those in the semaglutide group to have reductions of at least 10%, 20%, and 25%. Secondary endpoints, such as cardiometabolic risk factors (BP, glucose) were also lower in tirzepatide group. Systolic blood pressure (SBP) reduction was 10.2 mm Hg vs 7.7 mm Hg in the tirzepatide vs semaglutide groups. Safety signals were similar. As most know, tirzepatide is a dual GIP and GLP-1 agonist. Whatever is the mechanism, and the authors speculate a lot, tirzepatide induces more weight loss with a similar safety signal. Semaglutide has been shown to reduce more than weight; in the SELECT trial of patients with established ASCVD, semaglutide reduced MACE outcomes by about 20% relative to placebo. SURMOUNT-5 authors tell us there is an ongoing CV outcomes trial with tirzepatide. I would speculate that the CV-disease modifying effect is a class effect. Purists may say we need more data because weight loss is a surrogate measure. I don't know; the more weight loss we get, especially when starting at a BMI of 39, the better. SBP was lower with more weight loss. One caveat was the trial was open label. Which is weird, because surely the sponsor, Eli Lilly, could afford proper blinding. I guess it's possible that if you know you are on the stronger drug, you might do more to lose weight, regarding diet and exercise. As for clinical translation, I would reach for tirzepatide first. Why wouldn't you go for the more potent weight-loss inducer? Thing is, there will surely be more GLP-1s coming to market, and the tirzepatide lead now may soon be taken away. Irish cardiologists Mark Coyle and John McEvoy have argued persuasively in the European Heart Journal that the coming fifth universal definition of myocardial infarction (MI) should include age-specific high-sensitivity troponin (hs-troponin) upper reference limits. I did not know that another universal definition of MI was coming soon. Whenever I think of hs-troponin, the following sentence comes to mind… It should not be so, but the increasingly sensitive troponin assays make life more complicated. I say it should not be so because Andrew Foy and I have explained in JAMA Internal Medicine how to use troponins at the bedside. The teaser is that it's no different from creatine phosphokinase (CPK) and that you have to think first about whether the patient with an elevated troponin is having a plaque-rupture MI. If they are not, then you treat the underlying cause. Really, it's that simple. The two writers make a compelling case for having age-specific reference values. I am sure you all have seen older patients with troponin levels just above the reference. This too should not be so, but the red values (for abnormal) infect our brains — it does mine at least. I hate seeing red numbers in the EHR. Before I tell you their argument I want to ask a common-sense question: Why should a 75-year-old have the same normal value as a 25-year-old? It makes no sense. Coyle and McEvoy write an extremely compelling argument. It's structured, well written, and addresses both the pros and cons of having age-specific upper reference limits (URLs). You will learn a lot reading this argument. 
I will give a brief summary. They first tell us why troponin URLs are used to define myocardial injury. You need URLs because troponin levels are not standardized across the different manufacturers of troponin assays. Thus, there is a need for a common benchmark of 'abnormality' for all troponin assays. They then explain how URLs are derived. Basically, you measure troponins in a sample of normal adults. The reference population is 400 adults. Why 400? I don't know exactly; they say for 'statistical purposes.' With this group of 400 troponins, you can calculate the value at the 99th percentile. This is the URL.

With that as background, the authors tell us of efforts to derive the 99th percentile troponin levels for the US population, using longitudinal data such as NHANES. Now they are getting close to making their case. These efforts found significant differences in URL by sex. And indeed, there are sex-specific URLs in the fourth universal definition of myocardial infarction (UDMI). But they also found substantial differences by age. For instance, adults aged ≥60 years had consistently higher URLs for high-sensitivity cardiac troponin than adults aged 40–59 or 18–39 years, with a clear age gradient. There were little data on 70- and 80-year-olds, so the proposed age cutoff is over 60.

The authors write: We believe that the use of age-specific high-sensitivity cardiac troponin URLs to define myocardial injury could have major clinical advantages. Obviously, the No. 1 advantage is to avoid huge downstream workups for older adults with levels just above normal. They rightly argue that a good understanding of troponins would not lead to low-value workups for mildly abnormal lab values, but, since they are clinicians, they take a pragmatic approach and write: We think it would be more impactful to implement age-specific URLs (as was done for sex-specific troponin URLs).

The next part of their argument is brilliant. They go through, in a bullet-point-type format, 7 con arguments and rebut each. I won't go through them all. Suffice to say, they persuaded me. Their case is strong. I wish such age-related normals weren't needed; I wish every doctor who uses troponins understood the test. But since that is not so, helping us with age-specific levels is a sound idea.
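As a rough illustration of the derivation they describe, here is a minimal sketch computing a 99th-percentile URL from a healthy reference sample, overall and stratified by the proposed age cutoff. The values are simulated, not real assay data, and real URL derivation involves far more careful reference-population selection and statistical handling; the point is only that the URL is a percentile of a reference distribution, so it moves if the reference group changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated hs-troponin values (ng/L) for a 400-person healthy reference sample
ages = rng.integers(18, 80, size=400)
# Assumption for illustration only: older subjects tend to run slightly higher
troponin = rng.lognormal(mean=1.0, sigma=0.6, size=400) + 0.05 * (ages - 18)

overall_url = np.percentile(troponin, 99)            # the conventional single URL
url_under_60 = np.percentile(troponin[ages < 60], 99)  # hypothetical age-specific URLs
url_60_plus = np.percentile(troponin[ages >= 60], 99)

print(f"Overall 99th percentile URL: {overall_url:.1f} ng/L")
print(f"URL, age <60:  {url_under_60:.1f} ng/L")
print(f"URL, age >=60: {url_60_plus:.1f} ng/L")
```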
JACC EP has published a paper from the Michael Ackerman Mayo Clinic Genetic Heart Rhythm Clinic. The topic of gene-positive/phenotype-negative (G+/P–) people could not be more relevant. More and more, cascade screening of family members of a patient with genetic heart disease is picking up people with the same gene but no disease. No disease as in no QT prolongation, no arrhythmogenic right ventricular cardiomyopathy (ARVC), no hypertrophic cardiomyopathy (HCM), etc. Yet these people (notice I am using the word 'people,' not patients) are often disqualified from sport. Mayo Clinic has one of the largest genetic heart disease (GHD) clinics worldwide, and their retrospective chart reviews teach us a lot. There are few if any clinical trials in GHD. Before I tell you the results of this report on 274 G+/P– people, let me first say that I consider disqualification from sports participation one of the severest restrictions we dole out. I love sports. And I owe some of my success as a doctor to my years of team sports. Had I had a funny gene and been disqualified, it would have been quite bad.

Patients were considered G+ if a genetic test identified a pathogenic or likely pathogenic variant in an established gene associated with long QT syndrome (LQTS), HCM, catecholaminergic polymorphic ventricular tachycardia (CPVT), or arrhythmogenic cardiomyopathy (ACM). Phenotype testing was done with ECG, imaging, or exercise testing. Of the 274 total patients, most had LQTS (231, or 84%); 19 (7%) had CPVT, 15 (5%) had ACM, and a few percent had HCM. The age at diagnosis was young, 10 to 15 years for most, and the mid-20s for ACM. Most patients were discovered via family or cascade screening. Nearly 1 in 5 sought return to play (RTP) after disqualification at another institution. All but 4 (270 of 274) were allowed RTP after an evaluation or guideline-directed treatment and shared decision-making.

For LQTS, patients who were truly disease nonpenetrant were treated mostly with pharmacologic therapy (80, 72%) or an intentional nontherapy (INT) strategy (28, 25%), which consisted of avoiding QT-prolonging drugs, advice for proper hydration, acquisition of a personal automated external defibrillator, and routine follow-up visits. The small remainder of patients were treated with more invasive therapies, such as left cardiac sympathetic denervation (LCSD; 2, 1%) or an implantable cardioverter-defibrillator (3, 2%). Two of the patients (1%) with an ICD had the device placed before being seen at the Mayo clinic. For CPVT (N=19), after evaluation, most patients were treated pharmacologically (11, 58%) or with an INT strategy (8, 43%). For ACM (N=15), all patients were put on a plan of active surveillance to monitor for any evidence of phenotypic conversion or disease emergence. All cardiac imaging was normal, including cardiac MRI, and patients had no structural evidence of disease. On follow-up, 2 patients showed late gadolinium enhancement on cardiac MRI 4 and 5 years after the initial Mayo Clinic visits and were considered P+ at that point. For HCM (N=9), all patients (9, 100%) were managed with INT. They were all referred for evaluation because of a positive family history of disease.

Overall, 68% of G+/P– people received pharmacologic therapy and 27% received INT. And no patients had any disease-related cardiac events or deaths in more than 1300 years of combined follow-up. No patients. None. Nada. About 60 patients had their gene variant downgraded from pathogenic, likely pathogenic, or a variant of uncertain significance (VUS) to likely benign. These patients were not included in the 274 in this report, but this is an important data point because they were disqualified at the same rate as the patients with true pathogenic gene variants.

This is a really important paper. Really important. I get gene reports, and they are scary. Why? For two reasons: First, because no one wants a young person to have a cardiac arrest. Second, because almost no one knows much about genetic heart disease. Indeed, gene testing is about as opposite from troponin testing as it gets. Cardiologists deal with troponin levels about every day. We have a feel for troponins, often the wrong feel, but there is familiarity. Gene reports come with a bold-faced phrase 'pathogenic variant.' Our tendency is to pull the trigger and say 'no sports.' What this report tells us is that if you do a proper evaluation (history, exam, ECG, imaging, and stress test) you can identify phenotype-negative people. That is, they don't have the disease.
Then, with a treatment and follow-up plan set forth, often with no treatment at all, these people (not patients) can live normal lives and participate in sports. The authors write with caution: In fact, the purpose of this study is not to encourage physicians to disregard a positive genetic test, but to use it as a tool to further risk-stratify and guide management and treatment of the patient, and to not automatically disqualify from exercise. For instance, for the INT strategy the authors write that 'even though no prescription is given or intervention performed, avoidance of QT-prolonging drugs, being proactive with electrolyte replenishment and hydration, monitoring other variables that could lead to an event (ie, lack of sleep, excess caffeine, postpartum period), and attending frequent follow-ups are still expected.'

I realize you (and I) will not become experts in genetic heart disease from reading this paper. But reading this paper should infuse us with perspective. That is, genes are not fate. Genotype is not phenotype. Disqualification is severe. And when we find gene-positive people, the answer is not to scare people and disqualify them. The answer is to read, research the problem, and, as I often do, phone a friend. Then, once you have done that, have a discussion with the person and their family. I want to say thank you to Michael Ackerman and his team. They have taught me, and the entire field of cardiology, tons about genetic heart disease.

STRONG-HF: More Beats Less After Discharge for Heart Failure

Here we go again with rushing guideline-directed medical therapy (GDMT). JACC-HF has published a research letter in which numerous authors performed a post-hoc analysis of the STRONG-HF trial. The main study question was to quantify how many days free of death or heart failure hospitalization (HHF) over 6 months are gained with rapid uptitration of GDMT after an admission for HF. The secondary aim of the study was to promote the endpoint of RMST, or restricted mean survival time, and RMSTD, or restricted mean survival time difference, which the authors say provides a direct measure of clinical benefit and can complement the hazard ratio (HR). It's interesting because in years past, trial reports often included such a measure. For instance, in the AVID trial of the ICD vs antiarrhythmic drugs (AAD) for VT, the average unadjusted length of additional life associated with ICD therapy over AAD was 2.7 months at 3 years. (I don't know why authors stopped including such data points, but one theory is that it seriously minimized the differences.)

Back to the STRONG-HF substudy and the matter of 'rapid uptitration of GDMT.' STRONG-HF was published in 2022. It enrolled about 1600 patients with acute heart failure who were not on optimal medications, from 87 hospitals in 14 countries. Nearly 90% were recruited from Africa and Russia. Patients were randomly assigned to a high-intensity treatment arm or usual care. The high-intensity arm was aptly named. Patients first were given half the optimal dose of heart failure meds while still hospitalized. These included all the usuals (renin-angiotensin blockers, beta-blockers, mineralocorticoid receptor antagonists). At week 1 after discharge, patients were checked for tolerance of these meds. At week 2, meds were uptitrated to full dose. The week 3 visit was used to check tolerance of full-dose meds, and then patients were checked again after 6 weeks. Notably, cardiologists performed these 'safety' visits, using only a history, exam, and measurement of basic labs.
The usual care arm was treated according to the local practice — in Africa or Russia. Obviously, the high-intensity arm did better. They had 5-fold more frequent in-person visits and far more patients taking optimal HF meds. The primary endpoint of death or HHF at 6 months was 34% lower. Readmission for heart failure drove the benefit, but all-cause death was 16% lower and cardiovascular death was 26% lower in the high-intensity arm. The trial was terminated early for benefit. Adverse events, actually, were higher in the high-intensity arm, 46% vs 29%, most commonly related to low BP, hyperkalemia, and renal issues.

I wrote about the STRONG-HF trial in 2022 and wondered if it would change hospital systems. That is, if you found a way for a cardiologist to see the patient 4 times in the 6 weeks after hospital discharge, you could improve outcomes. Well, since STRONG-HF came out, I have heard little about hospital systems changing their systems. But perhaps I was naïve back in 2022. GDMT enthusiasts used the STRONG-HF study to promote the value of rapid titration of meds. While that happened, I have now come to recognize STRONG-HF as one of the classic examples of performance bias, which infects so many strategy trials. That is, patients randomized to high-intensity care got a lot more than just rapid titration of meds. They had 5-fold more interactions with doctors and HF teams. They had more education, more follow-up, and, simply stated, more care. There is no way to tell if it was all the extra care or the meds. I can think of few examples of situations where you give one group of sick patients 5x more healthcare interactions than the other group and it doesn't matter. In fact, this is the reason EAST-AFNET 4 found benefit for early rhythm control. In EAST, there was little difference in sinus rhythm between the early rhythm control (ERC) and rate control arms, so rhythm control can't explain the better outcomes. Instead, the ERC arm had tons more healthcare encounters.

That was a long introduction to the research letter looking at days free of the primary composite of death or HHF. In STRONG-HF, it was nearly 9 days longer without an event in the high-intensity arm compared with usual care. The HR was 0.61, or a 39% reduction. Most of the benefit was driven by a reduction of HHF; the difference in death was 3.5 days, but this did not reach statistical significance. The authors then do a similar analysis, looking at extra days free of the primary endpoint based on the percentage of optimal dose of HF meds. And, of course, patients on optimal doses of HF meds did better. Each 10% increase in the percentage of optimal GDMT dose was associated with an RMSTD of +1.7 days (95% CI: +0.8 to +2.5 days; P < 0.001).

The authors are impressed. They write: Our post hoc analysis of the STRONG-HF trial highlights the magnitude of benefit from rapid GDMT uptitration in HF patients beyond reduced risk. RMST complements HR and risk differences (RDs) and can enhance the physician's ability to convey the real-world impact of interventions to patients and families. The authors write: Therefore, we believe RMST should be considered for routine inclusion among clinical trial result reporting. This can potentially bridge the knowledge gap between clinical trial results and bedside decision-making with patients and family. Unlike HRs and RDs, which address the critical question of whether a treatment works, RMST provides a tangible and intuitive answer to the question, 'How many extra days does this treatment offer?'
I like the idea of using the RMST, but for exactly the opposite reasons. I am not sure where the authors practice, or what their patients are like. But here in Kentucky, if I tell a farmer from Grayson County that spending 4 extra days of his life over the next 6 weeks driving to Louisville to see a cardiologist, running the paperwork gauntlet each time, and then taking extra meds (many of which are costly) might buy him an extra 8 days before another HHF, I am not sure he would be interested. Same with the AVID trial. Would the ICD have gained its lofty status if we had emphasized the extra 2.7 months of life over 3 years?

What do you all think? I think we should use something like RMST. I'm afraid it would make a lot of our highly valued recent 'breakthroughs' look relatively modest. This is why I love basic old pacemakers. The RMST gain from a pacemaker for heart block is about as large as it gets: the patient dies without the pacemaker and lives perhaps two decades with it. I guess the value of using RMST is an empirical question, one that could be evaluated in qualitative studies of what patients think.

May 09 2025 This Week in Cardiology

Please note that the text below is not a full transcript and has not been copyedited. For more insight and commentary on these stories, subscribe to the This Week in Cardiology podcast, download the Medscape app or subscribe on Apple Podcasts, Spotify, or your preferred podcast provider. This podcast is intended for healthcare professionals only.

In This Week's Podcast

For the week ending May 9, 2025, John Mandrola, MD, comments on the following topics: the controversial KETO-CTA study, tough decisions in subclinical AF, and potentially huge benefit for GLP-1 receptor agonists.

The journal JACC Advances published a study looking at plaque progression in people eating a ketogenic diet (KD). It stirred all sorts of controversy on social media. I will review it this week.

A few background comments. An obstacle to the broad clinical implementation of carbohydrate-restricted diets (CRDs) and the KD is the lipid changes that occur in a minority of patients upon carbohydrate restriction. Bad lipid changes. As in large increases in LDL cholesterol (LDL-C) and associated apolipoprotein B (ApoB). While there are many factors contributing to increases in LDL-C and ApoB on the KD, "leanness" seems important.

Get this: the authors cite a meta-analysis of 41 studies reporting that mean baseline BMI had a strong inverse association with LDL-C change, whereas the amount of saturated fat was not significantly associated with LDL-C change. For trials with a mean baseline BMI <25, LDL cholesterol increased by 41 mg/dL (95% CI, 19.6-63.3) on the low-carbohydrate diet (LCD). By contrast, for trials with a mean BMI of 25 to less than 35, LDL cholesterol did not change, and for trials with a mean BMI ≥35, LDL cholesterol decreased by 7 mg/dL (95% CI, –12.1 to –1.3). I did not know that the lipid changes with CRDs were modified by BMI. These observations have given rise to the characterization of the lean mass hyper-responder (LMHR) phenotype.

From the authors: the aim of the study was "to examine the association between plaque progression and its predicting factors." I know; it is a bit confusing. One hundred people who had been on a KD (for years, actually) and had a "keto-induced" LDL-C ≥190 mg/dL, HDL ≥60 mg/dL, and triglycerides ≤80 mg/dL were followed for 1 year using coronary artery calcium (CAC) scans and coronary computed tomography angiography (CCTA). I say "keto-induced" because the LDL-C had to be less than 160 mg/dL before adopting the KD, and entry criteria also included an increase of ≥50% in LDL-C. Plaque progression predictors were assessed with linear regression and Bayes factors. Study subjects had to have normal glucose, A1c, and CRP.

Patients on the KD had a normal BMI of 22 and a very high LDL-C of 254 mg/dL, with HDL of 89 mg/dL and triglycerides of 67 mg/dL. Pause there: the average LDL-C was 254, so many were higher. These were mostly men, around 55 years old, who were adherent to the KD as documented with beta-hydroxybutyrate (BHB) measures. Over the year, there were no substantial changes in ApoB or BMI.

The study was actually pre-registered, and the primary endpoint was originally the change in noncalcified plaque burden. They did not formally present this endpoint. Instead, they gave the median change in percent atheroma volume, which they said was 0.8%. Who knows what this means? They tell us that this value is comparable with those observed in other cohorts. The thing is, the primary endpoint of change in noncalcified plaque volume (NCPV) was presented only in a figure, from which you could see that most individuals had an increase in NCPV.
This lack of data on the primary endpoint caused a stir online, and the lead author offered the data in a video on Twitter/X. The numerical pooled NCPV change was an increase of 18.8 mm³. If this means nothing to you, don't worry; I will come back to it. Weird, though, that we had to get the primary endpoint in a Twitter video.

The main thrust of the paper was the correlations. Neither the change in ApoB throughout the study nor the ApoB at the beginning of the study was associated with the change in NCPV. There was also no correlation between LDL-C and NCPV. What was correlated? Baseline CAC was positively associated with the change in NCPV, as were baseline plaque measures. Simplifying: if there was plaque or CAC at baseline, there was a positive correlation with NCPV change.

The authors make the case that while both LDL-C and ApoB are independent risk factors for atherosclerosis, the absolute risk associated with elevated LDL-C and ApoB is context-dependent and may not apply to this lean mass hyper-responder (LMHR) group. Thus, they write, "these data are consistent with the observation that high LDL-C and ApoB among a metabolically healthy population have different cardiovascular risk implications than high LDL-C among those with metabolic dysfunction." Gosh, that is a big conclusion, because these people had total cholesterol of 350 and LDL-C of 255.

The authors make the case that lean mass hyper-responders are different from the person with abnormal lipids from metabolic syndrome:

Difference 1: LDL-C and ApoB elevations are dynamic and result from the metabolic response to carb restriction; this is not a genetic defect.

Difference 2: LMHR are normal weight and metabolically healthy; they don't have obesity, diabetes, or insulin resistance.

Difference 3: The high LDL-C and ApoB in this phenotype emerge as part of a lipid triad, also inclusive of high HDL-C and low triglycerides, representing a metabolic signature of a distinct physiological state.

Difference 4: The degree of this phenotype appears inversely related to BMI ("leanness"), consistent with the idea that it is a metabolic response to carbohydrate restriction that is accentuated in leaner, more metabolically healthy persons.

The authors really are not shy in their conclusions. I call them Whoppers No. 1, No. 2, and No. 3.

Whopper 1: The LMHR population constitutes a unique and important natural experiment evaluating the lipid heart hypothesis in an unprecedented manner.

Whopper 2: Our data are consistent with the notion that elevated ApoB, even at extreme levels, does not drive atherosclerosis in a dose-dependent manner in this population of metabolically healthy individuals. They qualify this conclusion by saying that LMHR may still have risk. For instance, they noted a PAV increase comparable to what has been observed in other studies of populations with lower LDL-C across the cardiovascular disease risk spectrum. They offer no citation here.

Whopper 3: "These insights can facilitate personalized treatment and risk mitigation strategies based on modern, cost-effective cardiac imaging." For instance, they say, despite profound elevations in LDL-C and ApoB, based on their data, LMHR subjects with CAC = 0 at baseline (n = 57) constitute a low-risk group for percent atheroma volume (PAV) progression, even as compared to other cohorts with far lower LDL-C and ApoB.
By contrast, LMHR subjects with elevated baseline CAC, possibly from a history of metabolic damage and dysfunction prior to adopting a CRD, appear to constitute a relatively higher-risk group for PAV progression even where LDL-C and ApoB are equal to their CAC = 0 counterparts. Before closing, they coin the phrase "plaque begets plaque."

I see why this paper generated angst online. The idea of the study is reasonable; what's unreasonable are the conclusions.

First, if you look at the primary endpoint of change in noncalcified plaque volume, it went up. A lot: 18.8 mm³. That was 2.5 times higher than they predicted in their study protocol. So, if you believe that the delta in NCPV is a great surrogate, it looks quite ominous.

Second, imaging tests are almost always a terrible surrogate measure. Images are images. To assess risk, you need to measure events. Heart attacks. Stroke. CV death. I realize this is a small uncontrolled study, and it's fine to look at these things (in fact, I am curious), but you cannot claim clinical importance just because you wove a nice story about high LDL-C in LMHR being different from high LDL-C in metabolic syndrome patients.

Third, there are about 50 years of data supporting LDL-C as causal for atherosclerosis. Essentially every Bradford Hill criterion is met. So, if you are going to claim an exception, you need more rigorous evidence than this. The prior here, that is, the probability that these LMHR individuals are an exception, has to be extremely low, so you would need really strong data to shift your posterior view. This study surely was not strong evidence.

Fourth, assuming you believe the plaque images are precise, reproducible, and clinically relevant, this study really suffers from a lack of a control group. All they had to do was recruit a group of people eating a Mediterranean diet and see what happens to them relative to the keto group.

Fifth, the authors don't tell us how many people they screened to find these 100 people. I get the sense they are a highly selected bunch.

Finally, the question of heart health from a specific diet is going to be really hard to sort out. Nutritional studies always are. An RCT in a prison might work, but cardiac event rates in young people, even with KD-induced LDL-C elevations, will be low. What's more, the LMHR group will surely do other things that affect heart disease, like exercising and not smoking. If the authors are wrong, and eating a diet that causes crazy-high LDL-C levels while maintaining a lean body mass is actually harmful, then, given the popularity of carbohydrate-restricted diets, this could be a public health disaster. As for diet, I do think Americans eat too many carbs, but the KD seems extreme. Why not just eat a balanced diet, like they do in Sicily?

JAMA Network Open has published an interesting modeling study from a Finnish group on the matter of net benefit of oral anticoagulation (OAC) in subclinical device-detected atrial fibrillation (AF). The background here is known to anyone practicing cardiology. It's perhaps the most common question I receive: John, Mrs Smith had 4 hours of AF on her pacemaker. Her CHA2DS2-VASc score is 4; should we anticoagulate? And if we don't anticoagulate, how much AF does she have to have before we do? The short answer is that I have no idea.

Your comeback is: come on, Mandrola, we have two trials. And it is true. We have the NOAH trial: edoxaban vs placebo in 2500 patients with a median AF duration of 2.8 hours.
The primary outcome of CV death, stroke, and systemic embolism (SE) was 19% lower in the edoxaban group. The confidence intervals (CI) were wide, and the difference did not reach significance. Major bleeding was 31% higher, and this did reach statistical significance. We also have the ARTESIA trial: apixaban vs acetylsalicylic acid (ASA) in 4000 patients with a median AF duration of 1.5 hours. The primary outcome of stroke and SE was 37% lower with apixaban, and this did meet statistical significance. Major bleeding, however, was 80% higher, and this too was statistically significant.

Some have said NOAH was negative and ARTESIA was positive. Perhaps, technically, this is true. But I think they both show the same thing: OAC reduces stroke and increases bleeding. It leaves us with the question of net benefit. I've heard Jeff Healey, the PI of ARTESIA, rightly say that we should favor OAC because strokes are worse than bleeds. This is certainly true. The tension in subclinical AF is that the yearly stroke rates were low, in the 1% range, far lower than what the CHA2DS2-VASc score would predict for clinical AF. And if that is true, even if OAC reduces the relative risk, the absolute risk reduction is tiny, on the order of 4 per 1000, with a number needed to treat (NNT) of 250. Therefore, any increase in bleeding may mitigate the net benefit.

Led by senior author Konsta Teppo, the group set out to estimate the "net benefit" of OAC in subclinical AF (SCAF). They used modeling. It's technical. A Medscape colleague, nephrologist F. Perry Wilson, covered this paper, and he wrote: The study was done using a computer. I know, all studies are done with computers. But here I mean literally. The authors used a decision analytical model run with 10,000 patients with subclinical AF on OAC and 10,000 without OAC. They then used a Markov decision model to estimate the net outcomes of DOACs.

You all know that doctors who ablate AF and put in pacers and defibrillators don't know much about Markov modeling. I was going to ask Professor Teppo; he would have told me. But to make life easier, I just asked Claude, who said: A Markov model consists of multiple health states individuals can move between based on specific transition probabilities. Think of it as a simulation where patients exist in various health states (e.g., well, post-stroke, bleeding event, death); in each cycle (1 month in this study), patients can transition between states with certain probabilities; and the model tracks what happens to a cohort of simulated patients over time.

The model was constructed in multiple steps:

Base case patient: The researchers created a model patient, aged 77 years (matching the average age in the clinical trials), and applied the untreated stroke and bleeding rates from the NOAH-AFNET 6 and ARTESIA trials.

Health states: The model included states for being well with subclinical AF, ischemic stroke (with varying severities), major bleeding events (hemorrhagic stroke, other intracranial bleeding, extracranial bleeding), development of clinical AF, and death.

Transition probabilities: The pooled point estimates from the meta-analysis combining the two trials were used as the effect sizes for anticoagulation on stroke (32% decrease) and major bleeding (62% increase). The model assigned an 80% weight to nonintracranial bleeding for the increase in bleeding caused by the DOACs. The numbers come from the McIntyre et al meta-analysis of NOAH and ARTESIA in Circulation.
Event severity: Probabilities for the severity of stroke and bleeding events in the anticoagulation and non-anticoagulation groups of the model were approximated from previous observations in patients with and without anticoagulation.

Quality-of-life weights: The net benefit outcome was assessed in terms of quality-adjusted life-years (QALYs), where clinical events reduced patients' quality of life based on the type and severity of the event, according to previously published quality-of-life data.

Time horizon: The simulation was run over a 10-year period with 10,000 samples in both decision groups (with and without DOACs).

The main outcome measure for net benefit was the cumulative QALYs during the simulation. This accounted for the severity of ischemic strokes, hemorrhagic strokes, other intracranial bleeds, and extracranial bleeds, as well as the number of deaths during the 10-year simulation. It's a really neat way to look at net benefit. As I said, I don't know anything more than what I read about Markov modeling, but the thing that strikes me, and perhaps you too, is that there are a lot of degrees of freedom in those choices.

That said, here is what the model shows. Over the 10-year period, you would have 1076 strokes in nontreated subclinical AF vs 843 with treatment. The delta of 233 strokes avoided seems like a lot, but across 10,000 patients it is only 2.3% over the 10 years. There would be 1213 major bleeds without OAC vs 1664 with OAC; the 453 extra bleeds amount to 4.5% over the 10 years. Deaths were nearly the same: 55 fewer in the anticoagulation arm, a delta of only about 0.6% over the 10 years.

OK, what about the primary endpoint of quality-adjusted life-years? It was, drumroll: per patient, the differences listed led to 1 additional quality-adjusted week of life (0.024 QALYs) with DOAC treatment during the 10-year simulation. When the 95% CIs of the treatment effect sizes were considered in probabilistic sensitivity analysis, there was a 66% probability that DOAC treatment leads to more QALYs than withholding treatment.

The authors did an exploratory analysis looking at higher-risk patients, and as you would expect, in patients with a CHA2DS2-VASc score >4, the QALY increase with DOAC was now a month, not a week. But they urge caution because this estimate came from subgroup analyses in the two trials, neither of which met statistical significance for interaction.

The authors concluded that "initiating DOACs in patients with device-detected subclinical AF was associated with a minimal increase in QALYs. However, the benefits were uncertain, and the effect size of the overall net benefit does not appear to be clinically meaningful."

I loved this paper and the authors' discussion. The modeling and estimates make intuitive sense, right? The trials find extremely low rates of stroke with SCAF. The average age of patients was 77. Older patients have many competing risks. Andrew Foy has a nice model for thinking about the domains acting on treatment effect. They are overlapping circles: the risk of the primary outcome vs competing risks, and the treatment benefit vs treatment harm. Treating subclinical AF is a perfect example of these four domains coming together to almost cancel one another out. The Markov model basically quantifies this as 1 extra week of good quality of life, perhaps a few weeks longer if the patient has extremely high stroke risk.

The clear conclusion I take from this paper is that subclinical AF is a different entity from the clinical AF of old. I co-authored a paper on that in Stroke.
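To show the mechanics of this kind of model, here is a stripped-down sketch: a deterministic cohort version rather than the authors' 10,000-sample microsimulation, with only four health states. The 32% stroke reduction and 62% bleeding increase come from the pooled estimates cited above; every other number (baseline monthly event probabilities, background mortality, utility weights) is an illustrative assumption, not a published input.

```python
def run_cohort(on_doac: bool, months: int = 120) -> float:
    """Monthly-cycle Markov cohort model; returns QALYs per patient over the horizon."""
    # Illustrative monthly probabilities off treatment (assumptions, not the paper's inputs)
    p_stroke, p_bleed, p_death = 0.0009, 0.0010, 0.004   # ~1%/yr stroke, ~1.2%/yr major bleed
    if on_doac:
        p_stroke *= 0.68   # 32% relative stroke reduction (pooled NOAH/ARTESIA estimate)
        p_bleed *= 1.62    # 62% relative increase in major bleeding (pooled estimate)

    utility = {"well": 0.80, "post_stroke": 0.55, "post_bleed": 0.72, "dead": 0.0}  # assumed
    state = {"well": 1.0, "post_stroke": 0.0, "post_bleed": 0.0, "dead": 0.0}

    qalys = 0.0
    for _ in range(months):
        alive = 1.0 - state["dead"]
        survive = 1.0 - p_death
        new_dead = state["dead"] + p_death * alive
        new_stroke = state["post_stroke"] * survive + state["well"] * survive * p_stroke
        new_bleed = state["post_bleed"] * survive + state["well"] * survive * p_bleed
        new_well = state["well"] * survive * (1.0 - p_stroke - p_bleed)
        state = {"well": new_well, "post_stroke": new_stroke,
                 "post_bleed": new_bleed, "dead": new_dead}
        qalys += sum(state[s] * utility[s] for s in state) / 12.0  # accrue one month of utility
    return qalys

q_doac, q_none = run_cohort(True), run_cohort(False)
print(f"QALYs over 10 years: {q_doac:.3f} with DOAC vs {q_none:.3f} without; "
      f"difference {q_doac - q_none:+.4f} QALYs")
```

Even this toy version makes the paper's point visible: with a baseline stroke rate in the 1% per year range, the stroke benefit and bleeding harm nearly offset each other, and the per-patient QALY difference over a decade ends up being a matter of days to weeks under these assumed inputs.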
The Finnish group has shown very little net benefit in treating SCAF. This contrasts with the famous Singer et al net benefit paper in Annals of Internal Medicine in 2010, which showed a clear net benefit of warfarin in the ATRIA cohort. There, even with an annual stroke rate of only 2%, warfarin provided a large net benefit. In clinical AF, the stroke reduction with anticoagulation was larger than the bleeding increase.

But subclinical AF has a different meaning. Yes, it is electrically the same; the atria are fibrillating. But, and this is my opinion, I am beginning to think that a certain amount of short-duration AF occurs in older people as a matter of normal life. We know that PACs and PVCs increase with age; why not short-duration AF?

For now, the only solution to the matter is to do what David Sackett described when he coined the term evidence-based medicine. That is, we align care with patient preferences. With our patients we discuss the uncertainty, seek their preferences, and treat accordingly. There can be no algorithm, no guideline. This problem does not fit into those colored boxes in guidelines, and the top people who write guidelines should resist the urge to help us clinicians. If a patient fears stroke and is willing to deal with the disutility of taking a daily pill (that is, the cost and taking it every day), then use OAC. If a patient fears bleeding, then hold off and monitor more.

I know this is a cardiology podcast, but hear me out. There is a connection. Plus, the science of this study is striking. In Kentucky, one of the least healthy states in the US, obesity is essentially the norm. I estimate that more than half the patients I see in clinic have some degree of metabolic syndrome: overweight or obesity, insulin resistance, type 2 diabetes (T2D). Many of these patients also have fatty liver disease, which used to be called nonalcoholic steatohepatitis, or NASH, and is now called MASH, or metabolic dysfunction-associated steatohepatitis. It's a bad condition, and I think it flies under the radar for most of us cardiologists, because we see these patients for hypertension or AF or ischemic heart disease.

My friend Claude says that 9-15 million adults in the US have MASH. And it's getting worse. The prevalence of MASH in the US is predicted to increase 63%, from 16.5 million cases in 2015 to 27 million cases in 2030. And in patients with T2D and obesity, the prevalence of MASH is as high as 16%, so roughly 1 in 6 patients.

NEJM recently published the results of the ESSENCE trial, which compared semaglutide vs placebo in 1197 patients with biopsy-documented steatohepatitis. This is a two-part trial. Part 1 looks at histology, with resolution of steatohepatitis without worsening of liver fibrosis and reduction in liver fibrosis without worsening of steatohepatitis as the primary endpoints. The trial is ongoing, and Part 2 will measure clinical outcomes. NEJM published Part 1, and it is shocking.

Patients were young, with a mean age of 56, and more than half were women. The mean BMI was 34-35. The primary endpoints were twofold. Resolution of steatohepatitis without worsening of fibrosis occurred in 63% vs 34% of the semaglutide vs placebo groups. That is an absolute treatment difference of 29 percentage points, which was highly significant. The second primary endpoint, a reduction in liver fibrosis without worsening of steatohepatitis, was reported in 37% vs 22% of the semaglutide vs placebo arms. That is an absolute treatment difference of 14.4 percentage points, also highly significant.
All secondary outcomes favored semaglutide.

I mention this study because (a) oodles of our patients have MASH, whether we know it or not; (b) MASH is on the rise; and (c) MASH portends a poor prognosis. It is a leading cause of liver transplant, but perhaps most relevant to cardiologists, most patients with MASH die of cardiovascular (CV) complications. The GLP-1 agonist in ESSENCE basically shredded the evidence of liver disease, at least histologically, and I suspect the follow-on outcomes portion of ESSENCE will be stopped early for benefit.

There is an approved medication for MASH, called resmetirom, but this is a thyroid hormone receptor-beta (THR-β) selective agonist that specifically targets the liver. Other than mild lipid-lowering effects, it has no known CV benefits, unlike GLP-1 agonists, which have RCT-proven benefits in CV disease and diabetes. I am not yet prescribing GLP-1 agonists, but like the SGLT2 inhibitors, I think they will soon be a drug class that cardiologists will want to prescribe.

I realize that some listeners may say we should be treating obesity and obesity-related diseases with weight loss, diet, and exercise. The GLP-1 benefit seen in ESSENCE was likely due to weight loss. My answer to that is that it doesn't matter why the GLP-1 drugs work. When trials show that something works, an evidence-based practitioner should embrace it. What's more, clinical medicine is pragmatic. It's pretty obvious that lifestyle interventions have a low success rate. So, if the GLP-1 agonists work, we should prescribe them. And, finally, there is a big difference between using GLP-1 agonists in an adult with diabetes, heart disease, and liver inflammation vs an adolescent who is overweight. The former person is in real trouble; the net benefit calculus favors treatment. The younger person should be counseled aggressively to change lifestyle. In other words, treatment is different from prevention.

CABG Still Superior to Stents Despite FAME 3 Endpoint Swap

A few weeks ago, I discussed the FAME-3 trial 5-year results. I turned this into a column that is up now. The short story is that FAME-3 was designed as a one-year non-inferiority trial comparing fractional flow reserve (FFR)-guided PCI to coronary artery bypass grafting (CABG) in patients with multivessel disease. Why someone would want to compare revascularization strategies at one year is mysterious, but that is what they did. And CABG was much better: FFR-guided PCI did not even reach non-inferiority on a four-point composite endpoint of death, myocardial infarction, stroke, and unplanned revascularization. My column pushes back against the claims, based on the 3- and 5-year results, that FFR-guided PCI is now equivalent to CABG. The claims stem from the use of a different endpoint. Take a look at my column and see what you think.
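For listeners who want the non-inferiority logic spelled out, here is a minimal sketch with made-up counts, not the FAME-3 data (the actual trial pre-specified its own margin and analysis scale). The test asks whether the upper confidence bound of the risk difference between the new strategy and the reference stays below the pre-specified margin.

```python
from math import sqrt

def noninferiority_risk_difference(events_new, n_new, events_ref, n_ref, margin, z=1.96):
    """Non-inferior if the upper CI bound of (risk_new - risk_ref) is below the margin."""
    p_new, p_ref = events_new / n_new, events_ref / n_ref
    diff = p_new - p_ref
    se = sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    upper = diff + z * se
    return diff, upper, upper < margin

# Hypothetical 1-year composite event counts for PCI vs CABG, with a 4.5% absolute margin
diff, upper, ok = noninferiority_risk_difference(80, 750, 52, 750, margin=0.045)
print(f"risk difference {diff:+.3f}, upper 95% bound {upper:+.3f}, non-inferior: {ok}")
```

With these made-up numbers the upper bound overshoots the margin, so non-inferiority is not met, which is the same qualitative result FAME-3 reported at one year.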
