Is a Year of DAPT Magical Thinking?

Medscape | 12-05-2025
Christopher Labos, MDCM, MSc
It's hard to accept that babies born in the year 2000 now have kids of their own. It's even harder to accept that there never was much evidence for 12 months of dual antiplatelet therapy (DAPT) after coronary stenting.
In a recent commentary, lead author Marco Valgimigli, MD, PhD, likened the routine use of 1 year of DAPT to a myth that has become entrenched in our collective culture. Though recent trials suggest shorter durations are feasible and possibly better, untangling truth from myth is proving to be a Gordian knot.
Enter the DAPT Era
The era of DAPT began with the publication of the CURE study in 2001. Patients with non-ST-elevation myocardial infarction were randomized to aspirin plus clopidogrel or aspirin plus placebo. Despite the increased bleeding seen with the addition of clopidogrel, the trial's positive primary outcome prompted many, myself included, to start putting patients on both drugs.
As more potent antiplatelets such as prasugrel and ticagrelor came to market, on the basis of TRITON-TIMI 38 and PLATO, respectively, the idea of yearlong double therapy was reinforced. But in TRITON-TIMI 38, median treatment duration was 14.5 months, and in PLATO it was just over 9 months. Neither study mandated 12 months of DAPT nor tested a specific duration of therapy.
The subject grew more complicated as the field evolved. Bare metal stents improved and then gave way to drug-eluting stents, which in turn evolved into newer iterations with better scaffolds, polymers, and antiproliferative agents. In the early 2000s, the risk of stent thrombosis with the first generation of drug-eluting stents prompted a science advisory stressing the importance of 12 months of DAPT. Even though the risk of late stent thrombosis is much reduced with the newer generation of drug-eluting stents, a confluence of factors made 12 months of DAPT the standard of care.
But a blanket 1-year recommendation ignores the past quarter century's dizzying evolution in stents, angiography techniques, and background medical therapy. We can justifiably question if studies from 20 or even 10 years ago are still relevant and if the particular risk and medication profile of the patient sitting in front of you has been adequately represented in the clinical trials.
Balancing Ischemic Benefit and Bleeding Risk
Multiple trials have tested alternatives to 12 months of DAPT. The DAPT trial tested 12 months vs 30 months of DAPT and found fewer stent thromboses and cardiovascular events with longer treatment, but at the cost of more bleeding. This trade-off has been replicated many times, and a meta-analysis of five trials of longer-term DAPT showed that extending treatment beyond 12 months reduced cardiovascular events but resulted in more bleeds.
This balance between cardiovascular benefit and bleeding risk prompted the question of whether durations shorter than 12 months would suffice. A summary of this body of evidence suggests that 3-6 months of DAPT had no major impact on stent thrombosis compared with 12 months and, paradoxically, no major impact on bleeding risk either.
While those de-escalation trials usually left patients on aspirin alone, an alternative would be to stop the aspirin and leave them on a P2Y12 inhibitor.
Multiple trials have tested a variety of permutations. The SMART-CHOICE trial de-escalated to clopidogrel in most patients after 3 months of DAPT, STOPDAPT-2 de-escalated to clopidogrel after 1 month of DAPT, and TWILIGHT tested ticagrelor monotherapy after 3 months of DAPT in a high-risk population. Keeping this heterogeneity in mind, meta-analyzing these trials suggests that ticagrelor alone after 1-3 months of DAPT reduces bleeding risk without increasing cardiovascular events. Whether we can say the same for prasugrel is less clear. The STOPDAPT-3 trial tried an 'aspirin-free' strategy with prasugrel monotherapy at 3.75 mg daily rather than the standard 10 mg dose. This strategy did not improve bleeding rates or worsen the primary endpoint, suggesting that prasugrel monotherapy may be feasible. But the atypical dosing and general unpopularity of prasugrel do make it a challenging trial to put into practice.
But Wait, There Are More De-escalation Trials
At the recent American College of Cardiology scientific sessions in Chicago, there were even more data to add to the mix. The HOST-BR trial tested two different regimens based on the patient's bleeding risk. High-risk patients were randomized to 1 vs 3 months of DAPT and low-risk patients were randomized to 3 vs 12 months, with clopidogrel being the most common second antiplatelet. In high-bleeding-risk patients, limiting DAPT to 1 month increased major adverse cardiac and cerebral events (MAACE) at 1 year by 4.0% on the absolute scale, with a nonsignificant trend towards less bleeding. By contrast, in the low-bleeding-risk patients, limiting DAPT to 3 months instead of 1 year had no major effect on MAACE but reduced bleeding by 9.5% on the absolute scale. In both the high- and low-risk patients, 3 months of DAPT seemed to be the sweet spot.
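For those who like to translate absolute differences into numbers needed to treat or harm, the HOST-BR figures above are easy to work through. Here is a minimal sketch; the `number_needed` helper is illustrative, not from any trial publication, and the inputs are simply the absolute percentage-point differences reported above:

```python
# Convert an absolute risk difference (in percentage points) into a
# number needed to treat (NNT) or number needed to harm (NNH).
def number_needed(ard_percent: float) -> float:
    return 100.0 / ard_percent

# HOST-BR, high-bleeding-risk stratum: 1 month of DAPT increased
# MAACE by 4.0 percentage points vs 3 months -> NNH.
nnh_maace = number_needed(4.0)

# HOST-BR, low-bleeding-risk stratum: 3 months of DAPT reduced
# bleeding by 9.5 percentage points vs 12 months -> NNT.
nnt_bleeding = number_needed(9.5)

print(nnh_maace, round(nnt_bleeding, 1))  # 25.0 10.5
```

In other words, roughly 1 in 25 high-bleeding-risk patients suffered an extra ischemic event with the 1-month strategy, while roughly 1 in 10 low-bleeding-risk patients avoided a bleed with the 3-month strategy.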
SMART-CHOICE 3 took a different tack: it evaluated patients who had already completed their 'standard' duration of DAPT (12 months post myocardial infarction and 6 months otherwise) and randomized them to monotherapy with aspirin vs clopidogrel. At 3 years, clopidogrel reduced MAACE by 2.2% with no effect on bleeding.
Both of these Korean trials were open-label studies. In East Asian populations, differences in bleeding risk suggest that clopidogrel might be superior to ticagrelor, hence the decision to use it in these studies. However, the same might not hold true in other ethnic groups, and whether the results would replicate with ticagrelor is anyone's guess.
Guidelines Nudge but Don't Fully Budge
Depending on your point of view, the accumulated data are either dizzyingly complex and impossible to parse or too limited to support any firm recommendations. Admittedly, we have to consider not just the length of DAPT but also which antiplatelets to use as monotherapy and the clinical context of the patient. A high-ischemic-risk cardiac patient at low bleeding risk is very different from a high-bleeding-risk patient going for an elective percutaneous coronary intervention (PCI). We shouldn't blithely assume that what's true for clopidogrel is true for ticagrelor or that one antiplatelet is universally better in all populations. And most trials overlook one important factor: cost.
Given all the new data from the last few years, you might think that clinical guidelines have evolved to incorporate, or at least acknowledge, the growing equipoise around how and when to inhibit platelets in cardiac patients. But the recent 2025 ACS guidelines still recommend a minimum of 12 months of DAPT and give it a class 1A indication if the patient isn't at high bleeding risk. If they are, bleeding-reduction strategies include transitioning to ticagrelor monotherapy after 1 month (class 1A) and switching from ticagrelor or prasugrel to clopidogrel as part of the DAPT regimen (class 2B). Monotherapy with any agent after 1 month also gets a class 2B recommendation.
Although the guidelines are starting to budge on the issue, we seem to be perpetually locked into a 12-month DAPT mindset. The why is a fascinating question. It's partly because we are often more tolerant of bleeding than of cardiovascular events. We think we can manage bleeds whereas a stent thrombosis or a recurrent myocardial infarction feels like a failure.
Who follows up on the patient will also affect the duration of antiplatelet therapy. It's asking a lot of primary care providers to overrule the angiographer's recommendation of 12 months of DAPT post stent on the cath report.
A combination of uncertainty and inertia is keeping 12 months of DAPT alive. I'm as guilty as everyone else of falling into long established patterns. Habits are hard to break, but 24 years after CURE, we should acknowledge that 1 year of DAPT was never shown to be clinically superior to any other interval. Most of the contemporary data suggests that shorter durations are just as good if not better.