Testing for LMNA Mutations Called ‘Woefully Underutilized'

Medscape | July 15, 2025
People who carry a mutated copy of the LMNA gene are at high risk for cardiac laminopathies, including atrioventricular block and atrial or ventricular arrhythmias (VAs), which often precede dilated cardiomyopathy.
These autosomal dominant mutations are highly penetrant, meaning that a large proportion of people with a pathogenic or likely pathogenic variant will develop health problems related to the gene. Among carriers with cardiac manifestations, about 90% of those older than 30 years are at high risk for sudden arrhythmic death, even patients with minimal left ventricular dilation and mild systolic impairment, well before the onset of heart failure. A long-term follow-up study of 122 consecutive LMNA mutation carriers with cardiac involvement found that most experienced arrhythmia, heart block, embolic events, or heart failure within 7 years of diagnosis.
Could some outcomes, such as sudden cardiac death, be averted with a more precise view of LMNA mutations? New research published in JAMA Cardiology shows that pinpointing the type and location of the LMNA mutation may guide clinicians toward earlier treatment and improve the prognosis of these high-risk patients. Such interventions might include earlier placement of an implantable cardioverter-defibrillator (ICD) and family testing to detect the mutation before the onset of symptoms.
‘Genetic testing for dilated cardiomyopathy is woefully underutilized,’ said the paper's senior author, Neal Lakdawala, MD, an associate professor of medicine at Harvard Medical School and a cardiologist in the Heart and Vascular Center at Brigham and Women's Hospital in Boston.
In fact, claims data showed that fewer than 2% of patients with dilated cardiomyopathy undergo genetic testing. ‘Prior research has established the prognostic power of a genetic diagnosis,’ Lakdawala told Medscape Medical News. ‘We took it one step further within a specific genetic etiology, to show that the type of gene variant and the location of a gene variant also matter.’
The retrospective cohort study examined international registry data from 718 patients (mean age, 41.3 ± 14.3 years) with pathogenic or likely pathogenic variants of LMNA. The participants had no prior history of malignant VA. The primary outcome was time to malignant VA, defined as sudden cardiac death, placement of an ICD, or other manifestations of hemodynamically unstable VA. The secondary outcome, advanced heart failure, was defined as nonsudden cardiac death, implantation of a left ventricular assist device, or heart transplant.
Reflecting the high risk associated with LMNA mutations, Lakdawala said, nearly one third of the study participants experienced sudden cardiac death, hemodynamically unstable VA, or an ICD procedure during the 4.2-year follow-up period, and 15% developed advanced heart failure. These outcomes occurred despite many patients having a baseline left ventricular ejection fraction (EF) in the normal range: the mean EF was 56%, well above the guideline-recommended thresholds of 35%-45% for ICD placement, the researchers reported.
Looking deeper into the gene, Lakdawala and his colleagues found that participants with truncating LMNA variants, which produce a shortened version of the lamin A/C protein, had worse arrhythmic outcomes regardless of where in the gene sequence the mutation occurred. By contrast, those with missense variants, in which a DNA change swaps a single amino acid in the protein, had a lower risk for harmful arrhythmias and better overall outcomes.
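To make that distinction concrete, here is a minimal, self-contained Python sketch, not taken from the study and not clinical software, that classifies a single-codon DNA change as truncating (a premature stop codon) or missense (an amino acid substitution); the example codons are hypothetical.
```python
import itertools

# Standard genetic code, built compactly: codons enumerated in TCAG order.
BASES = "TCAG"
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {
    "".join(codon): aa
    for codon, aa in zip(itertools.product(BASES, repeat=3), AMINO_ACIDS)
}

def classify_codon_change(ref_codon: str, alt_codon: str) -> str:
    """Classify a single-codon change by its effect on the protein."""
    ref_aa, alt_aa = CODON_TABLE[ref_codon], CODON_TABLE[alt_codon]
    if alt_aa == "*" and ref_aa != "*":
        return "truncating (nonsense)"  # premature stop: shortened protein
    if alt_aa != ref_aa:
        return "missense"               # one amino acid swapped for another
    return "synonymous"                 # protein sequence unchanged

# Hypothetical examples (not real LMNA variants):
print(classify_codon_change("CGA", "TGA"))  # truncating (nonsense): Arg to stop
print(classify_codon_change("CGA", "CCA"))  # missense: Arg to Pro
```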
Taken together, the location and nature of the gene variants could support specific predictions of cardiac risk, according to the researchers. A man with an EF of 50% and a truncating LMNA variant, for example, would have a 12% risk for VA within 5 years vs a 7.2% risk with a missense variant. For a woman with an EF of 50%, the corresponding risks would be 7.5% and 4.5%, assuming no other risk factors were present.
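As a rough illustration of how such variant-aware estimates might be organized, here is a small Python sketch; the percentages are simply the example figures quoted above (EF of 50%, no other risk factors), not a validated risk calculator.
```python
# Example 5-year malignant VA risk figures quoted in the article for
# hypothetical patients with an EF of 50% and no other risk factors.
FIVE_YEAR_VA_RISK_PCT = {
    ("male", "truncating"): 12.0,
    ("male", "missense"): 7.2,
    ("female", "truncating"): 7.5,
    ("female", "missense"): 4.5,
}

def example_va_risk(sex: str, variant_type: str) -> float:
    """Look up the article's quoted 5-year VA risk estimate, in percent."""
    return FIVE_YEAR_VA_RISK_PCT[(sex, variant_type)]

print(example_va_risk("male", "truncating"))  # 12.0
print(example_va_risk("female", "missense"))  # 4.5
```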
Why Genetic Testing Is Key
In an editorial accompanying the journal article, Sharlene M. Day, MD, a cardiomyopathy specialist and presidential professor at the Perelman School of Medicine at the University of Pennsylvania, in Philadelphia, wrote that ‘the data from this study can also inform risk stratification even in healthy populations with incidental or secondary findings.’ Integrating genetic findings into cardiomyopathy management should be ‘a priority for all practicing cardiologists,’ she wrote.
‘The knowledge gap appears to be narrowing with respect to the importance of genetic testing in patients with cardiomyopathies,’ Day told Medscape Medical News. ‘But there's still opportunity to improve recommendations and referrals by cardiologists for genetic counseling and testing.’
Testing typically consists of a broad panel that screens for variants in multiple genes, including LMNA, she said. If a gene variant is found in an individual patient, cascade testing of family members for that variant is often recommended.
‘The current research study nicely highlights the impact of identifying not only the specific gene involved but the type of variation within that gene in terms of risk stratifying patients for adverse outcomes,’ she said.
Impact on Future Cardiology Guidelines
Future clinical practice guidelines should emphasize the value of a genetic diagnosis for risk stratification in patients with dilated cardiomyopathy, especially for predicting sudden death and heart failure, Lakdawala said. The most recent heart failure guidelines from the American College of Cardiology and the American Heart Association give a class 2A recommendation for ICD placement in patients with high-risk dilated cardiomyopathy genes and an EF of 45% or lower, adding that a primary prevention ICD may be considered for those with a higher EF.
The 2023 European cardiomyopathy guideline recommends ICD placement in patients with LMNA variants and an EF above 35% (class 2A if risk factors are present, class 2B if not). ‘For updated guidelines, I think the most immediate impact would be to refine the LMNA risk score for ventricular arrhythmias to include the type and location of the LMNA variant,’ Day told Medscape Medical News.
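For orientation, the two guideline thresholds quoted above can be summarized in a toy Python sketch; the logic below is only the article's paraphrase of the guidelines, not the guideline algorithms themselves, and is not clinical decision support.
```python
def acc_aha_icd_recommendation(ef_percent: float) -> str:
    """ACC/AHA heart failure guidance as paraphrased in the article:
    class 2A for ICD placement with a high-risk DCM gene and EF <= 45%;
    above that, a primary prevention ICD 'may be considered'."""
    return "class 2A" if ef_percent <= 45 else "may be considered"

def esc_2023_icd_recommendation(has_risk_factors: bool) -> str:
    """2023 European cardiomyopathy guideline as paraphrased in the
    article, for LMNA carriers with an EF above 35%."""
    return "class 2A" if has_risk_factors else "class 2B"

# A hypothetical LMNA carrier with an EF of 50%:
print(acc_aha_icd_recommendation(50.0))    # may be considered
print(esc_2023_icd_recommendation(False))  # class 2B
```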
‘Genetic testing has clinical ramifications that will help cardiologists take better care of their patients,’ Lakdawala added. ‘The take-home message is that they should order these tests!’
Lakdawala reported receiving personal fees from Alexion, Bayer, Bristol Myers Squibb, Cytokinetics, Lexeo Therapeutics, Nuevocor, Pfizer, and Tenaya Therapeutics and grants from Bristol Myers Squibb and Pfizer.
Day reported serving as chair of the steering committee for Lexicon Pharmaceuticals, on the data monitoring committee for Cytokinetics, and receiving grants from Bristol Myers Squibb.