Teleneurology Bests Onboard Stroke Care Model

Medscape · 3 days ago

HELSINKI, Finland — Using telemedicine-based neurologist assessments in mobile stroke units proves to be safer and more resource-efficient than using traditional models in which a neurologist is on board the ambulance, new research shows.
The trial, conducted across 10 tertiary hospitals in Melbourne, Australia, is the first prospective, head-to-head comparison of these two models of stroke care delivery.
'There's been a long-standing assumption that having a neurologist physically on board is the gold standard for mobile stroke unit care, but our trial shows that telemedicine delivers comparable outcomes in key operational domains,' said principal investigator Vignan Yogendrakumar, MD.
Although the median time from arrival on scene to treatment decision was about 4 minutes longer in the telemedicine arm, this was deemed to be offset by a significant reduction in resource use.
Among the 18% of patients who received thrombolysis or were transported for endovascular therapy, there was no difference in functional outcomes at 90 days.
'When we put everything together, we found that there was no difference in safety between the telemedicine and neurologist on-board arms. There was a small difference in time to treatment decision favoring the on-board arm, but there was a significant difference in resource utilization favoring the telemedicine arm.
'So the overall evaluation favors the telemedicine arm when it comes to balancing resource utilization with time to treatment and safety,' Yogendrakumar, assistant professor at the University of Ottawa, Ottawa, Ontario, Canada, and a senior research fellow at the University of Melbourne, Melbourne, Australia, told Medscape Medical News .
The findings of the MSU-TELEMED trial were presented on May 23 at the European Stroke Organisation Conference (ESOC) 2025.
Hierarchical Composite Outcome
Yogendrakumar noted that telemedicine is the standard practice across many healthcare systems worldwide and is widely used for acute stroke management in hospital emergency departments. However, until now, there was no formal head-to-head comparison between telemedicine and models that rely on an onboard neurologist in mobile stroke units.
The MSU-TELEMED trial assessed whether a telemedicine neurologist model can provide superior resource efficiency to a traditional on-board model of care without compromising safety or timely delivery of care.
The prospective, randomized, open-label, blinded endpoint, parallel-arm trial enrolled 275 patients with suspected stroke who presented to a mobile stroke unit within 24 hours of symptom onset or last known well and who had undergone a full assessment by mobile stroke unit staff.
The study used a single mobile stroke unit active in Melbourne, which could take patients to 10 different receiving hospitals. The randomization was done by day, so on some days the mobile stroke unit would have a neurologist on board, and on other days it would operate with a telemedicine service communicating with a neurologist at the main hospital.
The researchers used a hierarchical composite outcome that integrated safety, time to treatment decision, and resource efficiency. They applied a win-odds approach, which allows multiple outcomes to be assessed at different levels of priority by comparing each patient with every other patient in the study.
Safety was the first priority outcome. For each patient comparison, researchers determined whether a safety event had occurred — such as symptomatic hemorrhage or any form of clinical deterioration — which was then counted as a win or loss for either the telemedicine or onboard intervention.
If no safety event occurred, the next endpoint — time to treatment decision — was evaluated. A difference of 15 minutes or more between interventions was considered a win or loss for the corresponding group.
Time to Thrombolysis
If there was no meaningful difference in time to treatment decision, the final endpoint — resource utilization — was assessed. This was measured by the productive percentage time, defined as the time actively spent caring for the patient divided by the total time dedicated to the case. A difference of more than 10% in favor of one group was considered a win for that intervention and a loss for the other.
Finally, if there was no difference in productive percentage time between the two arms in a given pairwise comparison, the result was considered a tie. The overall findings were then summarized using a win-odds measure — the odds that a participant in the telemedicine arm would have a better outcome than one in the on-board arm.
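To make the pairwise procedure concrete, here is a minimal sketch in Python of how a hierarchical win-odds comparison of this kind can be computed. It is an illustration only, not the trial's statistical code: the Patient fields are hypothetical names, the thresholds (15 minutes, 10 percentage points) follow the tiers described above, and the trial's actual analysis used stratified win odds with confidence intervals, which this sketch omits.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    safety_event: bool      # symptomatic hemorrhage or clinical deterioration
    decision_min: float     # minutes from arrival on scene to treatment decision
    productive_pct: float   # percent of neurologist time actively spent on the case

def compare(a: Patient, b: Patient) -> int:
    """Return +1 if patient a wins, -1 if patient b wins, 0 for a tie."""
    # Tier 1: safety — avoiding a safety event wins outright.
    if a.safety_event != b.safety_event:
        return -1 if a.safety_event else 1
    # Tier 2: time to treatment decision — only a gap of 15 minutes or more counts.
    if abs(a.decision_min - b.decision_min) >= 15:
        return 1 if a.decision_min < b.decision_min else -1
    # Tier 3: resource use — only a gap of more than 10 percentage points counts.
    if abs(a.productive_pct - b.productive_pct) > 10:
        return 1 if a.productive_pct > b.productive_pct else -1
    return 0  # tie at every level of the hierarchy

def win_odds(tele: list[Patient], onboard: list[Patient]) -> float:
    """Compare every telemedicine patient with every on-board patient.

    Assumes at least one loss or tie, so the denominator is nonzero.
    """
    wins = losses = ties = 0
    for t in tele:
        for o in onboard:
            result = compare(t, o)
            if result > 0:
                wins += 1
            elif result < 0:
                losses += 1
            else:
                ties += 1
    # Ties are split evenly between the two arms.
    return (wins + ties / 2) / (losses + ties / 2)
```

A win-odds value above 1 favors the telemedicine arm; ties are split evenly between the arms, so the estimate is (wins + ties/2) divided by (losses + ties/2).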
Results showed 14,618 telemedicine wins (76%), 692 ties (4%), and 3590 on-board wins (20%), giving a stratified win odds of 3.5 (95% CI, 2.4-5.1; P < .001).
More specifically, safety events occurred in 17 patients in each group — 13% in the telemedicine group vs 12% in the on-board group.
The median time from arrival on scene to a definitive treatment decision was 19 minutes in the telemedicine group vs 13 minutes in the on-board group.
In addition, the percentage of neurologist 'productive' time was 100% in the telemedicine group vs 33% in the on-board group.
Of the 275 participants in the trial, about half were ultimately diagnosed with an ischemic stroke. Among those who received thrombolysis, the median time from arrival on scene to needle was 8.2 minutes longer in the telemedicine group.
'While every minute delay is important, this is a relatively small proportion of the time saved by mobile stroke unit care and needs to be balanced against the efficiency advantages achieved,' said Yogendrakumar.
'We were able to show that a telemedicine model is better able to utilize resources without sacrificing safety or delivery of care, and that will likely translate to cost savings,' he added.
A formal cost-effectiveness analysis from this study is planned.
A Safe, Effective, Efficient Model
Yogendrakumar suggested that these findings could help inform the design of stroke care systems in various regions. For example, in Melbourne, a second mobile stroke unit is now being introduced, with both units connected via telemedicine to a single neurologist.
During the discussion following the MSU-TELEMED trial presentation, Guillaume Turc, MD, professor of neurology at Sainte-Anne Hospital in Paris, France, praised the study's innovative design, particularly its use of a hierarchical outcome structure and the win-ratio method.
He noted that the findings were highly positive, supporting the safety of telemedicine-based neurologist assessments and highlighting their greater efficiency in resource use.
Simona Sacco, MD, professor of neurology at the University of L'Aquila in L'Aquila, Italy, and current president of the European Stroke Organisation, added that this approach could be especially valuable in rural areas, where staffing ambulances with neurologists is often a significant challenge. She described the study as a welcome development and an important advancement in the field.


Related Articles

Researchers trace drugs and diseases from DNA drifting through city air

Yahoo · 44 minutes ago

DNA is in the air, and scientists are finally learning how to read it. In Dublin — a city known for its cozy pubs, flowing Guinness, and music that spills into cobbled streets — researchers have discovered something far less visible drifting through the air: traces of cannabis, poppy, and even psychedelic mushrooms. Not the plants themselves, but their DNA.

A groundbreaking study by scientists at the University of Florida reveals that environmental DNA, or eDNA, vacuumed straight from the air can offer stunning insights into the world around us. These range from identifying endangered wildlife and tracking human pathogens to detecting allergens and illicit drugs. 'The level of information that's available in environmental DNA is such that we're only starting to consider what the potential applications can be, from humans, to wildlife, to other species that have implications for human health,' said David Duffy, Ph.D., lead author of the study and professor of wildlife disease genomics.

Originally designed to study sea turtles, the technique developed by Duffy and his team has since transformed into a powerful tool for decoding the biological fingerprints of nearly any environment, including air, oceans, or forests. And all it takes is an air filter and a day in the lab to detect signs of nearly every living thing that's grown, passed through, or shed cells nearby.

'When we started, it seemed like it would be hard to get intact large fragments of DNA from the air. But that's not the case. We're actually finding a lot of informative DNA,' Duffy said in a release. 'That means you can study species without directly having to disturb them, without ever having to see them. It opens up huge possibilities to study all the species in an area simultaneously, from microbes and viruses all the way up to vertebrates like bobcats and humans, and everything in between.'
In Dublin, researchers found DNA signatures from hundreds of sources, including human pathogens, bacteria, and allergens like peanut residue and pollen. In another striking demonstration of eDNA's potential, the researchers were able to trace the origins of bobcats and spiders by analyzing DNA captured from the air in a Florida forest. When trying to save and conserve wildlife, knowing where an animal originates from can be as important as knowing where it currently is.

This powerful analysis also came with remarkable speed and efficiency. The team showed that a single researcher could process DNA from every species in a given area in just a day, using compact, low-cost equipment and cloud-based software.

'It seems like science fiction, but it's becoming science fact,' Duffy said. 'The technology is finally matching the scale of environmental problems.'

The researchers say the implications of the study are vast. The method could help track disease outbreaks, identify endangered species, and even detect drug activity, all silently captured by the breeze. However, the same tools can also reveal sensitive human genetic information, and the researchers have called for ethical guidelines to keep pace with the fast-moving science. The study has been published in Nature Ecology & Evolution.

Syfe Raises USD 80 Mn in Series C Funding, Focuses on India Tech Expansion

Entrepreneur · 3 hours ago

Syfe, the Singapore-based digital wealth management platform, has raised a total of USD 80 million in its Series C round, with a fresh injection of USD 53 million through a Series C2 all-equity round. This latest round, led by two UK family offices, builds on the USD 27 million raised in Series C1 in August 2024. Existing backers, including Unbound and Peter Thiel's Valar Ventures, also participated in the raise.

The newly acquired capital will be used to scale Syfe's engineering and product capabilities, with a strong focus on its technology headquarters in Gurugram. The company's tech team there has already grown by 15% since August 2024, and further expansion is planned. Funds will also be deployed to strengthen operations in Syfe's key markets—Singapore, Hong Kong, and Australia—where the platform is seeing increasing demand among the mass affluent segment.

"This fund raise comes at an exciting time as we grow our presence across the region and expand our offerings," said Dhruv Arora, Founder and CEO of Syfe. "In our markets of Singapore, Hong Kong, and Australia, nearly half of all adults fall into the 'mass affluent' category. Our team in Gurugram plays a pivotal role in building a truly global offering that caters to their evolving financial needs."

Founded in July 2019 by Arora, Syfe offers a suite of digital wealth solutions, including managed portfolios, cash management tools, and brokerage services. With a focus on access, advice, and affordability, Syfe serves over 60 countries and manages more than USD 10 billion in client assets. The company recently acquired Selfwealth, a well-established Australian online investment platform, to further solidify its presence in the region.
Syfe also continues to make strategic leadership hires, including Sanjeev Malik, former Managing Director at BlackRock, and Dane Ricketts, who joins as VP of Marketing from Procter & Gamble and Grab. Licensed in Singapore, Hong Kong, and Australia, Syfe is backed by top global investors and partners with financial giants like BlackRock and PIMCO. With this latest funding, Syfe is set to deepen its market footprint while reinforcing its Indian tech backbone to support rapid product innovation and customer-centric growth.

Future Forecasting The Yearly Path That Will Advance AI To Reach AGI By 2040

Forbes · 3 hours ago

Future forecasting the yearly path of advancing today's AI to AGI by 2040.

In today's column, I am continuing my special series on the likely pathways that will get us from conventional AI to the avidly sought attainment of AGI (artificial general intelligence). AGI would be a type of AI that is fully on par with human intellect in all respects. I've previously outlined seven major paths that seem to be the most probable routes of advancing AI to reach AGI (see the link here). Here, I undertake an analytically speculative deep dive into one of those paths, namely the year-by-year aspects of the most-expected route, the linear path. Other upcoming postings will cover each of the remaining paths. The linear path consists of AI being advanced incrementally, one step at a time, until we arrive at AGI. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will ever reach AGI; it may be achievable in decades, or perhaps centuries, from now.
The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

Right now, efforts to forecast when AGI is going to be attained consist principally of two paths. First, there are highly vocal AI luminaries making individualized, brazen predictions. Their headiness makes outsized media headlines. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. A somewhat quieter path is the advent of periodic surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe that we will reach AGI by the year 2040.

Should you be swayed by the AI luminaries or more so by the AI experts and their scientific consensus? Historically, the use of scientific consensus as a method of understanding scientific postures has been relatively popular and construed as the standard way of doing things. If you rely on an individual scientist, they might have their own quirky view of the matter. The beauty of consensus is that a majority or more of those in a given realm are putting their collective weight behind whatever position is being espoused. The old adage is that two heads are better than one. In the case of scientific consensus, it might be dozens, hundreds, or thousands of heads that are better than one.

For this discussion on the various pathways to AGI, I am going to proceed with the year 2040 as the consensus anticipated target date. Besides the scientific consensus of AI experts, another newer and more expansive approach to gauging when AGI will be achieved is known as AGI convergence-of-evidence or AGI consilience, which I discuss at the link here.
As mentioned, in a previous posting I identified seven major pathways by which AI is going to advance to become AGI (see the link here). The most often presumed path is the incremental progression trail. The AI industry tends to refer to this as the linear path. It is essentially slow and steady. Each of the other remaining major routes involves various twists and turns. Here's my list of all seven major pathways getting us from contemporary AI to the treasured AGI:

You can apply those seven possible pathways to whatever AGI timeline you want to come up with. Let's undertake a handy divide-and-conquer approach to identify what must presumably happen on a year-by-year basis to get from current AI to AGI. Here's how that goes.

We are living in 2025 and somehow are supposed to arrive at AGI by the year 2040. That's essentially 15 years of elapsed time. In the particular case of the linear path, the key assumption is that AI is advancing in a stepwise fashion each year. There aren't any sudden breakthroughs or miracles that perchance arise. It is steady work that requires earnestly keeping our noses to the grindstone and getting the job done in those fifteen years ahead.

The idea is to map out the next fifteen years and speculate what will happen with AI in each respective year. This can be done in a forward-looking mode and also a backward-looking mode. The forward-looking mode entails thinking about the progress of AI on a year-by-year basis, starting now and culminating in arriving at AGI in 2040. The backward-looking mode involves starting with 2040 as the deadline for AGI and then working back from that achievement on a year-by-year basis to arrive at the year 2025 (matching AI presently). This combination of forward and backward envisioning is a typical hallmark of futurecasting.

Is this kind of forecast of the future ironclad? Nope.
If anyone could precisely lay out the next fifteen years of what will happen in AI, they would probably be as clairvoyant as Warren Buffett when it comes to predicting the stock market. Such a person could easily be awarded a Nobel Prize and ought to be one of the richest people ever.

All in all, this strawman that I show here is primarily meant to get the juices flowing on how we can go about future forecasting the state of AI. It is a conjecture. It is speculative. But at least it has a reasonable basis and is not entirely arbitrary or totally artificial.

I went ahead and used the fifteen years of reaching AGI in 2040 as an illustrative example. It could be that 2050 is the date for AGI instead, and thus this journey would play out over 25 years; the timeline and mapping would then have 25 years to deal with rather than fifteen. If 2030 is going to be the AGI arrival year, the pathway would need to be markedly compressed.

I opted to identify AI technological advancements for each of the years and added some brief thoughts on the societal implications too. Here's why: AI ethics and AI law are bound to become increasingly vital and will to some degree foster AI advances, while in other ways possibly dampening some AI advances; see my in-depth coverage of such tensions at the link here.

Here then is a strawman futures-forecast, year-by-year roadmap from 2025 to 2040 of a linear path getting us to AGI:

Year 2025: AI multi-modal models finally become robust and fully integrated into LLMs. Significant improvements in AI real-time reasoning, sensorimotor integration, and grounded language understanding occur. The use of AI in professional domains such as law, medicine, and the like ratchets up. Regulatory frameworks remain sporadic and generally unadopted.

Year 2026: Agentic AI starts to blossom and becomes practical and widespread. AI systems with memory and planning capabilities achieve competence in open-ended tasks in simulation environments.
Public interest in governing AI increases.

Year 2027: The use of AI large-scale world models spurs substantially improved AI capabilities. AI can now computationally improve from fewer examples via advancements in AI meta-learning. Some of these advances allow AI to be employed in white-collar jobs, causing economic displacement, but only to a minor degree.

Year 2028: AI agents have gained wide acceptance and are capable of executing multi-step tasks semi-autonomously in digital and physical domains, including robotics. AI becomes a key element as taught in schools and as used in education, co-teaching jointly with human teachers.

Year 2029: AI is advanced sufficiently to have a generalized understanding of physical causality and real-world constraints through embodied learning. Concerns about AI as a job displacer reach heightened attention.

Year 2030: Self-improving AI systems begin modifying their own code under controlled conditions, improving efficiency without human input. This is an important underpinning. Some claim that AGI is now just a year or two away, but this is premature, and ten more years will first take place.

Year 2031: Hybrid AI consisting of integrated cognitive architectures unifying symbolic reasoning, neural networks, and probabilistic models has become the new accepted approach to AI. Infighting among AI developers as to whether hybrid AI was the way to go has now evaporated. AI-based tutors fully surpass human teachers in personalization and subject mastery, putting human teachers at great job risk.

Year 2032: AI agents achieve human-level performance across most cognitive benchmarks, including abstraction, theory of mind (ToM), and cross-domain learning. This immensely exceeds prior versions of AI that did well on those metrics but not nearly to this degree. Industries begin to radically restructure and rethink their businesses with an AI-first mindset.
Year 2033: AI scalability alignment protocols improve in terms of human-AI values alignment. This opens the door to faster adoption of AI due to a belief that AI safety is getting stronger. Trust in AI grows, but so does societal dependence on AI.

Year 2034: AI interaction appears to be indistinguishable from human-to-human interaction, even as tested by those who are versed in tricking AI into revealing itself. The role of non-human intelligence, and how AI stretches our understanding of philosophy, religion, and human psychology, has become a high priority.

Year 2035: AI systems exhibit bona fide signs of self-reflection, not just routinized mimicry or parroting. Advances occur in having AI computationally learn from failure across domains and optimize for long-term utility functions. Debates over some form of UBI (universal basic income) lead to various trials of the approach to aid human labor displacements due to AI.

Year 2036: AI advancement has led to fluid generalization across a wide swath of domains. Heated arguments take place about whether AGI is emerging: some say it is, while others insist that a scaling wall is about to be hit and that this is the best that AI will be. Nations begin to covet their AI and set up barriers to prevent other nations from stealing or copying the early AGI systems.

Year 2037: Advances in AI showcase human-like situational adaptability and innovation. New inventions and scientific discoveries are being led by AI. Questions arise about whether this pre-AGI has sufficient moral reasoning and human goal alignment.

Year 2038: AI systems now embody persistent identities, seemingly able to reflect on experiences across time. Experts believe we are on the cusp of AI reaching cognitive coherence akin to humans. Worldwide discourse on the legal personhood and rights of AI intensifies.
Year 2039: Some of the last barriers to acceptance of AI as nearing AGI are overcome when AI demonstrates creativity, emotional nuance, and abstract reasoning in diverse contexts. Existential risks and utopian visions fully dominate public apprehensions.

Year 2040: General agreement occurs that AGI has now been attained, though it is still early days and some are not yet convinced that AGI has truly been achieved. Society enters a transitional phase: post-scarcity economics, redefinition of human purpose, and consideration of co-evolution with AGI.

Mull over the strawman timeline and consider where you will be and what you will be doing during each of those fifteen years. One viewpoint is that we are all along for the ride and there isn't much that anyone can individually do. I don't agree with that sentiment. Any of us can make a difference in how AI plays out and what the trajectory and impact of reaching AGI is going to be.

As per the famous words of Abraham Lincoln: 'The most reliable way to predict the future is to create it.'
