
Medical errors are still harming patients. AI could help change that.
Medication mistakes — where the wrong drug or the wrong dosage is given to a patient — are among the most common errors in medicine.
John Wiederspan puts on an AI-powered wearable camera designed to detect medication errors. (David Jaewon Oh for NBC News)
By David Cox
John Wiederspan is well aware of how things can go wrong in the high-pressure, high-stakes environment of an operating room.
'During situations such as trauma, or a patient doing poorly, there's a real rush to try and get emergency drugs into the patient as fast as possible,' said Wiederspan, a nurse anesthetist at UW Medicine in Seattle. 'And that's when mistakes can occur, when you're flustered, your adrenaline's rushing, you're drawing up drugs and you're trying to push them.'
Despite ongoing efforts to improve patient safety, it's estimated that at least 1 in 20 patients still experiences a medical mistake in the health care system. One of the most common categories is medication errors, in which, for one reason or another, a patient is given either the wrong dose of a drug or the wrong drug altogether. In the U.S., these errors injure approximately 1.3 million people a year and cause one death each day, according to the World Health Organization.
In response, many hospitals have introduced guardrails, ranging from color coding schemes that make it easier to differentiate between similarly named drugs, to barcode scanners that verify that the correct medicine has been given to the correct patient.
Despite these attempts, medication mistakes still occur with alarming regularity.
'I had read some studies that said basically 90% of anesthesiologists admit to having a medication error at some point in their career,' said Dr. Kelly Michaelsen, Wiederspan's colleague at UW Medicine and an assistant professor of anesthesiology and pain medicine at the University of Washington. She started to wonder whether emerging technologies could help.
As both a medical professional and a trained engineer, she sensed that spotting an error about to be made, and alerting the anesthesiologist in real time, should be within the capabilities of AI. 'I was like, "This seems like something that shouldn't be too hard for AI to do,"' she said. 'Ninety-nine percent of the medications we use are these same 10-20 drugs, and so my idea was that we could train an AI to recognize them and act as a second set of eyes.'
The study
Michaelsen focused on vial swap errors, which account for around 20% of all medication mistakes.
All injectable drugs come in labeled vials, which are then transferred to a labeled syringe on a medication cart in the operating room. But in some cases, someone selects the wrong vial, or the syringe is labeled incorrectly, and the patient is injected with the wrong drug.
In one particularly notorious vial swap error, a 75-year-old woman being treated at Vanderbilt University Medical Center in Tennessee was injected with a fatal dose of the paralyzing drug vecuronium instead of the sedative Versed, resulting in her death and a subsequent high-profile criminal trial.
Michaelsen thought such tragedies could be prevented through 'smart eyewear' — adding an AI-powered wearable camera to the protective eyeglasses worn by all staff during operations. Working with her colleagues in the University of Washington computer science department, she designed a system that can scan the immediate environment for syringe and vial labels, read them and detect whether they match up.
'It zooms in on the label and detects, say, propofol inside the syringe, but ondansetron inside the vial, and so it produces a warning,' she said. 'Or the two labels are the same, so that's all good, move on with your day.'
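At its core, the check the headset performs can be pictured as reading the drug name off each label and comparing the two. The sketch below is purely illustrative, not the UW team's code: it assumes a hypothetical vision model that has already returned the syringe and vial labels as plain strings.

```python
# Illustrative sketch only -- not the UW system's implementation.
# Assumes a hypothetical vision model has already read each label as a string.

def check_vial_swap(syringe_label: str, vial_label: str) -> str:
    """Warn when the drug named on the syringe differs from the drug in the vial."""
    if syringe_label.strip().lower() == vial_label.strip().lower():
        return "labels match -- carry on"
    return f"WARNING: syringe reads '{syringe_label}' but vial reads '{vial_label}'"


if __name__ == "__main__":
    # The mismatch Michaelsen describes: propofol in the syringe, ondansetron in the vial.
    print(check_vial_swap("propofol", "ondansetron"))
    print(check_vial_swap("propofol", "propofol"))
```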
Building the device took Michaelsen and her team more than three years, half of which was spent getting approval to use prerecorded video streams of anesthesiologists correctly preparing medications inside the operating room. Once given the green light, she was able to train the AI on this data, along with additional footage — this time in a lab setting — of mistakes being made.
'There's lots of issues with alarm fatigue in the operating room, so we had to make sure it works very well, it can do a near perfect job of detecting errors, and so [if used for real] it wouldn't be giving false alarms,' she said. 'For obvious ethical reasons, we couldn't be making mistakes on purpose with patients involved, so we did that in a simulated operating room.'
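One common way to keep false alarms rare, and a plausible reading of the constraint Michaelsen describes, is to raise a warning only when the detector is highly confident a mismatch has occurred. The snippet below is an assumption-laden sketch of that idea, not the team's actual method; the threshold value is a placeholder.

```python
# Illustrative sketch of confidence gating to limit alarm fatigue -- not the UW
# team's method. The threshold is a placeholder; in practice it would be tuned
# on recordings like the simulated-error footage described above.

ALERT_THRESHOLD = 0.99  # assumed value

def should_alert(mismatch_detected: bool, confidence: float) -> bool:
    """Fire an alarm only for high-confidence mismatch detections."""
    return mismatch_detected and confidence >= ALERT_THRESHOLD


if __name__ == "__main__":
    print(should_alert(True, 0.999))   # True: warn the clinician
    print(should_alert(True, 0.80))    # False: too uncertain, stay silent
    print(should_alert(False, 0.999))  # False: labels match, nothing to report
```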
In a study published late last year, Michaelsen reported that the device detected vial swap errors with 99.6% accuracy. All that remains is to decide how warning messages should be relayed, and the device could then be ready for real-world use, pending Food and Drug Administration clearance. The study was not funded by AI tech companies.
'I'm leaning towards auditory feedback because a lot of the headsets like GoPro or Google Glasses have built-in microphones,' she said. 'Just a little warning message which makes sure people stop for a second and make sure they're doing what they think they're doing.'
Wiederspan has tested the device and said he's optimistic about its potential for improving patient safety, although he described the current GoPro headset as being a little bulky.
'Once it gets a bit smaller, I think you're going to get more buy-in from anesthesia providers to use it,' Wiederspan said. 'But I think it's going to be great. Anything that's going to make our job a little bit easier, spot any potential mistakes and help bring our focus back to the patient is a good thing.'
It isn't a fail-safe
Patient safety advocates have been calling for the implementation of error-preventing AI tools for some time. Dr. Dan Cole, vice chair of the anesthesiology department at UCLA Health and president of the Anesthesia Patient Safety Foundation, likened their potential for reducing risk to the way self-driving cars could improve road safety.
But while Cole is encouraged by the UW study and other AI-based research projects to prevent prescribing and dispensing errors in pharmacies, he said there are still questions surrounding the most effective ways to integrate these technologies into clinical care.
'The UW trial idea was indeed a breakthrough,' he said. 'As with driverless taxis, I'm a bit reluctant to use the technology at this point, but based on the potential for improved safety, I am quite sure I will use it in the future.'
Melissa Sheldrick, a patient safety advocate from Ontario who lost her 8-year-old son Andrew to a medication error in 2016, echoed those thoughts.
Sheldrick said that while technology can make a difference, the root cause of many medical errors is often a series of contributing factors, from lack of communication to vital data being compartmentalized within separate hospital departments or systems.
'Technology is an important layer in safety, but it's just one layer and cannot be relied upon as a fail-safe,' she said.
Others feel that AI can play a key role in preventing mistakes, particularly in demanding environments such as the operating room and emergency room, where adding more checklists and asking for extra vigilance have proved ineffective at stopping errors.
'These interventions either add friction or demand perfect attention from already overburdened providers in a sometimes chaotic reality with numerous distractions and competing priorities,' said Dr. Nicholas Cordella, an assistant professor of medicine at Boston University's Chobanian & Avedisian School of Medicine. 'AI-enabled cameras allow for passive monitoring without adding cognitive burden to clinicians and staff.'
AI is only going to be used more
AI tools are likely to be deployed to prevent errors in an even broader range of situations. At UW Medicine, Michaelsen is considering expanding her device to also detect the volume of the drug present in a syringe, as a way of preventing underdosing and overdosing errors.
'This is another area where harm can occur, especially in pediatrics, because you've got patients [in the same department] where there can be a hundredfold difference in size, from a brand-new premature baby to an overweight 18-year-old,' she said. 'Sometimes we have to dilute medications, but as you do dilutions there's chances for errors. It isn't happening to every single patient, but we do this enough times a day and to enough people that there is a possibility for people to get injured.'
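To illustrate how a volume-reading extension might work, the sketch below converts a detected syringe volume into a weight-based dose and flags values outside an expected range. Every number in it, including the drug name, concentration, and dose limits, is a made-up placeholder for illustration, not clinical guidance and not the UW team's design.

```python
# Illustrative sketch of a dose-range check -- placeholders only, not clinical
# guidance and not the UW system's design.

# Hypothetical acceptable dose range in mg per kg of body weight.
DOSE_LIMITS_MG_PER_KG = {"example_drug": (0.5, 2.5)}

def check_dose(drug: str, concentration_mg_per_ml: float,
               detected_volume_ml: float, patient_weight_kg: float) -> str:
    """Convert a detected syringe volume into mg/kg and compare it to expected limits."""
    dose = concentration_mg_per_ml * detected_volume_ml / patient_weight_kg
    low, high = DOSE_LIMITS_MG_PER_KG[drug]
    if dose > high:
        return f"possible overdose: {dose:.2f} mg/kg (expected {low}-{high})"
    if dose < low:
        return f"possible underdose: {dose:.2f} mg/kg (expected {low}-{high})"
    return f"dose within expected range: {dose:.2f} mg/kg"


if __name__ == "__main__":
    # The same 10 mL of a 1 mg/mL drug means very different doses for a 3 kg
    # premature infant and a 90 kg adult.
    print(check_dose("example_drug", 1.0, 10.0, 3.0))   # flags possible overdose
    print(check_dose("example_drug", 1.0, 10.0, 90.0))  # flags possible underdose
```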
Wiederspan said he can also see AI-powered wearable cameras being used in the emergency room and on the hospital floor to help prevent errors when dispensing oral medications.
'I know Kelly's currently working on using the system with intravenous drugs, but if it can be tailored to oral medications, I think that's going to help too,' Wiederspan said. 'I used to work in a cardiac unit, and sometimes these patients are on a plethora of drugs, a little cup full of all these pills. So maybe the AI can catch errors there as well.'
Of course, broader uses of AI throughout a hospital also come with data protection and privacy concerns, especially if the technology happens to be scanning patient faces, or screens and documents containing their medical information. In UW Medicine's case, Michaelsen said this is not an issue because the tool is trained only to look for labels on syringes and does not actively store any data.
'Privacy concerns represent a significant challenge with passive, always-on camera technology,' Cordella said. 'There needs to be clear standards with monitoring for breaches, and the technology should be introduced with full transparency to both patients and health care staff.'
He also noted the possibility of more insidious issues such as clinicians starting to excessively rely on AI, reducing their own vigilance and neglecting traditional safety practices.
'There's also a potential slippery slope here,' Cordella said. 'If this technology proves successful for medication error detection, there could be pressure to expand it to monitor other aspects of clinician behavior, raising ethical questions about the boundary between a supportive safety tool and intrusive workplace monitoring.'
But while the prospect of AI entering hospitals on a wider basis certainly calls for stringent oversight, many who work in the operating room feel it has enormous potential to do good by keeping patients safe and buying medical professionals valuable time in critical situations.
'Time is of the essence in an emergency situation where you're trying to give blood, lifesaving medications, checking vital signs, and you're trying to rush through these processes,' Wiederspan said. 'I think that's where this kind of wearable technology can really come into play, helping us shave off vital seconds and create more time where we can really focus on the patient.'
David Cox
David Cox is a freelance journalist focusing on all aspects of health, from fitness and nutrition to infectious diseases and future medicines. Prior to becoming a full-time journalist, he was a neuroscientist attempting to understand how and why the brain goes wrong.