AI Helps Prevent Medical Errors in Real-World Clinics

There has been a lot of talk about AI's potential in health care, but most studies so far have been stand-ins for the actual practice of medicine: simulated scenarios that predict what AI's impact could be in medical settings.
But in one of the first real-world tests of an AI tool, working side-by-side with clinicians in Kenya, researchers showed that AI can reduce medical errors by as much as 16%.
In a study available on OpenAI.com that is being submitted to a scientific journal, researchers at OpenAI and Penda Health, a network of primary care clinics operating in Nairobi, found that an AI tool can provide a powerful assist to busy clinicians who can't be expected to know everything about every medical condition. Penda Health employs clinicians who are trained for four years in basic health care: the equivalent of physician assistants in the U.S. The health group, which operates 16 primary care clinics in Nairobi, Kenya, has its own guidelines for helping clinicians navigate symptoms, diagnoses, and treatments, and relies on national guidelines as well. But the span of knowledge required is challenging for any practitioner.
That's where AI comes in. 'We feel it acutely because we take care of such a broad range of people and conditions,' says Dr. Robert Korom, chief medical officer at Penda. 'So one of the biggest things is the breadth of the tool.'
Previously, Korom says, he and his colleague Dr. Sarah Kiptinness, head of medical services, had to create separate guidelines for each scenario clinicians might commonly encounter: uncomplicated malaria cases, for example, or malaria cases in adults, or cases in which patients have low platelet counts. AI is well suited to amassing all of this knowledge and dispensing it under the appropriate conditions.
Korom and his team built the first versions of the AI tool as a basic shadow for the clinician. If the clinician had a question about what diagnosis to make or what treatment protocol to follow, he or she could hit a button that would pull up a block of related text collated by the AI system to inform the decision. But clinicians used the feature in only about half of visits, says Korom, because they didn't always have time to read the text, or because they often felt they didn't need the added guidance.
So Penda improved the tool, now called AI Consult, so that it runs silently in the background of visits, essentially shadowing the clinicians' decisions and prompting them only if they take questionable or inappropriate actions, such as overprescribing antibiotics.
'It's like having an expert there,' says Korom—similar to how a senior attending physician reviews the care plan of a medical resident. 'In some ways, that's how [this AI tool] is functioning. It's a safety net—it's not dictating what the care is, but only giving corrective nudges and feedback when it's needed.'
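Penda has not published AI Consult's code, but the behavior described above, silently reviewing each visit and surfacing a nudge only when something looks off, maps onto a simple request-and-flag pattern. Below is a minimal sketch of that pattern in Python against OpenAI's chat API; the model name, prompt wording, and JSON shape are assumptions made for illustration, not the study's actual implementation.

```python
# Minimal sketch of a background "safety net" review, in the spirit of
# AI Consult as described above. Prompt wording, model choice, and data
# shapes are illustrative assumptions, not Penda's implementation.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a clinical safety reviewer. Given a visit summary and a "
    "proposed plan, reply with JSON of the form "
    '{"severity": "green" | "yellow" | "red", "nudge": "<one sentence>"}. '
    "Use green when the plan looks appropriate; flag only questionable "
    "actions, such as unnecessary antibiotics."
)

def review_plan(visit_summary: str, proposed_plan: str) -> dict:
    """Silently review a clinician's plan; return a severity plus a nudge."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption; stand-in for whatever model Penda used
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Visit: {visit_summary}\nPlan: {proposed_plan}"},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

verdict = review_plan(
    "Adult with fever and a positive malaria rapid test; platelets normal",
    "Artemether-lumefantrine plus a five-day course of amoxicillin",
)
if verdict["severity"] != "green":  # stay silent unless a nudge is needed
    print(f"[{verdict['severity'].upper()}] {verdict['nudge']}")
```

The key design choice the article describes is in the last two lines: the tool stays out of the way on "green" and interrupts only when the model flags something, which is what distinguishes a safety net from a dictation of care.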
Penda teamed up with OpenAI to conduct a study of AI Consult to document the impact it was having on reducing errors, both in making diagnoses and in prescribing treatments, across roughly 20,000 patient visits in which clinicians used the tool. That group reduced errors in diagnosis by 16% and treatment errors by 13% compared with roughly 20,000 comparable Penda visits in which clinicians worked without it.
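For context on what a figure like "16%" means, it is a relative reduction, computed from the error rates in the two groups. The rates below are invented purely to illustrate the arithmetic; they are not the study's reported numbers.

```python
# Relative error reduction with made-up rates; not the study's figures.
control_rate = 0.25  # hypothetical diagnostic error rate without AI Consult
treated_rate = 0.21  # hypothetical diagnostic error rate with AI Consult

relative_reduction = (control_rate - treated_rate) / control_rate
print(f"Relative reduction: {relative_reduction:.0%}")  # prints "16%"
```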
The fact that the study involved thousands of patients in a real-world setting sets a powerful precedent for how AI could be effectively used in providing and improving health care, says Dr. Isaac Kohane, professor of biomedical informatics at Harvard Medical School, who looked at the study. 'We need much more of these kinds of prospective studies as opposed to the retrospective studies, where [researchers] look at big observational data sets and predict [health outcomes] using AI. This is what I was waiting for.'
Not only did the study show that AI can help reduce medical errors, and therefore improve the quality of care patients receive, but the clinicians involved also came to view the tool as a useful partner in their medical education. That came as a surprise to Karan Singhal, OpenAI's Health AI lead, who led the study. 'It was a learning tool for [those who used it] and helped them educate themselves and understand a wider breadth of care practices that they needed to know about,' says Singhal. 'That was a bit of a surprise, because it wasn't what we set out to study.'
Kiptinness says AI Consult served as an important confidence builder, helping clinicians gain experience in an efficient way. 'Many of our clinicians now feel that AI Consult has to stay in order to help them have more confidence in patient care and improve the quality of care.'
Clinicians get immediate feedback in the form of a green, yellow, and red-light system that evaluates their clinical actions, and the company gets automatic evaluations of clinicians' strengths and weaknesses. 'Going forward, we do want to give more individualized feedback, such as: you are great at managing obstetric cases, but in pediatrics, these are the areas you should look into,' says Kiptinness. 'We have many ideas for customized training guides based on the AI feedback.'
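The individualized feedback Kiptinness describes falls out naturally once each visit's traffic-light flag is logged. Here is a toy sketch, assuming a simple per-visit log of (clinician, case type, severity) entries that such a tool could emit; the log format is an assumption, not Penda's.

```python
# Toy aggregation of per-visit traffic-light flags into per-clinician,
# per-case-type feedback. The log format is an illustrative assumption.
from collections import Counter, defaultdict

visit_flags = [  # (clinician, case type, severity), one entry per visit
    ("Achieng", "obstetrics", "green"),
    ("Achieng", "obstetrics", "green"),
    ("Achieng", "pediatrics", "yellow"),
    ("Achieng", "pediatrics", "red"),
]

by_area = defaultdict(Counter)
for clinician, case_type, severity in visit_flags:
    by_area[(clinician, case_type)][severity] += 1

for (clinician, case_type), counts in sorted(by_area.items()):
    flagged = counts["yellow"] + counts["red"]
    total = sum(counts.values())
    print(f"{clinician} / {case_type}: {flagged}/{total} visits flagged")
```

Run on the sample log, this prints zero flagged obstetric visits and two flagged pediatric ones: exactly the shape of "great at obstetrics, review pediatrics" feedback described above.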
Such co-piloting could be a practical and powerful way to start incorporating AI into the delivery of health care, especially in areas with high need and few health care professionals. The findings have 'shifted what we expect as standard of care within Penda,' says Korom. 'We probably wouldn't want our clinicians to be completely without this.'
The results also set the stage for more meaningful studies of AI in health care that move the practice from theory to reality. Dr. Ethan Goh, executive director of the Stanford AI Research and Science Evaluation network and associate editor of the journal BMJ Digital Health & AI, anticipates that the study will inspire similar ones in other settings, including in the U.S. 'I think that the more places that replicate such findings, the more the signal becomes real in terms of how much value [from AI-based systems] we can capture,' he says. 'Maybe today we are just catching mistakes, but what if tomorrow we are able to go beyond, and AI suggests accurate plans before a doctor makes mistakes to begin with?'
Tools like AI Consult may extend access to health care even further by putting it in the hands of non-medical workers, such as social workers, or by providing more specialized care in areas where such expertise is unavailable. 'How far can we push this?' says Korom.
The key, he says, is to develop, as Penda did, a highly customized model that accurately reflects the workflow of the providers and patients in a given setting. Penda's AI Consult, for example, focuses on the diseases most likely to occur in Kenya and the symptoms clinicians are most likely to see. If such factors are taken into account, he says, 'I think there is a lot of potential there.'