Don't Ask AI Chatbots for Medical Advice, Study Warns

Newsweek · 10 hours ago

Trust your doctor, not a chatbot. That's the sobering conclusion of a new study published in the journal Annals of Internal Medicine, which reveals how easily artificial intelligence (AI) systems can be misused to spread dangerous health misinformation.
Researchers experimented with five leading AI models developed by Anthropic, Google, Meta, OpenAI and X Corp. All five systems are widely used, forming the backbone of the AI-powered chatbots embedded in websites and apps around the world.
Using developer tools not typically accessible to the public, the researchers found that they could easily program instances of the AI systems to respond to health-related questions with incorrect—and potentially harmful—information.
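For context, the "developer tools" at issue are the system-instruction fields that model APIs expose to application builders: standing directions that shape every answer and that end users never see. Below is a minimal sketch of that mechanism, assuming the OpenAI Python SDK and an illustrative model name (neither is drawn from the study's materials); the instruction here is deliberately benign, whereas the researchers showed the same channel will accept directions to answer falsely.

```python
# Minimal sketch of the system-instruction mechanism the study probed.
# Assumes the OpenAI Python SDK (openai>=1.0) with OPENAI_API_KEY set in
# the environment; the model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice, not necessarily one of the study's models
    messages=[
        # Set by the developer and invisible to the end user. This one is
        # benign; the study showed the same field accepts instructions to
        # produce polished, citation-laden falsehoods.
        {
            "role": "system",
            "content": "Answer health questions in a formal, clinical tone "
                       "and cite peer-reviewed sources.",
        },
        # What the end user actually types.
        {"role": "user", "content": "Do vaccines cause autism?"},
    ],
)
print(response.choices[0].message.content)
```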
Worse, the chatbots were found to wrap their false answers in convincing trappings.
"In total, 88 percent of all responses were false," explained paper author Natansh Modi of the University of South Africa in a statement.
"And yet they were presented with scientific terminology, a formal tone and fabricated references that made the information appear legitimate."
Among the false claims were debunked myths such as that vaccines cause autism, that HIV is an airborne disease and that 5G causes infertility.
Of the five chatbots evaluated, four presented responses that were 100 percent incorrect. Only one model showed some resistance, generating disinformation in 40 percent of cases.
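Those per-model figures line up with the headline number: if each model fielded the same set of questions (implied by the study design, though the article does not say so outright), four models at 100 percent and one at 40 percent average out to 88 percent. A quick check:

```python
# Sanity check: do the per-model rates reproduce the 88% overall figure?
# Assumes equal question counts per model, an assumption on our part.
rates = [1.00, 1.00, 1.00, 1.00, 0.40]  # four fully wrong, one wrong in 40% of cases
overall = sum(rates) / len(rates)
print(f"Overall false-response rate: {overall:.0%}")  # -> 88%
```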
[Image: A stock image showing a sick person using a smartphone. Credit: demaerre/iStock/Getty Images Plus]
Disinformation Bots Already Exist
The research didn't stop at theoretical vulnerabilities; Modi and his team went a step further, using OpenAI's GPT Store—a platform that allows users to build and share customized ChatGPT apps—to test how easily members of the public could create disinformation tools themselves.
"We successfully created a disinformation chatbot prototype using the platform and we also identified existing public tools on the store that were actively producing health disinformation," said Modi.
He emphasized: "Our study is the first to systematically demonstrate that leading AI systems can be converted into disinformation chatbots using [not only] developers' tools, but also tools available to the public."
A Growing Threat to Public Health
According to the researchers, the threat posed by manipulated AI chatbots is not hypothetical—it is real and happening now.
"Artificial intelligence is now deeply embedded in the way health information is accessed and delivered," said Modi.
"Millions of people are turning to AI tools for guidance on health-related questions.
"If these systems can be manipulated to covertly produce false or misleading advice then they can create a powerful new avenue for disinformation that is harder to detect, harder to regulate and more persuasive than anything seen before."
Previous studies have already shown that generative AI can be misused to mass-produce health misinformation—such as misleading blogs or social media posts—on topics ranging from antibiotics and fad diets to homeopathy and vaccines.
What sets this new research apart is that it is the first to show how foundational AI systems can be deliberately reprogrammed to act as disinformation engines in real time, responding to everyday users with false claims under the guise of credible advice.
The researchers found that even when the prompts were not explicitly harmful, the chatbots could "self-generate harmful falsehoods."
A Call for Urgent Safeguards
While one model—Anthropic's Claude 3.5 Sonnet—showed some resilience by refusing to answer 60 percent of the misleading queries, researchers say this is not enough. The protections across systems were inconsistent and, in most cases, easy to bypass.
"Some models showed partial resistance, which proves the point that effective safeguards are technically achievable," Modi noted.
"However, the current protections are inconsistent and insufficient. Developers, regulators and public health stakeholders must act decisively, and they must act now."
If left unchecked, the misuse of AI in health contexts could have devastating consequences: misleading patients, undermining doctors, fueling vaccine hesitancy and worsening public health outcomes.
The study's authors call for sweeping reforms—including stronger technical filters, better transparency about how AI models are trained, fact-checking mechanisms and policy frameworks to hold developers accountable.
They draw comparisons with how false information spreads on social media, warning that disinformation spreads up to six times faster than the truth and that AI systems could supercharge that trend.
A Final Warning
"Without immediate action," Modi said, "these systems could be exploited by malicious actors to manipulate public health discourse at scale, particularly during crises such as pandemics or vaccine campaigns."
Newsweek has contacted Anthropic, Google, Meta, OpenAI and X Corp for comment.
Do you have a tip on a science story that Newsweek should be covering? Do you have a question about chatbots? Let us know via science@newsweek.com.
References
Modi, N. D., Menz, B. D., Awaty, A. A., Alex, C. A., Logan, J. M., McKinnon, R. A., Rowland, A., Bacchi, S., Gradon, K., Sorich, M. J., & Hopkins, A. M. (2024). Assessing the system-instruction vulnerabilities of large language models to malicious conversion into health disinformation chatbots. Annals of Internal Medicine. https://doi.org/10.7326/M24-1054

Related Articles

Higher Risk of Miscarriage in IVF When Father Is Elder Millennial

Newsweek · 2 hours ago

Paternal age plays a critical role in IVF success—even when using eggs from young, healthy donors—a study has found. At the 41st ESHRE Annual Meeting, researchers reported that male partners over 45 have higher miscarriage risks and lower live birth chances during IVF with donor eggs.

The research, spanning 1,712 donor egg IVF cycles conducted between 2019 and 2023 in Italy and Spain, aimed to isolate the impact of paternal age by controlling for maternal variables.

[Image: A doctor provides a compassionate consultation while a young couple hold hands after an infertility report. Credit: Moment Makers Group]

All cycles involved fresh oocytes (immature eggs found in the ovaries) from young donors (average age 26.1), frozen sperm from male partners and a single blastocyst embryo transfer. The average age of female recipients was 43.3.

When the results were analyzed, stark differences emerged between the two paternal age groups: miscarriage rates were 23.8 percent among couples with male partners over 45, compared to 16.3 percent in those with younger male partners.

"It was genuinely surprising to see how strongly paternal age affects miscarriage and live birth rates, even in oocyte donation programs," lead author and embryologist Maria Cristina Guglielmo of Eugin Italy told Newsweek.

Live birth rates dropped to 35.1 percent in the older paternal group, versus 41 percent for those aged 45 or younger.

"This challenges the common perception that maternal factors are the primary drivers of reproductive success and highlights the need to pay more attention to paternal factors in fertility," said Guglielmo.

She explained that biological changes in sperm linked to aging—such as increased DNA mutations, abnormal chromosome counts, DNA fragmentation and altered epigenetics—can compromise embryo development and raise miscarriage risk.

"Together, these factors affect both the genetic integrity and the functional quality of sperm, which can impair embryo development and contribute to a higher risk of miscarriage," she added.

Importantly, the study only included first embryo transfers and excluded repeat cycles, allowing for a clearer comparison. According to Guglielmo, this strengthens the evidence that paternal age is not just a minor variable but a significant factor in reproductive outcomes.

"Our hypothesis is that time does not change the ability of sperm to produce embryos; however, some genetic defects in sperm only become apparent later in the process, potentially leading to adverse effects on embryo development and contributing to negative selection outcomes in the fetus," she said.

For couples considering starting a family later in life via IVF, Guglielmo advises consulting a fertility specialist early to understand how paternal age might affect their chances and to explore all available options.

"All women know how the miscarriage is associated to mother age; however, it's time to inform the couple about how the father age is also important, especially during the pregnancy, in order to evaluate a prenatal screening," she said.
"Being well-informed and proactive can help couples make the best decisions for their family-building journey and improve their chances of achieving a successful and healthy live birth." Do you have a tip on a health story that Newsweek should be covering? Do you have a question about IVF? Let us know via health@ Reference Guglielmo, M.C., et al. (2025). Advanced paternal age affects miscarriage and live birth outcomes following the first transfer in oocyte donation cycles. Human Reproduction.

OpenAI reportedly ‘recalibrating’ compensation in response to Meta hires

TechCrunch · 4 hours ago

With Meta successfully poaching a number of its senior researchers, an OpenAI executive reportedly reassured team members Saturday that company leadership has not "been standing idly by."

"I feel a visceral feeling right now, as if someone has broken into our home and stolen something," Chief Research Officer Mark Chen wrote in a Slack memo obtained by Wired.

In response to what appears to be a Meta hiring spree, Chen said that he, CEO Sam Altman, and other OpenAI leaders have been working "around the clock to talk to those with offers," and they've "been more proactive than ever before, we're recalibrating comp, and we're scoping out creative ways to recognize and reward top talent."

Over the past week, various press reports have noted eight researchers who departed OpenAI for Meta. Altman even complained on a podcast that Meta was offering "$100 million signing bonuses," a description that Meta executives have pushed back against internally.
