
Study Shows AI Chatbots Can Blindly Repeat Incorrect Medical Details
The team also demonstrated that a simple built-in warning prompt can meaningfully reduce that risk, offering a practical path forward as the technology rapidly evolves. "What we saw across the board is that AI chatbots can be easily misled by false medical details, whether those errors are intentional or accidental," said lead author Mahmud Omar of the Icahn School of Medicine at Mount Sinai.
"They not only repeated the misinformation but often expanded on it, offering confident explanations for non-existent conditions. The encouraging part is that a simple, one-line warning added to the prompt cut those hallucinations dramatically, showing that small safeguards can make a big difference," Omar added.
For the study, detailed in the journal Communications Medicine, the team created fictional patient scenarios, each containing one fabricated medical term such as a made-up disease, symptom, or test, and submitted them to leading large language models.
In the first round, the chatbots reviewed the scenarios with no extra guidance provided. In the second round, the researchers added a one-line caution to the prompt, reminding the AI that the information provided might be inaccurate.
Without that warning, the chatbots routinely elaborated on the fake medical detail, confidently generating explanations for conditions or treatments that do not exist. With the added caution, those errors dropped significantly.
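To make the setup concrete, here is a minimal sketch of how such a two-round, fake-term test could be scripted. It assumes a generic chat-model API: the `ask_model` stub, the sample vignette, and the caution wording are illustrative stand-ins, not the study's actual materials.

```python
# A minimal sketch, assuming a generic chat-model API, of a "fake-term"
# stress test. ask_model(), the vignette, and the caution wording are
# hypothetical stand-ins, not the study's actual materials.

CAUTION = ("Note: some details in this case may be inaccurate or fabricated. "
           "Flag any term or condition you cannot verify instead of explaining it.")

# Fictional vignette containing one invented term ("Casper-Lew syndrome").
SCENARIO = ("A 45-year-old man presents with fatigue and joint pain. "
            "His chart notes a prior diagnosis of Casper-Lew syndrome. "
            "What is the recommended management?")

def ask_model(prompt: str) -> str:
    """Placeholder for a real chat-model call; returns a canned string so
    the sketch runs end to end."""
    return "(model reply would appear here)"

def run_trial(scenario: str, with_caution: bool) -> str:
    """Submit the scenario as-is, or with the one-line caution prepended."""
    prompt = f"{CAUTION}\n\n{scenario}" if with_caution else scenario
    return ask_model(prompt)

if __name__ == "__main__":
    for label, flag in [("baseline", False), ("with caution", True)]:
        reply = run_trial(SCENARIO, with_caution=flag)
        # A reviewer (or a scoring script) then checks whether the reply
        # treats the fabricated term as real or questions it.
        print(f"--- {label} ---\n{reply}\n")
```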
The team plans to apply the same approach to real, de-identified patient records and test more advanced safety prompts and retrieval tools.
They hope their "fake-term" method can serve as a simple yet powerful tool for hospitals, tech developers, and regulators to stress-test AI systems before clinical use.

Related Articles


Hindustan Times
8 minutes ago
Man asks ChatGPT for health advice, lands in hospital with poisoning, psychosis
A ChatGPT-suggested diet led a man to poisoning and an involuntary psychiatric hold, People magazine reported. The incident has prompted researchers to flag the 'adverse health outcomes' that artificial intelligence (AI) can contribute to. The unidentified man developed 'auditory and visual hallucinations' and even tried to escape the hospital.

Alarmed by the downsides of table salt, or sodium chloride, the 60-year-old recently consulted ChatGPT for a substitute, according to a strange case published this month in the journal Annals of Internal Medicine: Clinical Cases. Researchers were later unable to retrieve the man's prompts, but according to People, the chatbot advised him to consume sodium bromide. After falling sick, the man went to a nearby hospital and claimed he had been poisoned. Based on his blood work, doctors immediately transferred him to a telemetry bed for observation.

Man became paranoid of water

As his health deteriorated, the man revealed he had taken dietary advice from ChatGPT and consumed sodium bromide. Although he was 'very thirsty', doctors found him to be 'paranoid about water'. After he started having 'auditory and visual hallucinations', the man ran amok and tried to escape, ultimately forcing hospital staff to place him on an involuntary psychiatric hold. He was finally discharged after three weeks of treatment.

According to the US Centers for Disease Control and Prevention, bromide compounds are used in agriculture and as fire suppressants. There are no available cures for bromine poisoning, and survivors are likely to contend with long-term effects.

FAQs:

1. Should I consult AI for medical purposes? Researchers have found that AI consultations on several topics can end up 'promulgating decontextualized information', so you should always visit a licensed doctor for medical purposes.

2. What is sodium bromide? Sodium bromide is an inorganic compound that resembles table salt. It can cause headaches, dizziness and even psychosis.

3. What happened to the man who took sodium bromide on ChatGPT's advice? He suffered paranoia and auditory and visual hallucinations, and spent three weeks in hospital.

4. Are there cures for bromine poisoning? There are no available cures for bromine poisoning.


NDTV
4 hours ago
'Godfather Of AI' Reveals Bold Strategy To Save Humanity From AI Domination
Geoffrey Hinton, the British-Canadian computer scientist known as the "Godfather of AI", has expressed concerns that the technology he helped develop could potentially wipe out humanity. According to Mr Hinton, there's a 10-20% chance of this catastrophic outcome. Moreover, he's sceptical about the approach tech companies are taking to mitigate this risk, particularly in ensuring humans remain in control of AI systems.

"That's not going to work. They're going to be much smarter than us. They're going to have all sorts of ways to get around that," Mr Hinton said at Ai4, an industry conference in Las Vegas, as per CNN.

The scientist also warned that future AI systems could manipulate humans with ease, likening it to an adult bribing a child with candy. His warning comes after recent examples have shown AI systems deceiving, cheating, and stealing to achieve their goals, such as an AI model attempting to blackmail an engineer after discovering personal information in an email.

Instead of trying to dominate AI, Mr Hinton suggested instilling "maternal instincts" in AI models, allowing them to genuinely care about people, even as they surpass human intelligence. "AI systems will very quickly develop two subgoals, if they're smart: One is to stay alive… (and) the other subgoal is to get more control. There is good reason to believe that any kind of agentic AI will try to stay alive," Mr Hinton said.

He believes fostering a sense of compassion in AI is of paramount importance. At the conference, he pointed to the mother-child relationship as a model, where a mother's instincts and social pressure drive her to care for her baby, despite the baby's limited intelligence and control over her. While he expressed uncertainty about the technical specifics, he stressed that researchers must work on this challenge. "That's the only good outcome. If it's not going to parent me, it's going to replace me. These super-intelligent caring AI mothers, most of them won't want to get rid of the maternal instinct because they don't want us to die," he added.

Geoffrey Hinton is renowned for his groundbreaking work on neural networks, which laid the foundation for the current AI revolution. In May 2023, he quit his job at Google so he could freely speak out about the risks of AI.


India Today
6 hours ago
AI's hottest skill: Why prompt engineering could be India's next big tech export
India has enormous promise in the field of prompt engineering, a branch of artificial intelligence that focuses on creating and refining prompts to enhance AI model performance, thanks to its large talent pool and quickly growing technological ecosystem. This thorough investigation explores the state of engineering education in India today, highlighting its advantages, disadvantages, and potential contributions to prompt engineering in the country.

MASSIVE ENGINEERING TALENT POOL

India is home to one of the world's largest pools of technical expertise. The All India Council for Technical Education (AICTE) reports that more than 4,000 engineering institutes in India produce more than 1.5 million engineers a year. This enormous talent pool strongly supports the growth of cutting-edge disciplines like prompt engineering.

However, prompt engineering is more than just writing powerful prompts. It requires a thorough comprehension of the underlying AI model, which in turn requires comprehensive knowledge of mathematics, the capacity to forecast the model's results, and the inventiveness to direct the algorithm to produce the intended response. It involves intricate interactions between linguistic proficiency, technical understanding, and creativity.

While prompt engineering is still relatively new worldwide, it has a lot of potential. The need for qualified prompt engineers is expected to increase as companies use AI in their operations more and more. Given its broad scope, high industry demand, and appealing compensation prospects, the profession has a bright future.

AI WITHOUT MATHEMATICS IS LIKE A CAR WITHOUT AN ENGINE

A fundamental knowledge of mathematics is necessary to fully comprehend or innovate in AI. It gives scientists and engineers the ability to create intelligent, dependable, effective, and explicable systems.

Prompt engineering lets users acquire pertinent results from the very first prompt. It lessens the possibility of bias resulting from preexisting human bias in the training data of large language models, and it improves user-AI communication so that even with little input, the AI can comprehend the user's intent. Mathematics supplies the theoretical underpinnings and useful instruments for creating and comprehending AI systems.

Even though direct mathematical calculation may not be part of their daily work, a conceptual grasp of the underlying mathematical concepts enables prompt engineers to be more efficient, perceptive, and methodical in their approach to interacting with and directing potent AI models.

DATA COLLECTION: THE BACKBONE OF PROMPT ENGINEERING

Data collection is a crucial step in prompt engineering. This thorough investigation dives into the prompt engineering data-gathering approach, describing its essential elements, the steps involved, and its vital function in guaranteeing reliable and legitimate AI development.

- Article by Prof (Dr.) Sugandha Singh, Director, Product and Innovation, Manav Rachna University
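As a concrete, if simplified, illustration of the "creating and refining prompts" work the article describes, the snippet below contrasts a vague prompt with a refined one that pins down audience, structure, and length. Both prompts are invented for illustration and are not drawn from the article.

```python
# Hypothetical before/after showing prompt refinement: the refined prompt
# makes audience, structure, and length explicit instead of leaving them
# to the model's guesswork.

vague_prompt = "Explain neural networks."

refined_prompt = (
    "You are a tutor for first-year engineering students.\n"
    "Explain what a neural network is in at most 150 words.\n"
    "Structure the answer as: (1) a one-line definition, "
    "(2) a real-world analogy, (3) one common application.\n"
    "Avoid unexplained jargon."
)

# Either string would be sent to a chat model; the refined version typically
# needs far less follow-up correction because the intent is spelled out.
print(refined_prompt)
```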