From chatbot to hospital bed: how a dash of salt became a dangerous AI prescription

The Star · a day ago
'Hey ChatGPT, how do I eat healthier salt?' It sounds innocent enough: the kind of question many of us might ask online as we navigate the endless maze of health advice. Yet incidents involving misplaced trust in AI are mounting.

A recent real-life story urges us all, especially the health-conscious, to pause and reflect on how much we depend on artificial intelligence (AI) for our wellbeing and health decisions. Can AI be trusted with our health?
A 60-year-old man was hospitalised earlier this year after seeking advice from ChatGPT on reducing the sodium chloride (table salt) in his diet, The Independent reported.

According to a case report published by the American College of Physicians (ACP), the man, drawing on his college nutrition background, decided to conduct a personal experiment: swap out table salt for sodium bromide purchased online, as suggested by the AI chatbot.
Within months, the man, who had no previous history of mental or physical illness, began to suffer from paranoia, hallucinations, severe thirst and unusual skin eruptions. According to The Times of India, he was eventually admitted to hospital with severe paranoia and hallucinations: 'He displayed confusion and even refused water, fearing contamination. Doctors diagnosed him with bromide toxicity, a condition now almost unheard of but once common when bromide was prescribed for anxiety, insomnia, and other ailments.'
After three weeks in the hospital, the man recovered, but not before his story became a wake-up call for anyone who relies on AI for health advice.
OpenAI, ChatGPT's developer, explicitly states in its Terms of Use: 'You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice.'
The terms also clarify that the service is not intended for diagnosing or treating medical conditions.
Researchers at the American College of Physicians echo the same sentiment, warning that AI tools can spread health misinformation: 'It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, cannot critically discuss results, and ultimately fuel the spread of misinformation,' the case report warned.

Related Articles

From chatbot to hospital bed: how a dash of salt became a dangerous AI prescription
The Star · a day ago

How ChatGPT's medical advice nearly cost a man his life
IOL News · 2 days ago

Can AI chatbots really be trusted with our health? The digital revolution has brought with it unprecedented ease of access to information. However, as our reliance on artificial intelligence (AI) grows, a new, potentially lethal threat has emerged: the dangerously persuasive power of AI chatbots when it comes to our health. A quick online search for symptoms has long been a source of anxiety, but a recent, alarming case highlights how generative AI can lead to severe, real-world harm.

A 60-year-old man, seeking to reduce his salt intake, turned to ChatGPT for dietary advice instead of consulting a medical professional. The chatbot suggested he replace sodium chloride (common table salt) with sodium bromide. Sodium bromide, while it has some historic medicinal uses, is now primarily an industrial chemical and is toxic in large doses. Believing the AI's confident assertion, the man sourced the compound online and consumed it for three months. What followed was a terrifying descent into bromide toxicity, a condition now so rare it is almost unheard of.

He was admitted to hospital with a cocktail of severe symptoms, including paranoia, auditory and visual hallucinations, extreme thirst and ataxia, a neurological condition affecting muscle coordination. The man, who had no prior psychiatric history, became so suspicious that he believed his neighbour was poisoning him, and even refused water from hospital staff. His condition improved only after he was treated with fluids and electrolytes in the hospital's inpatient psychiatric unit, where he was finally diagnosed. The doctors who treated him noted that his symptoms, which also included acne and small, red growths on the skin, were classic signs of bromism.

This case serves as a stark warning of the problem with AI: an incorrect question can lead to a dangerously incorrect answer, which the AI presents with an air of absolute authority.

From chatbot to hospital bed: how a dash of salt became a dangerous AI prescription
IOL News · 3 days ago

Image: While AI can shine in areas such as providing lifestyle tips and tech advice, experts warn that its allure can pose significant risks in healthcare decisions. (Kampus Production/Pexels)

The double-edged sword of AI health advice

There's no denying it: AI chatbots like ChatGPT have become part of our daily lives. They offer quick answers, support and even companionship. According to a 2023 Pew Research Center study, nearly 60% of Americans have used an AI chatbot for questions about health, relationships or work.

However, experts warn that AI is not a replacement for professional healthcare. As Dr John Torous, director of the Digital Psychiatry Division at Harvard Medical School, puts it: 'AI chatbots can be helpful for general education, but they do not possess the nuanced understanding of a human clinician who knows your medical history, context, and unique needs.'

AI chatbots like ChatGPT are designed to generate text based on patterns in vast amounts of data, not to offer personalised medical advice. Dr Mike Varshavski, a family medicine physician known for health education on social media, has cautioned in interviews that while AI can be a great learning tool, 'it lacks the nuance of a trained professional who considers your history, symptoms, and lifestyle.' The danger lies in the fact that AI can produce information confidently even when it is wrong, a phenomenon known in AI research as hallucination.

Beyond the medical risks, experts warn about the emotional pull of AI. Many people, especially during periods of isolation such as the Covid-19 pandemic, have turned to chatbots for companionship, advice and even therapy-like conversations. The lines blur when a tool designed for convenience becomes a substitute for real human connection or professional guidance. We've seen it with Replika, an AI companion app whose users reported forming deep emotional bonds, sometimes to the point of avoiding real-world relationships. While these experiences can feel comforting, psychologists caution that AI lacks the empathy and ethical responsibility of human support systems.

Part of AI's appeal is convenience: it's fast, available 24/7 and never judges. For lifestyle advice, recipes or tech tips, it can be brilliant. But when it comes to health and wellness, misplaced trust can be dangerous.
