
60-year-old man lands in hospital after following ChatGPT's advice to eliminate salt from diet
The incident has sparked fresh concerns about relying on AI tools like ChatGPT for medical guidance, especially without consulting healthcare professionals. The case was recently detailed in a report published in the American College of Physicians Journals.
According to the report, the man asked ChatGPT how to eliminate sodium chloride (commonly known as table salt) from his diet. In response, he replaced it with sodium bromide, a substance once commonly used in medications in the early 1900s but now known to be toxic in large quantities. He reportedly used sodium bromide, sourced online, for three months, acting on what he had read from the AI chatbot.
The man, who had no prior history of psychiatric or physical health issues, was admitted to the hospital after experiencing hallucinations, paranoia, and severe thirst. During his initial 24 hours in care, he showed signs of confusion and refused water, suspecting it was unsafe.
Doctors soon diagnosed him with bromide toxicity, a condition that is now extremely rare but was once more common when bromide was used to treat anxiety, insomnia, and other conditions. Symptoms include neurological disturbances, skin issues such as acne, and red skin spots known as cherry angiomas, all of which the man displayed.
'Inspired by his past studies in nutrition, he decided to run a personal experiment to remove chloride from his diet,' the report noted. He told doctors he had seen on ChatGPT that bromide could be used in place of chloride, though the source seemed to reference industrial rather than dietary use.
Following three weeks of treatment with fluids and electrolyte repletion, the man was stabilised and discharged from the hospital.
The authors of the case study warned about the growing risk of misinformation from AI: 'It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, cannot critically discuss results, and ultimately fuel the spread of misinformation.'
OpenAI, the developer of ChatGPT, acknowledges this in its Terms of Use, stating: 'You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice.'
The terms further clarify: 'Our Services are not intended for use in the diagnosis or treatment of any health condition.'
The alarming case adds to the ongoing global conversation about the limitations of, and responsibilities around, AI-generated advice, particularly in matters involving physical and mental health.

Related Articles


NDTV · 3 hours ago
Man Nearly Poisons Himself Following ChatGPT's Advice To Remove Salt From Diet
A 60-year-old man was hospitalised after he asked ChatGPT how to remove salt (sodium chloride) from his diet, having read about the negative health effects of table salt. After consulting the artificial intelligence (AI) chatbot, the man removed salt from his diet and replaced it with sodium bromide, a substance once commonly used in medications in the early 1900s but now known to be toxic in large quantities.

According to the case report published in the American College of Physicians Journals, the patient had been using sodium bromide, sourced online, for three months after seeking advice from the AI. After developing health issues, the man was hospitalised, where he claimed that his neighbour was poisoning him. Initially, he did not report taking any medications or supplements, but upon admission he revealed that he maintained dietary restrictions and distilled his own water at home. During the course of the hospitalisation, he developed severe neuropsychiatric symptoms, including paranoia and hallucinations, along with dermatological issues. "He was noted to be very thirsty but paranoid about water he was offered," the case report read, adding that he was treated with fluids and electrolytes and became medically stable, allowing him to be admitted to the hospital's inpatient psychiatry unit.

The report highlighted that the patient had developed bromism after asking ChatGPT for advice on his diet. "He had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning," it noted.

AI for health advice?
Bromide salts were once found in many over-the-counter medications to treat insomnia, hysteria and anxiety, but ingesting too much can have severe health consequences. The case report warns that AI systems like ChatGPT can generate inaccuracies and spread misinformation, a point echoed by OpenAI's own terms of use. While much of the debate has been about AI chatbots being used for therapy and mental health, the case shows that the technology cannot be relied on to correctly guide users about their physical health, either.


Time of India · 13 hours ago
ChatGPT advice lands 60-year-old man in hospital; the reason will surprise you
A 60-year-old man was hospitalised after following a salt-elimination plan suggested by ChatGPT. According to doctors, he cut sodium chloride from his diet and replaced it with sodium bromide, leading to bromide toxicity. He reportedly relied on the AI-generated health plan without consulting a physician. The case, recently published in the American College of Physicians journal, highlights the risks of applying AI health advice without professional oversight, particularly when it involves essential nutrients like sodium. The man recovered after spending three weeks in hospital.

ChatGPT advice leads to dangerous substitute
According to the report, the man asked ChatGPT how to eliminate sodium chloride (commonly known as table salt) from his diet. The AI tool suggested sodium bromide as an alternative, a compound once used in early 20th-century medicines but now recognised as toxic in large doses. Acting on this advice, the man purchased sodium bromide online and used it in his cooking for three months.

With no previous history of mental or physical illness, the man began experiencing hallucinations, paranoia, and extreme thirst. Upon hospital admission, he displayed confusion and even refused water, fearing contamination. Doctors diagnosed him with bromide toxicity, a condition now almost unheard of but once common when bromide was prescribed for anxiety, insomnia, and other ailments. He also exhibited neurological symptoms, acne-like skin eruptions, and distinctive red spots known as cherry angiomas, all classic signs of bromism.

Hospital treatment focused on rehydration and restoring electrolyte balance. Over the course of three weeks, the man's condition gradually improved, and he was discharged once his sodium and chloride levels returned to normal.

AI misinformation risks
The authors of the case study stressed the growing risk of health misinformation from AI tools. 'It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, cannot critically discuss results, and ultimately fuel the spread of misinformation,' the report warned. OpenAI, ChatGPT's developer, explicitly states in its Terms of Use: 'You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice.' The terms also clarify that the service is not intended for diagnosing or treating medical conditions.

A global conversation about AI responsibility
This case highlights the urgent need for critical thinking when interpreting AI-generated advice, especially in matters involving health. Experts say AI tools can be valuable for general information but should never replace professional consultation. As AI adoption grows, so too does the responsibility to ensure that its outputs are accurate, safe, and clearly understood by the public.


Time of India · 13 hours ago
Man swaps table salt for toxic bromide after ChatGPT advice, lands in hospital with rare poisoning
What began as a seemingly harmless dietary tweak turned into a dangerous medical ordeal for a 60-year-old man. After reading about the potential harms of sodium in table salt, he became fixated on eliminating chloride from his diet. Seeking alternatives, he turned to ChatGPT for guidance.

Diet experiment from ChatGPT led to a hospital bed
According to his doctors, the man reported that the AI suggested bromide as a possible substitute. Without professional consultation, he replaced all sodium chloride in his meals with sodium bromide purchased online, a chemical compound long removed from over-the-counter medicines because of its toxic effects when consumed chronically.

The comeback of a forgotten poison, thanks to ChatGPT
Bromide was once a staple in sedatives and anticonvulsants during the 19th and 20th centuries. Overuse, however, led to a condition called bromism, a toxidrome marked by neurological and psychiatric symptoms ranging from confusion and memory loss to paranoia and full-blown psychosis. By the 1980s, regulators had stripped bromide from consumer medicines, making such poisonings rare. Yet, with online sales of bromide-containing products, isolated cases are resurfacing. In this instance, three months into his self-devised bromide diet, the man began experiencing paranoia, convinced his neighbour was poisoning him.

Unravelling the symptoms
Tests initially suggested elevated chloride levels, but further investigation revealed pseudohyperchloremia, a false reading caused by high bromide levels interfering with the laboratory analysis. Alongside paranoia and hallucinations, the man also suffered insomnia, poor muscle coordination, and extreme thirst. During his hospital stay, his paranoia worsened, leading to an involuntary psychiatric hold. Doctors began antipsychotic treatment while replenishing fluids and electrolytes. His condition improved over the next three weeks, and he was eventually discharged, stable and symptom-free at follow-up.

A cautionary tale on AI health advice
Medical experts noted that a qualified health professional would be unlikely to suggest bromide as a salt substitute. The case, published in Annals of Internal Medicine Clinical Cases, underscores the dangers of acting on AI-generated health advice without professional input. As the authors concluded, AI can bridge information gaps, but without safeguards it risks promoting dangerous, decontextualised recommendations. In the age of instant answers, this man's story is a stark reminder that not every quick solution is a safe one.