Latest news with #AmericanCollegeofPhysiciansJournals


NDTV
2 days ago
- Health
- NDTV
Man Nearly Poisons Himself Following ChatGPT's Advice To Remove Salt From Diet
A 60-year-old man was hospitalised after he asked ChatGPT how to remove salt (sodium chloride) from his diet, having read about the negative health effects of table salt. After consulting the artificial intelligence (AI) chatbot, the man removed salt from his diet and replaced it with sodium bromide, a substance once commonly used in medications in the early 1900s but now known to be toxic in large quantities. According to the case report published in the American College of Physicians Journals, the patient had been using sodium bromide, which he had sourced online after seeking advice from the AI, for three months.

After developing health issues, the man was hospitalised, where he claimed that his neighbour was poisoning him. Initially, he did not report taking any medications or supplements, but upon admission he revealed that he followed dietary restrictions and distilled his own water at home. During the course of the hospitalisation, he developed severe neuropsychiatric symptoms, including paranoia and hallucinations, along with dermatological issues. "He was noted to be very thirsty but paranoid about water he was offered," the case report read, adding that he was treated with fluids and electrolytes and became medically stable, allowing him to be admitted to the hospital's inpatient psychiatry unit.

The report noted that the patient had developed bromism after asking ChatGPT for advice on his diet. "He had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning," the report highlighted.

AI for health advice?

In the early 1900s, bromide salts were found in many over-the-counter medications to treat insomnia, hysteria and anxiety. However, ingesting too much can have severe health consequences. The case report warns that AI systems like ChatGPT can generate inaccuracies and spread misinformation, a point echoed by OpenAI's own terms of use. While much of the debate has been about AI chatbots being used for therapy and mental health, the case shows that the technology is not able to correctly guide users about their physical health, either.


Mint
2 days ago
- Health
- Mint
60-year-old man lands in hospital after following ChatGPT's advice to eliminate salt from diet
A 60-year-old man's attempt to eat healthier by cutting salt from his diet took a dangerous turn after he followed advice from ChatGPT. His decision ultimately led to a hospital stay and a diagnosis of a rare and potentially life-threatening condition called bromism. The incident has sparked fresh concerns about relying on AI tools like ChatGPT for medical guidance, especially without consulting healthcare professionals.

The case was recently detailed in a report published in the American College of Physicians Journals. According to the report, the man asked ChatGPT how to eliminate sodium chloride (commonly known as table salt) from his diet. In response, he replaced it with sodium bromide, a substance once commonly used in medications in the early 1900s but now known to be toxic in large quantities. He had reportedly been using sodium bromide for three months, sourced online, based on what he read from the AI chatbot.

The man, who had no prior history of psychiatric or physical health issues, was admitted to the hospital after experiencing hallucinations, paranoia, and severe thirst. During his initial 24 hours in care, he showed signs of confusion and refused water, suspecting it was unsafe. Doctors soon diagnosed him with bromide toxicity, a condition that is now extremely rare but was once more common when bromide was used to treat anxiety, insomnia, and other conditions. Symptoms include neurological disturbances, skin issues like acne, and red skin spots known as cherry angiomas, all of which the man displayed.

'Inspired by his past studies in nutrition, he decided to run a personal experiment to remove chloride from his diet,' the report noted. He told doctors he had seen on ChatGPT that bromide could be used in place of chloride, though the source seemed to reference industrial rather than dietary use. Following three weeks of treatment involving fluids and electrolyte balance, the man was stabilised and discharged from the hospital.

The authors of the case study warned about the growing risk of misinformation from AI: 'It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, cannot critically discuss results, and ultimately fuel the spread of misinformation.' OpenAI, the developer of ChatGPT, acknowledges this in its Terms of Use, stating: 'You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice.' The terms further clarify: 'Our Services are not intended for use in the diagnosis or treatment of any health condition.'

The alarming case adds to the ongoing global conversation about the limitations and responsibilities around AI-generated advice, particularly in matters involving physical and mental health.