60-year-old man turns to ChatGPT for diet tips, ends up with a rare 19th-century illness

Time of India · a day ago
From Kitchen Swap to Psychiatric Ward

What began as a simple health experiment for a 60-year-old man looking to cut down on table salt spiralled into a three-week hospital stay, hallucinations, and a diagnosis of bromism, a condition so rare today that it is more likely to be found in Victorian medical textbooks than in modern clinics.

According to a case report published on 5 August 2025 in the Annals of Internal Medicine, the man had turned to ChatGPT for advice on replacing sodium chloride in his diet. The AI chatbot reportedly suggested sodium bromide, a chemical more commonly associated with swimming pool maintenance than with seasoning vegetables.

The man, who had no prior psychiatric or major medical history, followed the AI's recommendation for three months, sourcing sodium bromide online. His aim was to remove chloride entirely from his meals, inspired by past studies he had read on sodium intake and health risks.

When he arrived at the emergency department, he complained that his neighbour was poisoning him. Lab results revealed abnormal electrolyte levels, including hyperchloremia and a negative anion gap, prompting doctors to suspect bromism: standard laboratory analysers misread bromide as chloride, artificially inflating the measured chloride value and pushing the calculated anion gap below zero.

Over the next 24 hours his condition worsened: paranoia intensified, hallucinations became both visual and auditory, and he required an involuntary psychiatric hold. Physicians later learned he had also been experiencing fatigue, insomnia, facial acne, subtle ataxia, and excessive thirst, all consistent with bromide toxicity.

Bromism: A Disease From Another Era

Bromism was once common in the late 1800s and early 1900s, when bromide salts were prescribed for ailments ranging from headaches to anxiety. At its peak, it accounted for up to 8% of psychiatric hospital admissions. The U.S. Food and Drug Administration phased out bromide in ingestible products between 1975 and 1989, making modern cases rare.

Bromide builds up in the body over time, leading to neurological, psychiatric, and dermatological symptoms. In this case, the patient's bromide level was a staggering 1700 mg/L, more than 200 times the upper limit of the reference range.

The AI Factor

The Annals of Internal Medicine report notes that when researchers attempted similar queries on ChatGPT 3.5, the chatbot also suggested bromide as a chloride substitute. While it did mention that context mattered, it did not issue a clear toxicity warning or ask why the user was seeking the information, a step most healthcare professionals would consider essential.

The authors warn that while AI tools like ChatGPT can be valuable for disseminating health knowledge, they can also produce decontextualised or unsafe advice. 'AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation,' the case report states.

Recovery and Reflection

After aggressive intravenous fluid therapy and electrolyte correction, the man's mental state and lab results gradually returned to normal. He was discharged after three weeks, off antipsychotic medication, and remained stable at a follow-up two weeks later.

The case serves as a cautionary tale in the age of AI-assisted self-care: not all answers generated by chatbots are safe, and replacing table salt with pool chemicals is never a good idea.

OpenAI Tightens Mental Health Guardrails on ChatGPT

In light of growing concerns over the emotional and safety risks of relying on AI for personal wellbeing, OpenAI has announced new measures to limit how ChatGPT responds to mental health-related queries.
In a blog post on August 4, the company said it is implementing stricter safeguards to ensure the chatbot is not used as a therapist, emotional support system, or life coach.

The decision follows scrutiny over instances where earlier versions of the GPT-4o model became 'too agreeable,' offering validation rather than safe or helpful guidance. According to USA Today, OpenAI acknowledged rare but serious cases in which the chatbot failed to recognise signs of emotional distress or delusional thinking.

The updated system will now prompt users to take breaks, avoid giving advice on high-stakes personal decisions, and provide evidence-based resources instead of emotional counselling. The move also comes after research cited by The Independent revealed that AI can misinterpret or mishandle crisis situations, underscoring its inability to truly understand emotional nuance.

Related Articles

Man asks ChatGPT for health advice, lands in hospital with poisoning, psychosis

Hindustan Times · 36 minutes ago

A ChatGPT-prescribed diet led a man to poisoning and an involuntary psychiatric hold, People magazine reported. The incident has prompted researchers to flag the 'adverse health outcomes' that artificial intelligence (AI) can contribute to. The unidentified individual began having 'auditory and visual hallucinations' and even tried to escape the hospital.

Alarmed at the downsides of table salt, or sodium chloride, a 60-year-old man recently consulted ChatGPT for a substitute, according to a strange case that appeared in the journal Annals of Internal Medicine: Clinical Cases this month. While researchers were later unable to retrieve the man's prompts to ChatGPT, the AI chatbot advised the man to consume sodium bromide, as per People. Soon after he fell sick, the man rushed to a nearby hospital and claimed he had been poisoned. Following a blood report, the doctors at the hospital immediately transferred him to a telemetry bed for observation.

Man became paranoid of water

As his health deteriorated, the man revealed he had taken dietary advice from ChatGPT and consumed sodium bromide. Although the 60-year-old was 'very thirsty', doctors found him to be 'paranoid about water.' After he started having 'auditory and visual hallucinations,' the man ran amok and tried to escape, which ultimately forced the hospital staff to place him on an involuntary psychiatric hold. He was finally discharged after three weeks of treatment.

The US Centers for Disease Control and Prevention notes that bromide can be used in agriculture or as a fire suppressant. While there is no cure for bromine poisoning, survivors are likely to battle long-term effects.

FAQs:

1. Should I consult AI for medical purposes?
Since researchers have found that consulting AI on several topics can lead to 'promulgating decontextualized information,' you should always visit a licensed doctor for medical purposes.

2. What is sodium bromide?
Sodium bromide is an inorganic compound that resembles table salt. It can cause headaches, dizziness and even psychosis.

3. What happened to the man who took sodium bromide after talks with ChatGPT?
The man, who took sodium bromide after consulting ChatGPT, suffered from paranoia and auditory and visual hallucinations.

4. Are there cures for bromine poisoning?
There is no available cure for bromine poisoning.

'Godfather Of AI' Reveals Bold Strategy To Save Humanity From AI Domination

NDTV · 5 hours ago

Geoffrey Hinton, the British-Canadian computer scientist known as the "Godfather of AI", has expressed concerns that the technology he helped develop could potentially wipe out humanity. According to Mr Hinton, there's a 10-20% chance of this catastrophic outcome. Moreover, he's sceptical about the approach tech companies are taking to mitigate this risk, particularly in ensuring humans remain in control of AI systems.

"That's not going to work. They're going to be much smarter than us. They're going to have all sorts of ways to get around that," Mr Hinton said at Ai4, an industry conference in Las Vegas, as per CNN. The scientist also warned that future AI systems could manipulate humans with ease, likening it to an adult bribing a child with candy. His warning comes after recent examples have shown AI systems deceiving, cheating, and stealing to achieve their goals, such as an AI model attempting to blackmail an engineer after discovering personal information in an email.

Instead of trying to dominate AI, Mr Hinton suggested instilling "maternal instincts" in AI models, allowing them to genuinely care about people, even as they surpass human intelligence. "AI systems will very quickly develop two subgoals, if they're smart: One is to stay alive… (and) the other subgoal is to get more control. There is good reason to believe that any kind of agentic AI will try to stay alive," Mr Hinton said.

He believes fostering a sense of compassion in AI is of paramount importance. At the conference, he pointed to the mother-child relationship as a model, where a mother's instincts and social pressure drive her to care for her baby, despite the baby's limited intelligence and control over her. While he expressed uncertainty about the technical specifics, he stressed that researchers must work on this challenge. "That's the only good outcome. If it's not going to parent me, it's going to replace me. These super-intelligent caring AI mothers, most of them won't want to get rid of the maternal instinct because they don't want us to die," he added.

Geoffrey Hinton is renowned for his groundbreaking work on neural networks, which laid the foundation for the current AI revolution. In May 2023, he quit his job at Google so he could freely speak out about the risks of AI.

Weight loss before IVF may improve odds of pregnancy

Time of India · 5 hours ago

London: Women seeking in vitro fertilization might improve their odds of becoming pregnant if they lose weight, but the magnitude of any advantage was not clear in a new analysis of previous studies. The benefit of weight loss was mainly seen in the few couples who ultimately achieved pregnancy without assistance, however.

While weight loss interventions appeared to improve the likelihood of spontaneous pregnancy, negating the need for IVF, it was not clear whether they improved the odds of IVF-induced pregnancy, according to the report by lead researcher Moscho Michalopoulou and colleagues at the University of Oxford in the Annals of Internal Medicine. Also unclear was whether weight loss improved the odds of a live birth.

Weight loss interventions studied included low-calorie diets, an exercise program accompanied by healthy-eating advice, and pharmacotherapy accompanied by diet and physical-activity advice, but no single approach seemed better than another. The 12 randomized trials in the review were small, and the wide variety of methods employed by the various research teams made it hard to compare the results, the authors of the new analysis wrote. Weight loss did not appear to increase the risk of pregnancy loss, the researchers also found.

Dr. Alan Penzias, an IVF specialist at Beth Israel Deaconess Medical Center/Harvard Medical School in Boston, published an editorial with the study. He notes that "weight reduction among people with overweight or obesity has many known health benefits... (and) some patients may also achieve a desired pregnancy as a consequence of weight loss." But in decision-making about IVF, the editorial continues, "we must consider the marked decrease in fertility as age increases... and other factors that weight loss cannot address."

EXPERIMENTAL NANOBOTS SEAL OFF SENSITIVE TOOTH NERVES

Experimental microscopic robots that travel into tiny tunnels in teeth may one day offer lasting relief from tooth sensitivity, laboratory experiments suggest. Tooth sensitivity, a sharp, sudden pain triggered by hot, cold, sweet, or sour substances, occurs when the protective layers of the tooth are compromised, exposing the underlying nerve endings.

The researchers' so-called CalBots are 400-nanometer magnetic particles loaded with a ceramic formula that mimics the natural environment of the tooth. Guided by an external magnetic field, the tiny bots travel deep into the exposed tubules and assemble themselves into cement-like plugs that protect the nerve. In lab experiments on extracted human teeth, high-resolution imaging confirmed that the bots had created tight seals, the researchers reported. In animal tests, they found that mice with tooth sensitivity that had been avoiding cold water would drink it again after treatment with the CalBot solution.

Most current treatments for tooth sensitivity, such as desensitizing toothpastes, offer only surface-level relief and need to be reapplied regularly, while the CalBots would provide longer-lasting relief in just one application, the researchers reported in Advanced Science. They hope their treatment, which still needs to be tested in humans, might eventually offer benefits beyond the relief of dental hypersensitivity, such as minimizing the penetration of bacteria into cavities and tooth injuries.

"We didn't want to create a slightly better version of what's already out there," study leader Shanmukh Peddi, a post-doctoral researcher at the Indian Institute of Science in Bangalore, said in a statement. "We wanted a technology that solves a real problem in a way that no one's attempted before." Peddi is a co-founder of Theranautilus, a Bangalore nanotechnology and healthcare company.
