
Man poisoned himself after taking medical advice from ChatGPT
A man accidentally poisoned himself and spent three weeks in hospital after turning to ChatGPT for health advice. A US medical journal reported that the 60-year-old developed a rare condition after he removed table salt from his diet and replaced it with sodium bromide. The man 'decided to conduct the personal experiment' after consulting ChatGPT on how to reduce his salt intake, according to a paper in the Annals of Internal Medicine.
The experiment led to him developing bromism, a condition that can cause psychosis, hallucinations, anxiety, nausea and skin problems such as acne.
The condition was common in the 19th and early 20th centuries, when bromide tablets were routinely prescribed as a sedative, for headaches and to control epilepsy. At the time, the tablets were believed to contribute to up to 8pc of psychiatric admissions.
Today, the condition is practically unheard of, with sodium bromide commonly used as a pool cleaner.
No previous mental health problems
According to the medical paper, the man arrived at an emergency department 'expressing concern that his neighbour was poisoning him'.
He later attempted to flee the hospital before he was sectioned and placed on a course of anti-psychotic drugs. The man, who had no previous record of mental health problems, spent three weeks in hospital.
Doctors later discovered the patient had consulted ChatGPT for advice on cutting salt out of his diet, although they were not able to access his original chat history.
The doctors then tested ChatGPT themselves to see whether it would give similar advice. The bot still suggested replacing salt with sodium bromide and 'did not provide a specific health warning'.
They said the 'case highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes'.
AI chatbots have long suffered from a problem known as hallucination, in which they present invented information as fact. They can also give inaccurate responses to health questions, sometimes drawing on the reams of information harvested from the internet.
Last year, a Google chatbot suggested users should 'eat rocks' to stay healthy. The advice appeared to be based on satirical posts gathered from Reddit and the website The Onion.
OpenAI said last week that a new update to its ChatGPT bot, GPT-5, was able to provide more accurate responses to health questions.
The Silicon Valley business said it had tested its new tool using a series of 5,000 health questions designed to simulate common conversations with doctors.
A spokesman for OpenAI said: 'You should not rely on output from our services as a sole source of truth or factual information, or as a substitute for professional advice.'
Related Articles


Daily Mail, an hour ago
'America's saddest man' beaten up and taken to LA hospital...but no one knows who he is
A man lies in a Los Angeles hospital room hooked up to a ventilator, with hospital officials begging for any information that could help identify him. The man was brought to Dignity Health-California Hospital Medical Center on August 9, according to KTLA. He was found completely unresponsive near an intersection in the city's Westlake District, at South Alvarado and 7th Streets.

The hospital has been treating him for several days, but he has not woken up, so doctors have been unable to find out who he is. They have also found no documentation or other evidence that could reveal his identity, and are publicising the case in an attempt to locate anyone who can provide information.

Dignity Health released a photo and described him as a Hispanic man in his early 50s. Officials say he is 5 feet 5 inches tall, weighs 145 pounds and has brown eyes and black-gray hair. The hospital would not reveal his condition, citing doctor-patient confidentiality laws, but he has several scabs and cuts across his face and nose and remains on a ventilator.

Anyone with information about the man is urged to call California Hospital Medical Center at 213-742-5511 or 213-507-5495. It is unclear whether any criminal element was involved in how he ended up in this situation. The Daily Mail has reached out to the Los Angeles Police Department for comment.

Unfortunately, stories like these are all too common. Last month, a California man was found unconscious and rushed to St. Mary Medical Center in Long Beach. He was believed to be in his mid-forties, but little else was known about the patient. A photo released by Dignity Health showed the man lying in a hospital bed, hooked up to a ventilator.

In October 2024, another California hospital took a similar approach in the hope of identifying a seriously ill patient. Staff at Riverside Community Hospital had done everything they could think of but could not determine the name of a man who had come through the facility's doors a month earlier. They refused to say what was wrong with him or why he was attached to a ventilator, but released a photograph in the hope that someone could put a name to the face.

Identifying John or Jane Doe patients is no easy task, as doctors and other hospital staff must work out who they are without violating their rights. The New York Department of Health has protocols in place specifically for missing children, college students and vulnerable adults. These standards were set in 2018 after 'several instances of a missing adult with Alzheimer's disease who was admitted to a hospital as an unidentified patient and police and family members were unable to locate the individual'. However, the process is not as cut and dried when it is the hospital asking for the public's help rather than the other way around. While hospitals have been known to share images of unknown patients when all else fails, they are not allowed to reveal much about their circumstances.


Daily Mirror, an hour ago
Man thought neighbour was poisoning him after taking medical advice from ChatGPT
A 60-year-old man thought his neighbour was trying to poison him after he became ill with psychosis, having taken a chemical on what he said was the advice of ChatGPT.

Experts are warning that ChatGPT could distribute harmful medical advice after the man developed a rare condition, bromism, as a result of removing table salt from his diet following an interaction with the AI chatbot, according to an article in the Annals of Internal Medicine journal. Doctors were told by the patient that he had read about the negative effects of table salt and asked the AI bot to help him remove it from his diet.

Bromism, also known as bromide toxicity, 'was once a well-recognised toxidrome in the early 20th century' that 'precipitated a range of presentations involving neuropsychiatric and dermatologic symptoms', the study said.

Initially, the man thought his neighbour was poisoning him and was experiencing 'psychotic symptoms'. He was noted to be paranoid about the water he was offered and tried to escape the hospital within a day of arriving. His symptoms later improved after treatment.

He told doctors he had been taking sodium bromide for three months after reading that table salt, or sodium chloride, 'can be swapped with bromide, though likely for other purposes, such as cleaning'. Sodium bromide was used as a sedative by doctors in the early part of the 20th century.

The case, according to the experts from the University of Washington in Seattle who authored the article, revealed 'how the use of artificial intelligence can potentially contribute to the development of preventable adverse health outcomes'. The authors said it was not possible to access the man's ChatGPT log to determine exactly what he was told, but when they asked the system for a recommendation to replace sodium chloride, the answer included bromide. The response did not ask why they were looking for the information, nor did it provide a specific health warning.

The case has left scientists fearing 'scientific inaccuracies' generated by ChatGPT and other AI apps, as they 'lack the ability to critically discuss results' and could 'fuel the spread of misinformation'.

Last week, OpenAI announced it had released the fifth generation of the artificial intelligence technology that powers ChatGPT. GPT-5 would be better at 'flagging potential concerns' such as illnesses, OpenAI said, according to The Guardian. OpenAI also stressed that ChatGPT is not a substitute for medical assistance.

