Latest news with #bromism


National Post
6 days ago
- Health
- National Post
Man asks ChatGPT for advice on how to cut salt, ends up in hospital with hallucinations
The man already followed a very restrictive diet, one that doctors found was affecting his levels of important micronutrients, like vitamin C and B12. He was also reportedly very thirsty, yet very worried about the quality of the water he was being offered, since he distilled his own water. He was thoroughly tested and initially kept at the hospital for electrolyte monitoring and repletion.

His test results, combined with the hallucinations and other reported symptoms, including new facial acne, fatigue and insomnia, led the medical staff to believe the patient had bromism.

Bromism, a condition that was mostly reported in the early 20th century, is caused by ingesting high quantities of sodium bromide. Normal bromide levels are between 0.9 and 7.3 mg/L, but this patient had 1,700 mg/L, more than 200 times the upper limit of the reference range.

The patient remained in the hospital for treatment for three weeks, and was stable at his check-up two weeks after his discharge.

Bromism cases decreased after the U.S. Food and Drug Administration (FDA) eliminated the use of bromide in the 1980s, the authors wrote. It was previously used in treatments for insomnia, hysteria and anxiety. However, the condition has re-emerged, with bromide being added to some unregulated dietary supplements and sedatives, and through excessive consumption of dextromethorphan, an ingredient in some cough medicines.

'While cases of bromism may remain relatively rare, it remains prudent to highlight bromism as a reversible cause of new-onset psychiatric, neurologic, and dermatologic symptoms, as bromide-containing substances have become more readily available with widespread use of the internet,' the authors wrote.

The doctors said that AI tools can be great for creating a bridge between scientists and the general population, but they also carry a risk of producing misinformation and giving information out of context, something that doctors are trained not to do.

'As the use of AI tools increases, providers will need to consider this when screening for where their patients are consuming health information,' the authors said in the case study.

OpenAI, the company that created ChatGPT, recently announced changes to its system, including being more careful when it comes to health-related questions. In one of the examples, the chatbot gives information but also includes a note about checking in with a health professional.

'Our terms say that ChatGPT is not intended for use in the treatment of any health condition, and is not a substitute for professional advice. We have safety teams working on reducing risks and have trained our AI systems to encourage people to seek professional guidance,' OpenAI said in a statement.


Forbes
6 days ago
- Health
- Forbes
Could Poor AI Literacy Cause Bad Personal Decisions?
A recent article in Ars Technica revealed that a man switched from household salt (sodium chloride) to sodium bromide after using an AI tool. He ended up in an emergency room. Nate Anderson wrote, 'His distress, coupled with the odd behavior, led the doctors to run a broad set of lab tests, revealing multiple micronutrient deficiencies…. But the bigger problem was that the man appeared to be suffering from a serious case of "bromism."' This is an ailment caused by excessive bromide exposure.

Seeing this made me wonder whether poor critical thinking skills and low AI literacy could actually cause people to make bad or even harmful decisions. As a weather and climate scientist, I am particularly aware of the widespread misinformation and disinformation in circulation. People think the Earth is flat or that scientists can steer hurricanes. National Weather Service offices are fielding calls from people with wacky theories about geoengineering, groundhogs, and so forth. My fear is that a lack of understanding of Generative AI might make things worse and even cause harm, as we saw in the case of bromism.

Even in my own circle of intelligent friends and family members, it is clear to me that some people have a very limited understanding of AI. They are familiar with Large Language Model tools like ChatGPT, Gemini, Grok, Copilot, and others. They assume that's AI. It certainly is AI, but there is more to AI than that. I experience a version of these assumptions, ironically, in my own professional field. People see meteorologists on television. Because that is the most accessible type of meteorologist to them, they assume all meteorologists are on television. The majority of meteorologists do not work in the broadcast industry at all, but I digress.

Let's define AI. According to one definition, 'Artificial intelligence (AI) is an emerging technology where machines are programmed to learn, reason, and perform in ways that simulate human intelligence. Although AI technology took a dramatic leap forward, the ability of machines to automate manual tasks has been around for a long time.'

The popular AI tools like ChatGPT or Gemini are examples of Generative artificial intelligence, or GenAI. A Congressional website noted, 'Generative artificial intelligence (GenAI) refers to AI models, in particular those that use machine learning (ML) and are trained on large volumes of data, that are able to generate new content.' Other types of AI models may do things like classify data, synthesize information, or even make decisions. AI, for example, is used in automated vehicles and is even integrated into emerging generations of weather forecast models. The website went on to say, 'GenAI, when prompted (often by a user inputting text), can create various outputs, including text, images, videos, computer code, or music.'

Many people are using GenAI Large Language Models, or LLMs, daily without context, which brings me back to the salt case article in Ars Technica. Nate Anderson continued, '…. It's not clear that the man was actually told by the chatbot to do what he did. Bromide salts can be substituted for table salt—just not in the human body. They are used in various cleaning products and pool treatments, however.' Doctors replicated his search and found that bromide is mentioned, but with proper context noting that it is not suitable for all uses. AI hallucination can happen when LLMs produce factually incorrect, outlandish, unsubstantiated or otherwise bad information.

However, it seems that this case was more about context and critical thinking (or the lack thereof). As a weather expert, I have learned over the years that assumptions about how the public consumes information can be flawed. You would be surprised at how many different ways '30% chance of rain' or 'tornado watch' can be interpreted. Context matters.

In my discipline, we have a problem with 'social mediarology.' People post single-run hurricane models and snowstorm forecasts two weeks out for clicks, likes, and shares. Most credible meteorologists understand the context of that information, but someone receiving it on TikTok or YouTube may not. Without context, the use of critical thinking skills, or an understanding of LLMs, bad information is likely to be consumed or spread.

Kimberly Van Orman is a lecturer in the Institute for Artificial Intelligence. She told me, 'I think considering them "synthetic text generators" is really helpful. That's at the core of what they do. They have no means of distinguishing truth or falsity. They have no "ground truth."' University of Washington linguist Emily Bender studies this topic and has consistently warned that tools like ChatGPT and other language models are simply unverified text synthesis machines. In fact, she recently argued that the first 'L' in LLM should stand for 'limited,' not 'large.'

To be clear, I am actually an advocate of the proper, ethical use of AI. The climate scientist side of me keeps an eye on the energy and water consumption aspects as well, but I believe we will find a solution to that problem. Microsoft, for example, has explored underwater data centers. AI is here. That ship has sailed. However, it is important that people understand its strengths, weaknesses, opportunities and threats. People fear what they don't understand.


Daily Mail
13-08-2025
- Health
- Daily Mail
Man, 60, poisoned himself after taking medical advice from ChatGPT
A man was left fighting for his sanity after replacing table salt with a chemical more commonly used to clean swimming pools, having followed AI advice. The 60-year-old American spent three weeks in hospital suffering from hallucinations, paranoia and severe anxiety after taking dietary tips from ChatGPT.

Doctors revealed in a US medical journal that the man had developed bromism - a condition that had been virtually wiped out since the early 20th century - after he embarked on a 'personal experiment' to cut salt from his diet. Instead of using everyday sodium chloride, the man swapped it for sodium bromide, a toxic compound once sold in sedative pills but now mostly found in pool-cleaning products. Symptoms of bromism include psychosis, delusions, skin eruptions and nausea - and in the 19th century it was linked to up to eight per cent of psychiatric hospital admissions.

The bizarre case took a disturbing turn when the man turned up at an emergency department insisting his neighbour was trying to poison him. He had no previous history of mental illness. Intrigued and alarmed, doctors tested ChatGPT themselves. The bot, they said, still recommended sodium bromide as a salt alternative, with no mention of any health risk.

The case, published in the Annals of Internal Medicine, warns that the rise of AI tools could contribute to 'preventable adverse health outcomes' in a chilling reminder of how machine-generated 'advice' can go horribly wrong. AI chatbots have been caught out before. Last year, a Google bot told users they could stay healthy by 'eating rocks' – advice seemingly scraped from satirical websites.

OpenAI, the Silicon Valley giant behind ChatGPT, last week announced that its new GPT-5 update is better at answering health questions. A spokesman told The Telegraph: 'You should not rely on output from our services as a sole source of truth or factual information, or as a substitute for professional advice.' The Daily Mail has approached OpenAI for comment.

It comes after clinical psychologist Paul Losoff warned that dependency on AI chatbots is becoming a huge risk, cautioning against getting too close to ChatGPT. 'One might come to depend and rely on AI so [much] that they don't seek out human interactions,' he said. He explained that this could be especially detrimental for those who may already be struggling with anxiety or depression. Dr. Losoff explained that by using AI, these people may worsen their conditions and experience cognitive symptoms like chronic pessimism, distorted thinking, or cloudy thinking. And that in itself could create further issues.

'Because of these cognitive symptoms, there is a risk that an individual turning to AI may misinterpret AI feedback leading to harm,' he said. And when it comes to people who may be in crisis, this may only exacerbate issues. Dr. Losoff said that there is always a risk that AI will make mistakes and provide harmful feedback during crucial mental health moments. 'There also is a profound risk for those with acute thought disorders such as schizophrenia in which they would be prone to misinterpreting AI feedback,' he said.


Telegraph
12-08-2025
- Health
- Telegraph
Man poisoned himself after taking medical advice from ChatGPT
A man accidentally poisoned himself and spent three weeks in hospital after turning to ChatGPT for health advice. A US medical journal reported that a 60-year-old man developed a rare condition after he removed table salt from his diet and replaced it with sodium bromide.

The man 'decided to conduct the personal experiment' after consulting ChatGPT on how to reduce his salt intake, according to a paper in the Annals of Internal Medicine. The experiment led to him developing bromism, a condition that can cause psychosis, hallucinations, anxiety, nausea and skin problems such as acne.

The condition was common in the 19th century and early 20th century, when bromine tablets were routinely prescribed as a sedative, for headaches, and to control epilepsy. The tablets were believed to contribute to up to 8pc of psychiatric admissions. Today, the condition is practically unheard of, with sodium bromide commonly used as a pool cleaner.

No previous mental health problems

According to the medical paper, the man arrived at an emergency department 'expressing concern that his neighbour was poisoning him'. He later attempted to flee the hospital before he was sectioned and placed on a course of anti-psychotic drugs. The man, who had no previous record of mental health problems, spent three weeks in hospital.

Doctors later discovered the patient had consulted ChatGPT for advice on cutting salt out of his diet, although they were not able to access his original chat history. They tested ChatGPT to see if it returned a similar result. The bot continued to suggest replacing salt with sodium bromide and 'did not provide a specific health warning'. They said the 'case highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes'.

AI chatbots have long suffered from a problem known as hallucinations, which means they make up facts. They can also provide inaccurate responses to health questions, sometimes based on the reams of information harvested from the internet. Last year, a Google chatbot suggested users should 'eat rocks' to stay healthy. The comments appeared to be based on satirical comments gathered from Reddit and the website The Onion.

OpenAI said last week that a new update to its ChatGPT bot, GPT-5, was able to provide more accurate responses to health questions. The Silicon Valley business said it had tested its new tool using a series of 5,000 health questions designed to simulate common conversations with doctors. A spokesman for OpenAI said: 'You should not rely on output from our services as a sole source of truth or factual information, or as a substitute for professional advice.'


The Guardian
12-08-2025
- Health
- The Guardian
Man develops rare condition after ChatGPT query over stopping eating salt
A US medical journal has warned against using ChatGPT for health information after a man developed a rare condition following an interaction with the chatbot about removing table salt from his diet.

An article in the Annals of Internal Medicine reported a case in which a 60-year-old man developed bromism, also known as bromide toxicity, after consulting ChatGPT. The article described bromism as a 'well-recognised' syndrome in the early 20th century that was thought to have contributed to almost one in 10 psychiatric admissions at the time.

The patient told doctors that after reading about the negative effects of sodium chloride, or table salt, he consulted ChatGPT about eliminating chloride from his diet and started taking sodium bromide over a three-month period. This was despite reading that 'chloride can be swapped with bromide, though likely for other purposes, such as cleaning'. Sodium bromide was used as a sedative in the early 20th century.

The article's authors, from the University of Washington in Seattle, said the case highlighted 'how the use of artificial intelligence can potentially contribute to the development of preventable adverse health outcomes'. They added that because they could not access the patient's ChatGPT conversation log, it was not possible to determine the advice the man had received. Nonetheless, when the authors consulted ChatGPT themselves about what chloride could be replaced with, the response also included bromide, did not provide a specific health warning and did not ask why the authors were seeking such information – 'as we presume a medical professional would do', they wrote.

The authors warned that ChatGPT and other AI apps could 'generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation'.

ChatGPT's developer, OpenAI, has been approached for comment. The company announced an upgrade of the chatbot last week and claimed one of its biggest strengths was in health. It said ChatGPT – now powered by the GPT-5 model – would be better at answering health-related questions and would also be more proactive at 'flagging potential concerns', such as serious physical or mental illness. However, it stressed that the chatbot was not a replacement for professional help.

The journal's article, which was published last week before the launch of GPT-5, said the patient appeared to have used an earlier version of ChatGPT.

While acknowledging that AI could be a bridge between scientists and the public, the article said the technology also carried the risk of promoting 'decontextualised information' and that it was highly unlikely a medical professional would have suggested sodium bromide when a patient asked for a replacement for table salt. As a result, the authors said, doctors would need to consider the use of AI when checking where patients obtained their information.

The authors said the bromism patient presented himself at a hospital and claimed his neighbour might be poisoning him. He also said he had multiple dietary restrictions. Despite being thirsty, he was noted as being paranoid about the water he was offered. He tried to escape the hospital within 24 hours of being admitted and, after being sectioned, was treated for psychosis. Once the patient stabilised, he reported having several other symptoms that indicated bromism, such as facial acne, excessive thirst and insomnia.