
ChatGPT dietary advice sends man to hospital with dangerous chemical poisoning
The 60-year-old man, who was looking to eliminate table salt from his diet for health reasons, used the large language model (LLM) to get suggestions for what to replace it with, according to a case study published this week in the Annals of Internal Medicine.
When ChatGPT suggested swapping sodium chloride (table salt) for sodium bromide, the man made the replacement for a three-month period, although, the journal article noted, the suggestion likely referred to bromide used for other purposes, such as cleaning.
Sodium bromide is a chemical compound that resembles salt, but is toxic for human consumption.
It was once used as an anticonvulsant and sedative, but today is primarily used for cleaning, manufacturing and agricultural purposes, according to the National Institutes of Health.
When the man arrived at the hospital, he reported experiencing fatigue, insomnia, poor coordination, facial acne, cherry angiomas (red bumps on the skin) and excessive thirst — all symptoms of bromism, a condition caused by long-term exposure to sodium bromide.
The man also showed signs of paranoia, the case study noted, as he claimed that his neighbor was trying to poison him.
He was also found to have auditory and visual hallucinations, and was ultimately placed on a psychiatric hold after attempting to escape.
The man was treated with intravenous fluids and electrolytes, and was also put on antipsychotic medication. He was released from the hospital after three weeks of monitoring.
"This case also highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes," the researchers wrote in the case study.
"These are language prediction tools — they lack common sense and will give rise to terrible results if the human user does not apply their own common sense."
"Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs."
It is "highly unlikely" that a human doctor would have mentioned sodium bromide when speaking with a patient seeking a substitute for sodium chloride, they noted.
"It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results and ultimately fuel the spread of misinformation," the researchers concluded.
Dr. Jacob Glanville, CEO of Centivax, a San Francisco biotechnology company, emphasized that people should not use ChatGPT as a substitute for a doctor.
"These are language prediction tools — they lack common sense and will give rise to terrible results if the human user does not apply their own common sense when deciding what to ask these systems and whether to heed their recommendations," Glanville, who was not involved in the case study, told Fox News Digital.
"This is a classic example of the problem: The system essentially went, 'You want a salt alternative? Sodium bromide is often listed as a replacement for sodium chloride in chemistry reactions, so therefore it's the highest-scoring replacement here.'"
Dr. Harvey Castro, a board-certified emergency medicine physician and national speaker on artificial intelligence based in Dallas, agreed that AI is a tool, not a doctor.
"Large language models generate text by predicting the most statistically likely sequence of words, not by fact-checking," he told Fox News Digital.
"ChatGPT's bromide blunder shows why context is king in health advice," Castro went on. "AI is not a replacement for professional medical judgment, aligning with OpenAI's disclaimers."
Castro also cautioned that there is a "regulation gap" when it comes to using LLMs to get medical information.
"Our terms say that ChatGPT is not intended for use in the treatment of any health condition, and is not a substitute for professional advice."
"FDA bans on bromide don't extend to AI advice — global health AI oversight remains undefined," he said.
There is also the risk that LLMs can carry data bias and lack verification, which may lead to hallucinated information.
"If training data includes outdated, rare or chemically focused references, the model may surface them in inappropriate contexts, such as bromide as a salt substitute," Castro noted.
"Also, current LLMs don't have built-in cross-checking against up-to-date medical databases unless explicitly integrated."
To prevent cases like this one, Castro called for more safeguards for LLMs, such as integrated medical knowledge bases, automated risk flags, contextual prompting and a combination of human and AI oversight.
The expert added, "With targeted safeguards, LLMs can evolve from risky generalists into safer, specialized tools; however, without regulation and oversight, rare cases like this will likely recur."
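As one hedged illustration of the "automated risk flags" and knowledge-base cross-checking Castro describes, a minimal sketch might screen a model's dietary suggestion against a curated list of substances unsafe to ingest before it ever reaches the user. The blocklist, function name and example reply below are hypothetical and stand in for the maintained toxicology databases a real system would query.

```python
# Minimal sketch of an automated risk flag (hypothetical names and data).
# A small curated blocklist stands in for an integrated medical knowledge
# base; a production system would query maintained toxicology sources.
UNSAFE_TO_INGEST = {
    "sodium bromide",
    "potassium bromide",
    "ethylene glycol",
}

def flag_dietary_risk(llm_reply: str) -> list[str]:
    """Return any blocklisted substances mentioned in an LLM's dietary reply."""
    text = llm_reply.lower()
    return [substance for substance in UNSAFE_TO_INGEST if substance in text]

reply = "You could try sodium bromide as a replacement for table salt."
hits = flag_dietary_risk(reply)
if hits:
    print(f"Warning: reply mentions substances unsafe to ingest: {hits}")
```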
OpenAI, the San Francisco-based maker of ChatGPT, provided the following statement to Fox News Digital:
"Our terms say that ChatGPT is not intended for use in the treatment of any health condition, and is not a substitute for professional advice. We have safety teams working on reducing risks and have trained our AI systems to encourage people to seek professional guidance."
Related Articles


Forbes
Psychologists And Mental Health Experts Spurred To Use Custom Instructions And Make AI Into A Therapist Adjunct
In today's column, I first examine the new ChatGPT Study Mode that has gotten big-time headline news and then delve into whether the crafting of this generative AI capability could be similarly undertaken in the mental health realm.

The idea is this. The ChatGPT Study Mode was put together by crafting custom instructions for ChatGPT. It isn't an overhaul or feature creation. It seems to be nothing new per se, other than specifying a set of detailed instructions, as dreamed up by various educational specialists, telling the AI what it is to undertake in an educational context. That's considered 'new' in the sense that it is an inspiring use of custom instructions and a commendable accomplishment that will be of benefit to students and eager learners. Perhaps by gathering psychologists and mental health specialists, an AI-based Therapy Mode could similarly be ingeniously developed. Mindful readers asked me about this. Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here. If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.

ChatGPT Study Mode Introduced

A recent announcement by OpenAI went relatively far and wide. They cheerfully introduced ChatGPT Study Mode, as articulated in their blog posting 'Introducing Study Mode' on July 29, 2025, and identified these salient points (excerpts):

As far as can be discerned from the outside, this capability didn't involve revising the underpinnings of the AI, nor did it seem to require bolting on additional functionality. It seems that the mainstay of the work was done using custom instructions (note that if they did make any special core upgrades, they seem to have remained quiet on the matter, since it isn't touted in their announcements).

Custom Instructions Are Powerful

Assuming that they only or mainly used custom instructions to bring forth this useful result, it gives great hope and spurs avid attention to the amazing power of custom instructions. You can do a lot with custom instructions. But I would wager that few know about custom instructions and even fewer have done anything substantive with them. I've previously lauded the emergence of custom instructions as a helpful piece of functionality and resolutely encouraged people to use them suitably, see the link here.
Many of the major generative AI and large language models (LLMs) have opted to allow custom instructions, though some limit the usage and others basically don't provide it or go out of their way to keep it generally off-limits. Allow me a brief moment to bring everyone up to speed on the topic.

Suppose you want to tell the AI to act a certain way, and you want it to do this across all subsequent conversations. This usually applies only to your instance; I'll explain in a moment how to do so across instances and allow other people to tap into your use of custom instructions.

I might want my AI to always give me its responses in a poetic manner. You see, perhaps I relish poems. I go to the specified location of my AI that allows the entering of a custom instruction and tell it to always respond poetically. After saving this, I will then find that every conversation is answered with poetic replies by the AI. In this case, my custom instruction was short and sweet. I merely told the AI to compose answers poetically.

If I had something more complex in mind, I could devise a quite lengthy custom instruction. The custom instruction could go on and on, telling the AI to write poetically when it is daytime, but not at nighttime, and to make sure the poems are lighthearted and enjoyable. I might further indicate that I want poems that rhyme and must somehow encompass references to cats and dogs. And so on. I'm being a bit facetious and just giving you a sense that a custom instruction can be detailed and provide a boatload of instructions.
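For readers who want to experiment outside the ChatGPT settings screen, the closest programmatic analogue to a standing custom instruction is a system message that accompanies every request. The sketch below is a minimal illustration using the OpenAI Python SDK; the instruction text and model name are illustrative choices, and this is not OpenAI's own Study Mode prompt.

```python
# Rough analogue of a custom instruction: a standing system message sent
# with every request (OpenAI Python SDK; requires OPENAI_API_KEY).
# The instruction text and model name are illustrative, not official.
from openai import OpenAI

client = OpenAI()

CUSTOM_INSTRUCTION = (
    "Always respond in light, rhyming verse. "
    "If the user asks for a plain answer, set the verse aside for that reply."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTION},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What is a custom instruction?"))
```

In the ChatGPT product itself, the same standing guidance is entered once in the custom instructions settings rather than sent programmatically with each request.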
Custom Instructions Mixed Bag

The beauty of custom instructions is that they serve as an overarching form of guidance to the generative AI. They are considered to have a global scope for your instance. All conversations that you have will be subject to whatever the custom instruction says should take place.

With such power come some downsides. Imagine that I am using the AI and have a serious question that should not be framed in a poem. Lo and behold, I ask the solemn question and get a poetic answer. The AI is following what the custom instruction indicated. Period, end of story.

The good news is that you can tell the AI that you want it to disregard the custom instructions. When I enter a question, I could mention in the prompt that the AI is not to abide by the custom instructions. Voila, the AI will provide a straightforward answer. Afterward, the custom instructions will continue to apply.

The malleability is usually extensive. For example, I might tell the AI that for the next three prompts, it should not abide by the custom instructions. Or I could tell the AI that the custom instructions are never to be obeyed unless I say in a prompt that they should be obeyed. I think you can see that this is a generally malleable aspect.

Goofed Up Custom Instructions

The most disconcerting downside of custom instructions is that you might inadvertently say something in the instructions that is to your detriment. Maybe you won't even realize what you've done.

Consider my poetry-demanding custom instruction. I could include a line that insists that no matter what any of my prompts say, the AI should never allow me to override the custom instruction. Perhaps I thought that was a smart move. The problem is that later, I might forget that I had included that line. When I try to turn off the custom instruction via a prompt, the AI might refuse. Usually, the AI will inform you of such a conflict, but there's no guarantee that it will.

Worse still is a potential misinterpretation of something in your custom instructions. I might have said that the AI should never mention ugly animals in any of its responses. What in the world is an ugly animal? The sky is the limit. Unfortunately, the AI will potentially opt not to mention all kinds of animals that were not what I had in mind. Would I realize what is happening? Possibly not. The AI responses would perchance mention some animals and not mention others. It might not be obvious which animals aren't being described. My custom instruction is haunting me because the AI interprets what I said, though the interpretation differs from what I meant.

AI Mental Health Advice

Shifting gears, let's aim to use custom instructions for the betterment of humanity, rather than simply for producing poetic responses.

The ChatGPT Study Mode pushes the AI to perform Socratic dialogues with the user and to give guidance rather than spitting out answers. The custom instructions get this to occur. Likewise, the AI attempts to assess the proficiency of the user and adjusts to their skill level. Personalized feedback is given. The AI tracks your progress. It's nifty. All due to custom instructions.

What other contexts might custom instructions tackle? I'll focus on the context of mental health. Here's the deal. We get together a bunch of psychologists, psychiatrists, therapists, mental health professionals, and the like. They work fervently on composing a set of custom instructions telling the AI how to perform therapy. This includes diagnosing mental health conditions. It includes generating personal recommendations on aiding your mental health. We could take the generic generative AI that saunters around in the mental health context and turn it into something more bona fide and admirable. Boom, drop the mic.

The World Is Never Easy

If you are excited about the prospects of these kinds of focused custom instructions, such as for therapy, I am going to ask you to sit down and pour yourself a glass of fine wine. The reason I say this is that there have indeed been such efforts in the mental health realm. And, by and large, the results are not as standout as you might have hoped.

First, the topic of mental health is immense and involves risks to people when inappropriate therapy is employed. Trying to devise a set of custom instructions that can fully and sufficiently provide bona fide therapy is not only unlikely to succeed but also inevitably misleading. I say this because some have tried this route and made outlandish claims of what the AI can do as a result of the loaded custom instructions. Watch out for unfulfilled claims. See my extensive coverage at the link here.

Second, any large set of custom instructions on performing therapy is bound to be incomplete, contain misinterpretable indications, and otherwise be subject to the downsides that I've noted above. Using custom instructions as an all-in-one solution in this arena is like trying to use a hammer on everything, even though you ought to be using a screwdriver on screws, and so on.

Third, some argue that using custom instructions for therapy is better than not having any custom instructions at all. The notion is that if you are using a generic generative AI that is working without mental health custom instructions, you are surely better off using one that at least has custom instructions. The answer there is that it depends on the nature of the custom instructions.
There is a solid chance that the custom instructions might worsen what the AI is going to say. You can just as easily undercut the AI as boost it. Don't fall into the trap of assuming that custom instructions necessarily make things better.

Accessing Custom GPTs

I alluded earlier to the fact that there is a means of allowing other users to employ your set of custom instructions. Many of the popular LLMs allow you to generate an AI applet of sorts, containing tailored custom instructions that can be used by others. Sometimes the AI maker establishes a library in which these applets reside and are publicly available. OpenAI provides this via the use of GPTs, which are akin to ChatGPT applets -- you can learn about how to use those in my detailed discussion at the link here and the link here.

Unfortunately, as with all new toys, some have undermined these types of AI applets. There are AI applets that contain custom instructions written by licensed therapists who genuinely did their best to craft therapy-related custom instructions. That seems encouraging. But I'm hoping you now realize that even the best of intentions might not come out suitably. Good intentions don't guarantee suitable results. Those custom instructions could have trouble brewing within them.

There are also AI applets that brashly claim to be for mental health, yet they are utterly shallow and devised by someone who has zero expertise in mental health. Don't let flashy claims lower your guard. The more egregious ones are AI applets that are marketed as though they are about mental health, when the reality is that they are a scam. The custom instructions have nothing to do with therapy. Instead, they attempt to take over your AI, grab your personal info, and generally be a pest and make life miserable for you. Wolves in sheep's clothing.

The Full Meal Deal

Where do we go from here? Using custom instructions to bring forth an AI-based Therapy Mode in a generic generative AI is not generally a good move. Even if you assemble a worthy collection of the best psychologists and mental health experts, you are trying to put fifty pounds into a five-pound bag. It just isn't a proper fit.

The better path is being pursued. I am a big advocate of, and am doing research on, generative AI and LLMs that are built from the ground up for mental health advisement; see my framework layout at the link here. The approach consists of starting from the beginning when devising an LLM to make it into a suitable therapy-oriented mechanism. This is in stark contrast to trying to take an already completed generic generative AI and reshape it into a mental health context. I believe it is wiser to take a fresh uplift instead.

Bottom Line Answered

For readers who contacted me and asked whether the ChatGPT Study Mode foretells that the same impressive results of education-oriented custom instructions can be had in other domains, yes, for sure, there are other domains to which this can readily apply. Is mental health one of those suitable domains? I vote no. Mental health advisement deserves more.

A final thought for now. Voltaire astutely observed: 'No problem can withstand the assault of sustained thinking.' We need to put on our thinking caps and aim for the right solution rather than quick-fix options that might seem viable but contain unsavory gotchas and injurious hiccups. Sustained thinking is worth its weight in gold.
Yahoo
Why do I feel more awake after 7.5 hours of sleep compared to 9?
The recommended amount of sleep for healthy adults is 7-9 hours, according to the National Institutes of Health. So why, then, is it possible to feel better, more alert and more refreshed after 7.5 hours of sleep than after a lengthier sleep of 9 hours? Surely more is more when it comes to sleep? Well, not always. In fact, there are a number of reasons you might feel worse after 9 hours compared with 7.5.

Here, a sleep expert shares those reasons, as well as revealing the most common cause of waking up feeling groggy. And although one of the key ways to ensure you wake up feeling great is to have one of the best mattresses for your sleep needs, we'll also share a few more tips to help you wake up feeling refreshed every day of the week.

How much sleep do we need?

"Most adults need 7–9 hours a night. That number is largely set by biology – you can't train your body to genuinely need less without it taking a toll on mood, focus, and health," says UKCP psychotherapist and sleep specialist Heather Darwall-Smith, author of How To Be Awake So You Can Sleep Through The Night.

Sleep runs in cycles, she explains, with each lasting about 90 minutes, although they can extend to 100 minutes. During each sleep cycle we move "through light sleep, deep (slow-wave) sleep, and REM (dreaming) sleep. Deep sleep dominates the first half of the night, REM the second. Most people go through 4–6 cycles a night. You can't change the structure of sleep much, but you can improve its quality and timing," says Darwall-Smith.

Given all of this, what's happening on those occasions when we feel better after a shorter sleep of 7.5 hours than after a longer sleep of 9 hours?

Why do we feel more awake after 7.5 hours vs 9 hours of sleep?

There are a variety of reasons you might feel alert after 7.5 hours of sleep and groggy after 9 hours. Here, Darwall-Smith talks us through them.

1. Sleep inertia

"If you wake during deep sleep, you're likely to feel groggy, heavy-headed, and slow – this is called sleep inertia," says Darwall-Smith. Other symptoms of sleep inertia include disorientation and decreased cognitive ability.

"At 7.5 hours, you might be waking at the end of a cycle, in lighter sleep, which makes getting up feel easier. At 9 hours, you might land in the middle of deep sleep, triggering inertia," she explains. Although we always experience sleep inertia to some extent upon waking, waking during deep sleep makes it more pronounced.

2. Circadian timing

If waking after 7.5 hours of sleep feels good, Darwall-Smith explains, this may be connected to your natural body clock. "Your internal body clock influences alertness. Waking in sync with your circadian rhythm (when your core body temperature is starting to rise) feels more natural," she says.

Our circadian rhythm (also known as our internal body clock) regulates a whole host of functions and processes, such as hormone release (like melatonin for sleep and cortisol for alertness), body temperature and our sleep-wake cycles. However, "if 9 hours takes you past that point, you might wake at a 'low' in your rhythm and feel sluggish," Darwall-Smith says.

3. Sleep fragmentation

Sleeping for 9 hours might seem like a great way to catch up on some rest, but if your body doesn't require that much sleep, it can affect your sleep cycles and therefore how you feel when you wake up.
"If you stay in bed longer than your body needs, your last cycle(s) of sleep may be lighter and more broken," Darwall-Smith explains, adding that, "this can make you feel less refreshed, even if you technically slept longer." Is one reason more common than the others? "Sleep inertia is probably the most common culprit," Darwall-Smith says of why we might feel better after 7.5 hours sleep and sluggish after 9, providing two great visual analogies for the differences between waking up during deeper vs. lighter sleep. "Waking from deep sleep is like trying to start a car in freezing weather – you need time to warm up. Waking from light sleep is more like rolling downhill – much easier to get going," she explains. So which is better – 7.5 or 9 hours? I asked Darwall-Smith if all of this means that 7.5 hours sleep is better than 9 hours. "Not neccessarily," she says. "It's not about 'shorter vs longer' – it's about getting the right amount for your body and waking at the right point in a cycle. For some people, 9 hours genuinely works better; for others, 7.5 leaves them sharper. What matters is how you feel during the day." It's also worth keeping track of the recommended amount of sleep for your age, since the recommended 7-9 hours applies to healthy adults under 60, while children, teenagers and adults over 60 have, in general, different requirements. How can you make sure you wake up feeling refreshed? Track your patterns Sleep tracking apps can help you get a better sense of how you're sleeping through the night, giving you insights into how long you spend in each sleep stage and helping you to determine a sleep routine that gives you the amount of rest you need to feel your best in the morning. "Use a sleep diary or wearable for 1–2 weeks to see when you naturally wake feeling good. Look for patterns in bedtime, wake time, and total hours," advises Darwall-Smith. Speaking of the morning, Sleep Cycle , one of the best sleep tracking apps, even features a smart alarm clock that will do its best to wake you during lighter sleep within a 30 minute window, so you're not falling victim to the dreaded sleep inertia. Set a consistent wake time Consistency helps your circadian rhythm to function optimally, and one way to avoid issues, like sleep inertia from waking in the middle of a deep sleep stage, is to apply regularity to your wake up time. "Waking up at the same time every day keeps your circadian rhythm steady, making it more likely you'll wake from lighter sleep," explains Darwall-Smith. Let light in quickly in the morning Light plays a significant role in our sleep-wake cycles, with darkness promoting melatonin production, while exposure to bright light in the morning supresses it and increases cortisol levels. "Morning light signals your brain to stop melatonin production and raise alertness," agrees Darwall-Smith. To help you feel at your best and brightest in the morning, she suggests you, "open curtains, go outside, or use a light therapy lamp within 30 minutes of waking." One recent study also showed that getting morning sunlight can lead to better sleep efficiency and fewer nighttime awakenings.


CNN
NIH director lays out agency's research and funding priorities in new strategy statement
The director of the US National Institutes of Health outlined on Friday a 'unified strategy' to align the agency's priorities and funding, a move he said was meant to offer clarification following sweeping changes at the agency, including massive budget cuts, grant cancellations and plans for reorganization.

In Friday's statement, Dr. Jay Bhattacharya emphasized the need for transparency with the taxpaying American public and an intent to 'honor their trust.' He identified key priorities for the NIH, including chronic disease and nutrition – as per an executive order 'on Gold-Standard Science and the Make America Healthy Again Commission Report' – as well as artificial intelligence, alternative testing models and real-world data platforms. He also noted that the agency is dedicated to supporting research that pursues 'innovative, and sometimes controversial, questions.'

NIH funding decisions will reflect these priorities and other 'core principles,' the statement said. 'As stewards of taxpayer funds, NIH must deliver results that matter to the public,' Bhattacharya wrote. 'Through this strategy, we will better leverage the synergistic missions of each NIH Institute and Center to fund the most meritorious science, address urgent health needs, and sustain a robust biomedical research workforce.'

More details were shared on certain agency priorities with an intent to 'clarify specific issues that currently require additional guidance,' the statement said, including autism, nutrition, HIV/AIDS, research on racial disparities, transgender care and more.

In April, a policy note from the NIH said the agency can pull medical research funding from universities with diversity and inclusion programs. Friday's statement noted that the NIH was 'shifting to solution-oriented approaches in health disparities research.' 'In contrast to research that considers race or ethnicity when scientifically justified […] research based on ideologies that promote differential treatment of people based on race or ethnicity, rely on poorly defined concepts or on unfalsifiable theories, does not follow the principles of gold-standard science,' the statement read.

The NIH also intends to prioritize research focused on what it called 'more promising avenues of research' for the health of transgender youth than studies involving treatments such as puberty suppression, hormone therapy or surgery. 'Research that aims to identify and treat the harms these therapies and procedures have potentially caused to minors diagnosed with gender dysphoria, gender identity disorder, or gender incongruence, and how to best address the needs of these individuals so that they may live long, healthy lives is more promising,' the statement said.

Multiple priorities emphasize a preference for domestic research, including a new system to manage projects with funding for foreign research institutions and a blueprint for domestic training programs. NIH will also prioritize research that can be replicated or reproduced. 'We are exploring various mechanisms to support scientists focused on replication work, to publish negative findings, and to elevate replication research,' the statement read.