Man took diet advice from ChatGPT, ended up hospitalized with hallucinations
A case study published on Aug. 5 in the Annals of Internal Medicine, an academic journal, describes a 60-year-old man who decided to eliminate salt, or sodium chloride, from his diet completely. When he asked ChatGPT for an alternative, the AI chatbot suggested sodium bromide, a compound historically used in pharmaceuticals and manufacturing.
While the study's authors noted that they were unable to review the original AI chat logs and that the bot likely suggested the substitution for another purpose, such as cleaning, the man purchased sodium bromide and used it in place of table salt for three months.
As a result, he ended up in the ER with paranoid delusions, despite having no history of mental health problems. Convinced that his neighbor was poisoning him, the man was reluctant to accept even water from the hospital, though he reported extreme thirst. His paranoia deepened and he developed auditory and visual hallucinations, eventually landing him on an involuntary psychiatric hold after he tried to escape during treatment.
What was happening to the man?
Doctors determined that the man was suffering from bromide toxicity, or bromism, which can cause neurological and psychiatric symptoms. The man also experienced other hallmarks of the condition, including acne and cherry angiomas (bumps on the skin), fatigue, insomnia, subtle ataxia (clumsiness) and polydipsia (excessive thirst).
Other symptoms of bromism can include nausea and vomiting, diarrhea, tremors or seizures, drowsiness, headache, weakness, weight loss, kidney damage, respiratory failure and coma, according to iCliniq.
Bromism was once far more common because bromide salts were used in everyday products. In the early 20th century, they appeared in over-the-counter medications, often resulting in neuropsychiatric and dermatological symptoms, according to the study's authors. Such poisonings declined sharply after the Food and Drug Administration phased out bromides in pharmaceuticals between the mid-1970s and the late 1980s.
The man was treated at the hospital for three weeks, over which time his symptoms progressively improved.
USA TODAY reached out to OpenAI, the maker of ChatGPT, for comment on Wednesday, Aug. 13, but has not received a response.
The company provided Fox News Digital with a statement, saying, "Our terms say that ChatGPT is not intended for use in the treatment of any health condition, and is not a substitute for professional advice. We have safety teams working on reducing risks and have trained our AI systems to encourage people to seek professional guidance."
AI can 'fuel the spread of misinformation,' doctors say
Doctors involved in the case study said they suspected that the patient had used ChatGPT version 3.5 or 4.0, the former of which they tested in an attempt to replicate the answers the man received. While the study's authors noted that they couldn't know exactly what the man was told without the original chat log, they did receive a suggestion for bromide as a replacement for chloride in their tests.
"Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do," said study authors Dr. Audrey Eichenberger, Dr. Stephen Thielke and Dr. Adam Van Buskirk.
AI carries the risk of providing information like this without context, according to the doctors. For example, it is unlikely that a medical expert would have mentioned sodium bromide at all if a patient asked for a salt substitute.
"Thus, it is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation," according to the study.
This article originally appeared on USA TODAY: Man hospitalized after taking ChatGPT diet advice, study says
Related Articles


Forbes
Psychologists And Mental Health Experts Spurred To Use Custom Instructions And Make AI Into A Therapist Adjunct
In today's column, I first examine the new ChatGPT Study Mode that has gotten big-time headline news and then delve into whether the crafting of this generative AI capability could be similarly undertaken in the mental health realm.

The idea is this. The ChatGPT Study Mode was put together by crafting custom instructions for ChatGPT. It isn't an overhaul or feature creation. It seems to be nothing new per se, other than specifying a set of detailed instructions, as dreamed up by various educational specialists, telling the AI what it is to undertake in an educational context. That's considered 'new' in the sense that it is an inspiring use of custom instructions and a commendable accomplishment that will be of benefit to students and eager learners. Perhaps by gathering psychologists and mental health specialists, an AI-based Therapy Mode could similarly be ingeniously developed. Mindful readers asked me about this. Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here. If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.

ChatGPT Study Mode Introduced

A recent announcement by OpenAI went relatively far and wide. They cheerfully introduced ChatGPT Study Mode, as articulated in their blog posting 'Introducing Study Mode' on July 29, 2025, and identified these salient points (excerpts):

As far as can be discerned from the outside, this capability didn't involve revising the underpinnings of the AI, nor did it seem to require bolting on additional functionality. It seems that the mainstay was done using custom instructions (note, if they did make any special core upgrades, they seem to have remained quiet on the matter since it isn't touted in their announcements).

Custom Instructions Are Powerful

Assuming that they only or mainly used custom instructions to bring forth this useful result, it gives great hope and spurs avid attention to the amazing power of custom instructions. You can do a lot with custom instructions. But I would wager that few know about custom instructions and even fewer have done anything substantive with them. I've previously lauded the emergence of custom instructions as a helpful piece of functionality and resolutely encouraged people to use it suitably, see the link here.
Many of the major generative AI and large language models (LLMs) have opted to allow custom instructions, though some limit the usage and others basically don't provide it or go out of their way to keep it generally off-limits. Allow me a brief moment to bring everyone up to speed on the topic.

Suppose you want to tell AI to act a certain way, and you want the AI to do this across all subsequent conversations. This usually only applies to your instance. I'll explain in a moment how to do so across instances and allow other people to tap into your use of custom instructions.

I might want my AI to always give me its responses in a poetic manner. You see, perhaps I relish poems. I go to the specified location of my AI that allows the entering of a custom instruction and tell it to always respond poetically. After saving this, I will then find that any conversation will always be answered with poetic replies by the AI.

In this case, my custom instruction was short and sweet. I merely told the AI to compose answers poetically. If I had something more complex in mind, I could devise a quite lengthy custom instruction. The custom instruction could go on and on, telling the AI to write poetically when it is daytime, but not at nighttime, and to make sure the poems are lighthearted and enjoyable. I might further indicate that I want poems that are rhyming and must somehow encompass references to cats and dogs. And so on. I'm being a bit facetious and just giving you a sense that a custom instruction can be detailed and provide a boatload of instructions.

Custom Instructions Mixed Bag

The beauty of custom instructions is that they serve as an overarching form of guidance to the generative AI. They are considered to have a global scope for your instance. All conversations that you have will be subject to whatever the custom instruction says should take place.

With such power come some downsides. Imagine that I am using the AI and have a serious question that should not be framed in a poem. Lo and behold, I ask the solemn question and get a poetic answer. The AI is following what the custom instruction indicated. Period, end of story.

The good news is that you can tell the AI that you want it to disregard the custom instructions. When I enter a question, I could mention in the prompt that the AI is not to abide by the custom instructions. Voila, the AI will provide a straightforward answer. Afterward, the custom instructions will continue to apply.

The malleability is usually extensive. For example, I might tell the AI that for the next three prompts, it should not abide by the custom instructions. Or I could tell the AI that the custom instructions are never to be obeyed unless I say in a prompt that they should be obeyed. I think you can see that this is a generally malleable aspect.

Goofed Up Custom Instructions

The most disconcerting downside of custom instructions is that you might inadvertently say something in the instructions that is to your detriment. Maybe you won't even realize what you've done.

Consider my poetic-demanding custom instruction. I could include a line that insists that no matter what any of my prompts say, the AI should never allow me to override the custom instruction. Perhaps I thought that was a smart move. The problem will be that later, I might forget that I had included that line. When I try to turn off the custom instruction via a prompt, the AI might refuse. Usually, the AI will inform you of such a conflict, but there's no guarantee that it will.
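To make the mechanics concrete, here is a minimal sketch of how a developer might approximate a standing custom instruction programmatically, by prepending a system message to every request. This is an illustration under stated assumptions, not how ChatGPT's built-in custom-instructions setting is implemented: it assumes the OpenAI Python SDK and an API key in the environment, the model name and instruction text are illustrative placeholders, and the override flag simply mirrors the idea of telling the AI to disregard the instruction for a given prompt.

```python
# A minimal sketch (not the column's own example): approximating a persistent
# "custom instruction" by prepending a system message to every request.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment; the
# model name and instruction text are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

CUSTOM_INSTRUCTION = (
    "Always respond in the form of a short, lighthearted rhyming poem, "
    "unless the user explicitly asks for a plain answer."
)

def ask(prompt: str, obey_custom_instruction: bool = True) -> str:
    """Send one prompt, optionally applying the standing instruction."""
    messages = []
    if obey_custom_instruction:
        # The system message plays the role of the global custom instruction:
        # it shapes the reply regardless of what the user asks.
        messages.append({"role": "system", "content": CUSTOM_INSTRUCTION})
    messages.append({"role": "user", "content": prompt})

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=messages,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("What is the capital of France?"))                                   # poetic reply
    print(ask("What is the capital of France?", obey_custom_instruction=False))    # plain reply
```

The same pattern is roughly how applications layer standing guidance on top of a general-purpose model, and the downsides discussed here apply just as well: whatever the system-level instruction says will color every reply until it is changed or switched off.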
Worse still is a potential misinterpretation of something in your custom instructions. I might have said that the AI should never mention ugly animals in any of its responses. What in the world is an ugly animal? The sky is the limit. Unfortunately, the AI will potentially opt not to mention all kinds of animals that were not what I had in mind.

Would I realize what is happening? Possibly not. The AI responses would perchance mention some animals and not mention others. It might not be obvious which animals aren't being described. My custom instruction is haunting me because the AI interprets what I said, though the interpretation differs from what I meant.

AI Mental Health Advice

Shifting gears, let's aim to use custom instructions for the betterment of humanity, rather than simply producing poetic responses.

The ChatGPT Study Mode pushes the AI to perform Socratic dialogues with the user and gives guidance rather than spitting out answers. The custom instructions get this to occur. Likewise, the AI attempts to assess the level of proficiency of the user and adjusts to their skill level. Personalized feedback is given. The AI tracks your progress. It's nifty. All due to custom instructions.

What other context might custom instructions tackle? I'll focus on the context of mental health.

Here's the deal. We get together a bunch of psychologists, psychiatrists, therapists, mental health professionals, and the like. They work fervently on composing a set of custom instructions telling the AI how to perform therapy. This includes diagnosing mental health conditions. It includes generating personal recommendations on aiding your mental health. We could take the generic generative AI that saunters around in the mental health context and turn it into something more bona fide and admirable. Boom, drop the mic.

The World Is Never Easy

If you are excited about the prospects of these kinds of focused custom instructions, such as for therapy, I am going to ask you to sit down and pour yourself a glass of fine wine. The reason I say this is that there have indeed been such efforts in the mental health realm. And, by and large, the results are not as standout as you might have hoped.

First, the topic of mental health is immense and involves risks to people when inappropriate therapy is employed. Trying to devise a set of custom instructions that can fully and sufficiently provide bona fide therapy is not only unlikely to succeed but also inevitably misleading. I say this because some have tried this route and made outlandish claims about what the AI can do as a result of the loaded custom instructions. Watch out for unfulfilled claims. See my extensive coverage at the link here.

Second, any large set of custom instructions on performing therapy is bound to be incomplete, contain misinterpretable indications, and otherwise be subject to the downsides that I've noted above. Using custom instructions as an all-in-one solution in this arena is like trying to use a hammer on everything, even though you ought to be using a screwdriver on screws, and so on.

Third, some argue that using custom instructions for therapy is better than not having any custom instructions at all. The notion is that if you are using a generic generative AI that is working without mental health custom instructions, you are certainly better off using one that at least has custom instructions. The answer there is that it depends on the nature of the custom instructions.
There is a solid chance that the custom instructions might worsen what the AI is going to say. You can just as easily undercut the AI as boost it. Don't fall into the trap of assuming that custom instructions necessarily change things for the better.

Accessing Custom GPTs

I had earlier alluded to the fact that there is a means of allowing other users to employ your set of custom instructions. Many of the popular LLMs tend to allow you to generate an AI applet of sorts, containing tailored custom instructions that can be used by others. Sometimes the AI maker establishes a library in which these applets reside and are publicly available. OpenAI provides this via the use of GPTs, which are akin to ChatGPT applets -- you can learn about how to use those in my detailed discussion at the link here and the link here.

Unfortunately, as with all new toys, some have undermined these types of AI applets.

There are AI applets that contain custom instructions written by licensed therapists who genuinely did their best to craft therapy-related custom instructions. That seems encouraging. But I'm hoping you now realize that even the best of intentions might not come out suitably. Good intentions don't guarantee suitable results. Those custom instructions could have trouble brewing within them.

There are also AI applets that brashly claim to be for mental health, yet they are utterly shallow and devised by someone who has zero expertise in mental health. Don't let your guard down because of flashy claims. The more egregious ones are AI applets that are marketed as though they are about mental health, when the reality is that it is a scam. The custom instructions have nothing to do with therapy. Instead, the custom instructions attempt to take over your AI, grab your personal info, and generally be a pest and make life miserable for you. Wolves in sheep's clothing.

The Full Meal Deal

Where do we go from here? Using custom instructions to bring forth an AI-based Therapy Mode in a generic generative AI is not generally a good move. Even if you assemble a worthy collection of the best psychologists and mental health experts, you are trying to put fifty pounds into a five-pound bag. It just isn't a proper fit.

The better path is being pursued. I am a big advocate of, and am doing research on, generative AI and LLMs that are built from the ground up for mental health advisement; see my framework layout at the link here. The approach consists of starting from the beginning when devising an LLM to make it into a suitable therapy-oriented mechanism. This is in stark contrast to trying to take an already completed generic generative AI and reshape it into a mental health context. I believe it is wiser to take a fresh uplift instead.

Bottom Line Answered

For readers who contacted me and asked whether the ChatGPT Study Mode foretells that the same impressive results of education-oriented custom instructions can be had in other domains, yes, for sure, there are other domains to which this can readily apply. Is mental health one of those suitable domains? I vote no. Mental health advisement deserves more.

A final thought for now. Voltaire astutely observed: 'No problem can withstand the assault of sustained thinking.' We need to put on our thinking caps and aim for the right solution rather than those quick-fix options that might seem viable but contain unsavory gotchas and injurious hiccups. Sustained thinking is worth its weight in gold.
Yahoo
Here's How You Can Earn $100 In Passive Income By Investing In Healthpeak Properties Stock
Healthpeak Properties Inc. (NYSE:DOC) is a real estate investment trust that owns, operates, and develops high-quality real estate focused on healthcare discovery and delivery. It will report its Q3 2025 earnings on Oct. 23. Wall Street analysts expect the company to post EPS of $0.22, down from $0.45 in the prior-year period. According to data from Benzinga Pro, quarterly revenue is expected to be $700.87 million, up from $700.40 million a year earlier.

The 52-week range of Healthpeak Properties' stock price was $16.63 to $23.26. Healthpeak Properties' dividend yield is 7.12%. It paid $1.22 per share in dividends during the last 12 months.

The Latest On Healthpeak Properties

The company on July 24 announced its Q2 2025 earnings, posting FFO of $0.46, compared to the consensus estimate of $0.47, and revenues of $694.35 million, compared to the consensus of $699.17 million, as reported by Benzinga. The company provided its full-year 2025 guidance, expecting diluted FFO per share in the range of $1.81 to $1.87, and diluted EPS of $0.25 to $0.31.

How Can You Earn $100 Per Month As A Healthpeak Properties Investor?

If you want to make $100 per month -- $1,200 annually -- from Healthpeak Properties dividends, your investment value needs to be approximately $16,854, which is around 983 shares at $17.14 each.

Understanding the dividend yield calculation: you need two key variables -- the desired annual income ($1,200) and the dividend yield (7.12% in this case). So, $1,200 / 0.0712 = $16,854 to generate an income of $100 per month.

You can calculate the dividend yield by dividing the annual dividend payments by the current stock price. Note that the dividend yield can change over time, since it reflects fluctuating stock prices and dividend payments on a rolling basis. For instance, assume a stock that pays $2 as an annual dividend is priced at $50. Its dividend yield would be $2/$50 = 4%. If the stock price rises to $60, the dividend yield drops to 3.33% ($2/$60). A drop in stock price to $40 will have the inverse effect and increase the dividend yield to 5% ($2/$40).

In summary, income-focused investors may find Healthpeak Properties stock an attractive option for making a steady income of $100 per month by owning 983 shares of stock.
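For readers who prefer to see the arithmetic spelled out, here is a small sketch of the same calculation in Python. The figures are the ones quoted in the article; the function names are illustrative and not from any Benzinga or Yahoo Finance tool.

```python
# A small sketch of the dividend-income arithmetic described above.
# Figures come from the article text; function names are illustrative.

def dividend_yield(annual_dividend_per_share: float, share_price: float) -> float:
    """Trailing dividend yield: annual dividends per share divided by the share price."""
    return annual_dividend_per_share / share_price

def required_investment(desired_annual_income: float, yield_fraction: float) -> float:
    """Dollars needed invested to earn the desired annual income at a given yield."""
    return desired_annual_income / yield_fraction

if __name__ == "__main__":
    yld = dividend_yield(1.22, 17.14)            # ~0.0712, i.e. 7.12%
    target = required_investment(1_200, 0.0712)  # ~$16,854
    shares = target / 17.14                      # ~983 shares
    print(f"yield: {yld:.2%}, investment: ${target:,.0f}, shares: {shares:,.0f}")
```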
Yahoo
Paramedics are less likely to identify a stroke in women than men. Closing this gap could save lives
A stroke happens when the blood supply to part of the brain is cut off, either because of a blockage (called an ischaemic stroke) or bleeding (a haemorrhagic stroke). Around 83% of strokes are ischaemic.

The main emergency treatment for ischaemic strokes is a 'clot-busting' process called intravenous thrombolysis. But this only works if administered quickly – ideally within an hour of arriving at hospital, and no later than 4.5 hours after symptoms begin. The faster treatment is given, the better the person's chance of survival and recovery.

However, not everyone gets an equal chance of receiving this treatment quickly. Notably, research has shown ambulance staff are significantly less likely to correctly identify a stroke in women than in men. In a recent study, we modelled the potential health gains and cost savings of closing this gap. And they're substantial.

The sex gap in stroke diagnosis

In Australia, about three-quarters of people who experience stroke arrive at hospital by ambulance. If paramedics suspect a stroke, they can take patients directly to a hospital which specialises in stroke care, and alert the hospital team so scans and treatment can start immediately.

Research has shown women aged under 70 are 11% less likely than men to have their stroke recognised by paramedics before they arrive at the hospital. While younger men and women experience stroke at a similar rate, the symptoms they present with may be different, with 'typical' symptoms more common in men and 'atypical' symptoms more common in women.

Research has shown women and men are equally likely to present with movement and speech problems when having a stroke. However, women are more likely to show vague symptoms, such as general weakness, changes in alertness, or confusion. These 'atypical' symptoms can be overlooked, leaving women more vulnerable to misdiagnosis, delayed treatment, and preventable harm.

What we did

In our study, published recently in the Medical Journal of Australia (MJA), we used ambulance and hospital data from a 2022 MJA study in New South Wales. This is the study mentioned above that showed paramedics correctly identified stroke more often in men than in women under 70.

From this dataset, we identified more than 5,500 women under 70 who had an ischaemic stroke between 2005 and 2018. Using this group, we built a model to compare two scenarios: the status quo, where women's strokes are identified at the current rate of accuracy; and an improved scenario, where women's strokes are identified at the same rate as men's. We then projected patients' health over time, including their level of impairment, risk of another stroke, and immediate and long-term survival.

Closing the diagnosis gap would save lives and money

When women's stroke diagnosis rate was improved to match men's, each woman gained an average of 0.14 extra years of life (roughly 51 days) and 0.08 extra quality-adjusted life years (QALYs), meaning an additional 29 days in full health. The improved scenario also meant A$2,984 in health-care costs would be saved per woman.

Scaled to the national level based on the number of women under 70 hospitalised with ischaemic stroke each year, closing this gap would mean 252 extra years of life, 144 extra QALYs, and $5.4 million in cost savings annually.
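As a rough back-of-the-envelope check of that national scaling: the article gives the per-woman gains and the national totals but not the annual cohort size, so the sketch below infers a cohort of roughly 1,800 women from the reported totals. Treat that number as an illustrative assumption, not a study figure.

```python
# Back-of-the-envelope check of the national scaling reported above.
# Per-woman figures come from the article; the annual cohort size is NOT stated
# in the text and is inferred here from the national totals (252 / 0.14 ≈ 1,800),
# so it is an illustrative assumption rather than a study figure.

per_woman_life_years = 0.14     # extra years of life per woman
per_woman_qalys = 0.08          # extra quality-adjusted life years per woman
per_woman_savings_aud = 2_984   # health-care costs saved per woman (A$)

assumed_annual_cohort = 1_800   # inferred annual number of women under 70

print(f"extra life-years: {per_woman_life_years * assumed_annual_cohort:.0f}")          # ~252
print(f"extra QALYs:      {per_woman_qalys * assumed_annual_cohort:.0f}")               # ~144
print(f"savings (A$ m):   {per_woman_savings_aud * assumed_annual_cohort / 1e6:.1f}")   # ~5.4
```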
Some limitations

We didn't have sex-specific data for every aspect of the model, which is in itself a telling sign of the lack of recognition of sex as an important factor in understanding disease. Because of this, we used combined data from both men and women in some parts of our model, which may have affected the results. Further, the NSW data we used for rates of treatment with intravenous thrombolysis were higher than the national average, so our national figures may be slightly over-estimated.

Beyond stroke – why all this matters

The disparity we found is one example of a broader, systemic issue in women's health: sex-based differences in diagnosis and treatment that favour men. Too often, women's symptoms are misinterpreted or dismissed because they don't match a 'typical' pattern. This can lead to delays, missed opportunities for early treatment, and worse outcomes for women.

In stroke, faster and more accurate diagnosis means people are less likely to die or require long-term care, and more likely to recover better and get back to their daily lives sooner.

So what can we do to close the diagnosis gap? Investing in better training for paramedics and other emergency responders, so they can recognise a wider range of stroke presentations, could pay off many times over. Public awareness campaigns that highlight atypical stroke symptoms could also help. Technologies such as mobile stroke units and telemedicine support may be part of the solution, but they must be implemented with attention to sex-specific needs.

This article is republished from The Conversation. It was written by: Lei Si, Western Sydney University; Laura Emily Downey, George Institute for Global Health; and Thomas Gadsden, George Institute for Global Health.

Read more:
World's 'oldest baby': what a 30-year-old embryo tells us about the future of fertility
Condoms, PrEP and vaccines: how the UK is expanding STI prevention
Menopause and brain fog: why lifestyle medicine could make a difference

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.