5 ChatGPT Prompts to Help You Live a Healthier Lifestyle

This article is published by AllBusiness.com, a partner of TIME.
In an age when smart technology touches every corner of our lives, it's no surprise that artificial intelligence is now playing a role in helping us stay healthy. Whether you're struggling with persistent fatigue, trying to manage high blood pressure, or simply looking to gain or lose a few pounds, ChatGPT can be a helpful brainstorming tool—generating suggestions that you can take to your doctor or other licensed professionals.
But let's be very clear: ChatGPT should never be used to make actual health decisions. It is not a medical device, it is not trained to diagnose or treat, and it is not a replacement for your physician, dietitian, or licensed therapist. Instead, use it to gather information, clarify goals, and generate ideas you can review and personalize with your healthcare provider. With that purpose in mind, here are five useful prompts that can support a healthier lifestyle when paired with expert guidance. We wrote this article with research assistance and insights from AI.
ChatGPT Prompts to Help You Improve Your Health
Prompt 1: Create a Goal-Based Meal Plan
Why It Works: Healthy eating is essential to managing many health conditions. Whether you're trying to gain weight, lower your cholesterol, or balance blood sugar, a smart meal plan can provide structure. ChatGPT can suggest meal ideas based on your goals and preferences—so you can bring those ideas to your doctor or dietitian for review.
Use Case Example:
'Create a 7-day meal plan for someone with high glucose levels. The meals should be low in added sugars, high in fiber, and include vegetarian options.'
Important: You should never follow a ChatGPT-generated meal plan without getting it reviewed by a qualified nutritionist or physician—especially if you have health conditions such as diabetes, hypertension, or food allergies.
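For readers comfortable with a bit of code, the same prompt can also be sent through OpenAI's API rather than the ChatGPT interface. Below is a minimal sketch, assuming the official openai Python SDK (version 1 or later), an OPENAI_API_KEY environment variable, and an illustrative model name; whatever it returns still needs the same professional review described above.

```python
# Minimal sketch: sending the meal-plan prompt through the OpenAI API.
# Assumes the official openai Python SDK (>=1.0) and an OPENAI_API_KEY
# environment variable; "gpt-4o" is an illustrative model choice.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Create a 7-day meal plan for someone with high glucose levels. "
    "The meals should be low in added sugars, high in fiber, and include "
    "vegetarian options."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

# Treat the output as a draft to bring to a dietitian or physician,
# never as medical guidance in itself.
print(response.choices[0].message.content)
```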
Health Issues Potentially Addressed (With Professional Supervision):
Prompt 2: Design a Beginner-Friendly Workout Plan
Why It Works: Regular movement can improve energy, mood, and overall health. If you're looking for a starting point, ChatGPT can create sample workouts tailored to your fitness level and goals—but again, this should only be a way to start a conversation with your doctor or physical therapist, especially if you've been sedentary, injured, or have chronic conditions.
Use Case Example:
'Create a 4-week home workout plan for a beginner who wants to improve energy and lose 10 pounds. No equipment.'
You can also ask for modifications, time limits, or mobility-friendly routines.
Health Issues This May Support (Under Guidance):
Prompt 3: Build a Stress-Management and Mindfulness Routine
Why It Works: Chronic stress can impact everything from sleep to blood pressure to immune function. ChatGPT can suggest mindfulness exercises, breathing techniques, or journaling prompts—but you should never rely on AI to treat anxiety, depression, or other mental health conditions. Consult a licensed mental health professional for diagnosis or treatment.
Use Case Example:
'Create a morning and evening mindfulness routine for someone with a stressful job and poor sleep.'
ChatGPT might recommend guided breathing, screen-free wind-down routines, or gratitude journaling.
Potential Areas of Support (With Supervision):
Prompt 4: Get Lifestyle Ideas for Lowering Blood Pressure
Why It Works: Diet and lifestyle can play a big role in managing high blood pressure. With this prompt, ChatGPT can generate a list of food suggestions and lifestyle habits that may help—but this is strictly general information.
Use Case Example:
'Give me a list of 10 foods and 5 habits that can help naturally reduce high blood pressure.'
ChatGPT may suggest potassium-rich foods, sodium reduction, exercise, hydration, or sleep improvements—but again, keep in mind these are only ideas and never a replacement for professional medical advice.
Common Conditions That Could Benefit (Always Under Care):
Prompt 5: Create a Habit and Health Tracker
Why It Works: Habit tracking improves consistency. ChatGPT can generate checklists or logs to help you track things like water intake, sleep, mood, or steps—but even this should be shared with your doctor or specialist if it relates to ongoing treatment (e.g., blood sugar, blood pressure, fatigue symptoms).
Use Case Example:
'Create a printable weekly health tracker that includes water intake, exercise, sleep hours, and mood rating.'
Customizable trackers can keep you motivated—but always include your care team in the loop, especially when dealing with chronic conditions or if something feels off.
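If you prefer to build the tracker yourself rather than ask ChatGPT for one, a few lines of code are enough. Here is a minimal sketch using only the Python standard library; the file name and columns mirror the example prompt above and are purely illustrative.

```python
# Minimal sketch: generate a blank weekly health tracker as a CSV file that
# can be printed or opened in a spreadsheet. Uses only the standard library;
# the columns mirror the example prompt and are illustrative.
import csv

DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]
COLUMNS = ["Day", "Water (glasses)", "Exercise (minutes)",
           "Sleep (hours)", "Mood (1-5)"]

with open("weekly_health_tracker.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    for day in DAYS:
        writer.writerow([day, "", "", "", ""])  # blank cells to fill in by hand

print("Saved weekly_health_tracker.csv")
```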
Examples of Use:
Final Thoughts: AI Can Inspire You to Get Healthier
ChatGPT can be a powerful partner for generating ideas to support your wellness journey—but it must be used responsibly. It is not a doctor, not a therapist, and not a substitute for licensed medical care. Any plan, idea, or suggestion created by AI should be reviewed and approved by your healthcare team before you act on it.
Use ChatGPT to help you explore new routines, organize your health goals, or clarify questions you want to ask your doctor. It's a brainstorming tool, not a treatment tool. When used this way, it can make your next doctor's visit more productive—and your health journey more personalized.
Your body is unique. What works for others may not work for you. The safest, most effective way to improve your health is to partner with licensed professionals—and use tools like ChatGPT to help you organize, question, and plan more effectively.
FAQs on Using ChatGPT to Improve Health
Can ChatGPT give me medical advice?
No. ChatGPT is not a medical authority and should not be used to make health decisions. It can only generate general suggestions or ideas that should be discussed and approved by licensed healthcare professionals.
Is it safe to follow a diet or workout plan that ChatGPT creates?
Not without medical clearance. Every body is different. What ChatGPT generates is not personalized to your medical history, medications, or risk factors. Always review any plan with your doctor, dietitian, or trainer before trying it.
Can ChatGPT help me manage a chronic condition?
Only in a supportive, informational way. You might use ChatGPT to come up with questions for your doctor or to help you better understand lifestyle changes—but management of chronic conditions must be directed by your healthcare provider.
About the Authors
Dominique A. Harroch is the Chief of Staff at AllBusiness.com. She has been the Chief of Staff or Operations Leader for multiple companies, where she leveraged her extensive experience in operations management, strategic planning, and team leadership to drive organizational success. Her background spans over two decades in operations leadership, event planning at her own start-up, and marketing at various financial and retail companies. Dominique is known for her ability to optimize processes, manage complex projects, and lead high-performing teams. She holds a BA in English and Psychology from U.C. Berkeley and an MBA from the University of San Francisco. She can be reached via LinkedIn.
Richard D. Harroch is a Senior Advisor to CEOs, management teams, and Boards of Directors. He is an expert on M&A, venture capital, startups, and business contracts. He was the Managing Director and Global Head of M&A at VantagePoint Capital Partners, a venture capital fund in the San Francisco area. His focus is on internet, digital media, AI and technology companies. He was the founder of several Internet companies. His articles have appeared online in Forbes, Fortune, TIME, MSN, Yahoo, Fox Business and AllBusiness.com. Richard is the author of several books on startups and entrepreneurship as well as the co-author of Poker for Dummies and a Wall Street Journal-bestselling book on small business. He is the co-author of a 1,500-page book published by Bloomberg on mergers and acquisitions of privately held companies. He was also a corporate and M&A partner at the international law firm of Orrick, Herrington & Sutcliffe. He has been involved in over 200 M&A transactions and 250 startup financings. He can be reached through LinkedIn.
Copyright © by Richard D. Harroch. All rights reserved.

Related Articles

Function Health CEO Jonathan Swerdlin Is Trying to Change the Way We Manage Our Wellbeing

Time Magazine

When Jonathan Swerdlin tells you he pours his blood, sweat, and tears into his work, he means it more literally than most company leaders. Swerdlin is the CEO and co-founder of Function Health, a fast-growing personalized health testing platform that launched in 2023. I, too, have given the company dozens of vials of my blood: When I found out about Function last year, I hadn't been to the doctor in about a decade (a shameful confession for a health journalist). I was tired of what felt like a broken health-care system, but I knew I needed a check-up, so I signed up for Function, which costs $499 per year and includes two rounds of blood testing. First, you get an initial assessment with 160 lab tests, and then three to six months later, a follow-up including 60-plus retests to see how your numbers are changing. While no treatment regimen is provided based on results, clients receive personalized written insights from Function's team of clinicians.

'We, previous to this, knew less about our bodies than we knew about our cars. And Function has changed that,' Swerdlin says. 'Nobody can say in 2025 that they have no way to understand what's happening inside their body. By taking control today, you're taking control of the future. Data over time—and it's so important that it's done over time—can help ensure you're not missing anything that could materially impact your life.'

Function—which is backed by big-name investors like Matt Damon, Zac Efron, Pedro Pascal, and Jay Shetty—now has hundreds of thousands of paying members. It's at the forefront of a growing trend of companies that aim to provide personalized insights to people tired of the traditional health care system. The idea is that, instead of waiting around for a doctor to figure out something is acutely wrong with you, you find out early enough to prevent it from becoming a full-blown medical crisis. TIME spoke to Swerdlin in May about how Function, one of the TIME100 Most Influential Companies of 2025, is expanding and why the health care industry is so overdue for innovation. This interview has been condensed and edited for clarity.

What inspired you to start Function Health, and what keeps you motivated to run this company every day?

I had health issues when I was young, so I learned very early about the shortcomings of what health care can actually deliver. Fast forward, and my father beat prostate cancer, so I witnessed the power of medicine. I was caught between understanding the shortcomings of health care and the administration of it, and yet being in incredible admiration of the power of medicine. That sparked my own interest in taking control of my health, and I spent years and tens of thousands of dollars cobbling together what is mostly an analog version of what Function delivers today. It became clear to me that it wasn't a way to hack the system—it was actually the future of the way that people manage their lifelong health. It gave me so much agency and so much power that I felt like, I wish everybody could have this. I wish my whole family could have this; I wish my friends could have this, because I care about their life, and I want them to be able to benefit from this. Nobody knows what's happening inside their body, yet the objective should be to need less health care and to have more life.

What do you think are the biggest challenges and shortcomings right now in the traditional health care system?

Health care is incomplete. What we expect to get from health care, it is unable to meet. Health care is not about helping you manage your lifelong health—it's about solving problems. We've been missing any kind of platform or system that can help people manage their lifelong health outside of acute issues. And so what we need is a way to be on top of our heart, our liver, our kidneys, our thyroid, our hormones, and to always know what's going on and what we can be doing, outside of pharmaceutical interventions, to be healthier.

What is a day in your life as CEO of Function Health like?

I have one focus, which is to do every single thing in my power to make sure our members can live 100 healthy years. That's my job every day, so what do I actually tangibly do? I spend my mornings, first, taking care of my health: I exercise, I eat well, I hydrate, I sleep, I do those things that one needs to do. Once I've taken care of my health, I'll typically spend a lot of time understanding what's new in the landscape, what kind of research has been coming out, and what's the latest science. I have to keep on top of that every day. I also work with the team across all facets of the business, from how we tell our story, to where we are in engineering—as well as working diligently on expansion of the product.

Speaking of staying on top of the science: Do you have a favorite way to do that?

Well, I have a lot of help. I have a team that helps me do that; they surface and curate things that are coming up that are important in science and technology. But I also go digging around journals myself, and I work on keeping up with AI—which is moving at a breakneck pace. It's almost a job unto itself.

You've previously remarked how cool it is that people finally care about health as a space in which to build a revolution. Has health care been pretty stagnant until now?

Health care has largely seen incremental changes in mostly institutional systems, and this generation is seeing exponential change. The greatest problem that we have is that 40% of premature deaths are preventable. All the suffering that's attached to that—you can't quantify it. People are suffering. It is the greatest problem that we can address. It's a tangible problem, it's solvable, it's deeply impactful, and it sits inside of a tremendous market. There's $7 trillion spent across health and wellness. It's always been a bespoke and artisanal service, and it's moving from that to an engineered technology product model—which makes the ability to be healthy radically deflationary, and radically more accessible.

What's the greatest challenge of running a company like Function Health?

The biggest challenge is that we're racing against time to make sure that any person in the world gets access to this, and that means scaling up things that have previously not been scaled up.

Function recently acquired Ezra and introduced a $499 full-body MRI scan. Why are you excited about this, and what will it bring to the company?

Having a 360-degree perspective of your body has been difficult and very expensive. It takes a lot of time, traditionally, and it's hard to find. With this acquisition, we're introducing an FDA-cleared scan that AI brings from 60 minutes down to 22 minutes, and from $1,500 down to $499. By making it this affordable, this accessible, and this convenient and fast, we're able to make [it] a thing that people do annually—rather than something available to only a select few. People who get labs and scans done annually have, for the first time, a full 360-degree baseline of what's happening inside their body so they can fully understand how it's changing over time. You need to be able to detect changes, because changes are the best indicator of when something is going wrong.

You've also partnered with companies like Equinox, Thrive Global, and GRAIL. What makes a potential partner a good fit?

Our partners are committed to helping people achieve their best health. Equinox, for example, is clearly committed to making sure people are being healthy, because the whole purpose of their business is to get people to exercise and live a healthy lifestyle. So it's a natural mission alignment. The No. 1 thing is mission alignment, and then, does it actually make sense from a scale perspective? We think in the millions and billions. This is an 8 billion-person problem we're solving.

Can you talk more about the way you prioritize accessibility, especially when good health care is out of reach for so many people?

I'm so passionate about this. Function is $1.37 per day. With our partnership with Quest Diagnostics, we're available in over 2,200 locations around the country. It's also very convenient. The actual time it takes for a blood draw and urine collection is 3 to 5 minutes. It is the smallest investment of time, and $1.37 per day, to be on top of your health. We have people using Function in every state, including places like Mississippi and Alabama, where traditionally they weren't really adopters and are known for having medical deserts. We're creating health in what were previously medical deserts.

I get the sense that running Function feels more like a calling to you than work. Is that right?

Our life expectancy has flatlined. We are not getting healthier as a country. We can solve those problems—but upstream, we're not actually getting to the root of the issue. I wake up in the morning with the clear directive to help people not suffer or die of preventable death, and that's what drives me. That's what gets me to work. Running any company is hard. I can't imagine not fully, 100% believing in what you're doing, and caring about it with every fiber of your being. If you're going to be an entrepreneur, you're going to build something in the world, make sure it's actually solving a problem you can care about for the rest of your life. And this is my life's work.

Psychologists And Mental Health Experts Spurred To Use Custom Instructions And Make AI Into A Therapist Adjunct

Forbes

In today's column, I first examine the new ChatGPT Study Mode that has gotten big-time headline news and then delve into whether the crafting of this generative AI capability could be similarly undertaken in the mental health realm. The idea is this. The ChatGPT Study Mode was put together by crafting custom instructions for ChatGPT. It isn't an overhaul or feature creation. It seems to be nothing new per se, other than specifying a set of detailed instructions, as dreamed up by various educational specialists, telling the AI what it is to undertake in an educational context. That's considered 'new' in the sense that it is an inspiring use of custom instructions and a commendable accomplishment that will be of benefit to students and eager learners. Perhaps by gathering psychologists and mental health specialists, an AI-based Therapy Mode could similarly be ingeniously developed. Mindful readers asked me about this. Let's talk about it. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject. There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here. If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.

ChatGPT Study Mode Introduced

A recent announcement by OpenAI went relatively far and wide. They cheerfully introduced ChatGPT Study Mode, as articulated in their blog posting 'Introducing Study Mode' on July 29, 2025, and identified several salient points. As far as can be discerned from the outside, this capability didn't involve revising the underpinnings of the AI, nor did it seem to require bolting on additional functionality. It seems that the mainstay was done using custom instructions (note, if they did make any special core upgrades, they seem to have remained quiet on the matter since it isn't touted in their announcements).

Custom Instructions Are Powerful

Assuming that they only or mainly used custom instructions to bring forth this useful result, it gives great hope and spurs avid attention to the amazing power of custom instructions. You can do a lot with custom instructions. But I would wager that few know about custom instructions and even fewer have done anything substantive with them. I've previously lauded the emergence of custom instructions as a helpful piece of functionality and resolutely encouraged people to use them suitably, see the link here.
Many of the major generative AI apps and large language models (LLMs) have opted to allow custom instructions, though some limit the usage and others basically don't provide it or go out of their way to keep it generally off-limits. Allow me a brief moment to bring everyone up to speed on the topic. Suppose you want to tell AI to act a certain way. You want the AI to do this across all subsequent conversations. This usually only applies to your instance. I'll explain in a moment how to do so across instances and allow other people to tap into your use of custom instructions.

I might want my AI to always give me its responses in a poetic manner. You see, perhaps I relish poems. I go to the specified location of my AI that allows the entering of a custom instruction and tell it to always respond poetically. After saving this, I will then find that any conversation will always be answered with poetic replies by the AI. In this case, my custom instruction was short and sweet. I merely told the AI to compose answers poetically. If I had something more complex in mind, I could devise a quite lengthy custom instruction. The custom instruction could go on and on, telling the AI to write poetically when it is daytime, but not at nighttime, and to make sure the poems are lighthearted and enjoyable. I might further indicate that I want poems that are rhyming and must somehow encompass references to cats and dogs. And so on. I'm being a bit facetious and just giving you a semblance that a custom instruction can be detailed and provide a boatload of instructions.

Custom Instructions Mixed Bag

The beauty of custom instructions is that they serve as an overarching form of guidance to the generative AI. They are considered to have a global scope for your instance. All conversations that you have will be subject to whatever the custom instruction says should take place. With such power comes some downsides. Imagine that I am using the AI and have a serious question that should not be framed in a poem. Lo and behold, I ask the solemn question and get a poetic answer. The AI is following what the custom instruction indicated. Period, end of story. The good news is that you can tell the AI that you want it to disregard the custom instructions. When I enter a question, I could mention in the prompt that the AI is not to abide by the custom instructions. Voila, the AI will provide a straightforward answer. Afterward, the custom instructions will continue to apply. The malleability is usually extensive. For example, I might tell the AI that for the next three prompts, it should not abide by the custom instructions. Or I could tell the AI that the custom instructions are never to be obeyed unless I say in a prompt that they should be obeyed. I think you can see that this is a generally malleable aspect.

Goofed Up Custom Instructions

The most disconcerting downside of custom instructions is that you might inadvertently say something in the instructions that is to your detriment. Maybe you won't even realize what you've done. Consider my poetic-demanding custom instruction. I could include a line that insists that no matter what any of my prompts say, never allow me to override the custom instruction. Perhaps I thought that was a smart move. The problem will be that later, I might forget that I had included that line. When I try to turn off the custom instruction via a prompt, the AI might refuse. Usually, the AI will inform you of such a conflict, but there's no guarantee that it will.
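To make the mechanics concrete: in the ChatGPT app, custom instructions are a settings field, but the same standing-guidance behavior can be approximated programmatically. Below is a minimal sketch, assuming the official openai Python SDK (version 1 or later) and an illustrative model name; the system message stands in for a saved custom instruction, and the opt-out flag mirrors telling the AI to disregard the instruction for a given prompt.

```python
# Minimal sketch: approximating a "custom instruction" with a system message.
# Assumes the official openai Python SDK (>=1.0) and an OPENAI_API_KEY
# environment variable; "gpt-4o" is an illustrative model choice.
from openai import OpenAI

client = OpenAI()

# Plays the role of a saved custom instruction with global scope: it is
# prepended to every conversation routed through this helper.
CUSTOM_INSTRUCTION = "Always answer in a short, lighthearted rhyming poem."

def ask(prompt: str, obey_custom_instruction: bool = True) -> str:
    messages = []
    if obey_custom_instruction:
        messages.append({"role": "system", "content": CUSTOM_INSTRUCTION})
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# A solemn question can opt out, mirroring "disregard the custom instructions":
print(ask("Summarize how vaccines work.", obey_custom_instruction=False))
```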
Worse still is a potential misinterpretation of something in your custom instructions. I might have said that the AI should never mention ugly animals in any of its responses. What in the world is an ugly animal? The sky is the limit. Unfortunately, the AI will potentially opt not to mention all kinds of animals that were not what I had in my mind. Would I realize what is happening? Possibly not. The AI responses would perchance mention some animals and not mention others. It might not be obvious which animals aren't being described. My custom instruction is haunting me because the AI interprets what I said, though the interpretation differs from what I meant.

AI Mental Health Advice

Shifting gears, let's aim to use custom instructions for the betterment of humanity, rather than the act of simply producing poetic responses. The ChatGPT Study Mode pushes the AI to perform Socratic dialogues with the user and gives guidance rather than spitting out answers. The custom instructions get this to occur. Likewise, the AI attempts to assess the level of proficiency of the user and adjusts to their skill level. Personalized feedback is given. The AI tracks your progress. It's nifty. All due to custom instructions. What other context might custom instructions tackle? I'll focus on the context of mental health. Here's the deal. We get together a bunch of psychologists, psychiatrists, therapists, mental health professionals, and the like. They work fervently on composing a set of custom instructions telling the AI how to perform therapy. This includes diagnosing mental health conditions. It includes generating personal recommendations on aiding your mental health. We could take the generic generative AI that saunters around in the mental health context and turn it into something more bona fide and admirable. Boom, drop the mic.

The World Is Never Easy

If you are excited about the prospects of these kinds of focused custom instructions, such as for therapy, I am going to ask you to sit down and pour yourself a glass of fine wine. The reason I say this is that there have indeed been such efforts in the mental health realm. And, by and large, the result is not as standout as you might have hoped for. First, the topic of mental health is immense and involves risks to people when inappropriate therapy is employed. Trying to devise a set of custom instructions that can fully and sufficiently provide bona fide therapy is not only unlikely but also inevitably misleading. I say this because some have tried this route and made outlandish claims of what the AI can do as a result of the loaded custom instructions. Watch out for unfulfilled claims. See my extensive coverage at the link here. Second, any large set of custom instructions on performing therapy is bound to be incomplete, contain misinterpretable indications, and otherwise be subject to the downsides that I've noted above. The nature of using custom instructions as an all-in-one solution in this arena is like trying to use a hammer on everything, even though you ought to be using a screwdriver on screws, and so on. Third, some argue that using custom instructions for therapy is better than not having any custom instructions at all. The notion is that if you are using a generic generative AI that is working without mental health custom instructions, you are certainly better off by using one that at least has custom instructions. The answer there is that it depends on the nature of the custom instructions.
There is a solid chance that the custom instructions might worsen what the AI is going to say. You can just as easily boost the AI as you can undercut the AI. Don't fall into the trap that custom instructions mean things are necessarily for the better.

Accessing Custom GPTs

I had earlier alluded to the aspect that there is a means of allowing other users to employ your set of custom instructions. Many of the popular LLMs tend to allow you to generate an AI applet of sorts, containing tailored custom instructions that can be used by others. Sometimes the AI maker establishes a library into which these applets reside and are publicly available. OpenAI provides this via the use of GPTs, which are akin to ChatGPT applets; you can learn how to use those in my detailed discussion at the link here and the link here. Unfortunately, as with all new toys, some have undermined these types of AI applets. There are AI applets that contain custom instructions written by licensed therapists who genuinely did their best to craft therapy-related custom instructions. That seems encouraging. But I'm hoping you now realize that even the best of intentions might not come out suitably. Good intentions don't guarantee suitable results. Those custom instructions could have trouble brewing within them. There are also AI applets that brashly claim to be for mental health, yet they are utterly shallow and devised by someone who has zero expertise in mental health. Don't let your guard down because of flashy claims. The more egregious ones are AI applets that are marketed as though they are about mental health, when the reality is that it is a scam. The custom instructions have nothing to do with therapy. Instead, the custom instructions attempt to take over your AI, grab your personal info, and generally be a pest and make life miserable for you. Wolves in sheep's clothing.

The Full Meal Deal

Where do we go from here? The use of custom instructions for therapy when aiming to bring forth an AI-based Therapy Mode in a generic generative AI is not generally a good move. Even if you assemble a worthy collection of the best psychologists and mental health experts, you are trying to put fifty pounds into a five-pound bag. It just isn't a proper fit. The better path is being pursued. I am a big advocate of, and doing research on, generative AI and LLMs that are built from the ground up for mental health advisement; see my framework layout at the link here. The approach consists of starting from the beginning when devising an LLM to make it into a suitable therapy-oriented mechanism. This is in stark contrast to trying to take an already completed generic generative AI and reshape it into a mental health context. I believe it is wiser to take a fresh uplift instead.

Bottom Line Answered

For readers who contacted me and asked whether the ChatGPT Study Mode foretells that the same impressive results of education-oriented custom instructions can be had in other domains, yes, for sure, there are other domains that this can readily apply to. Is mental health one of those suitable domains? I vote no. Mental health advisement deserves more. A final thought for now. Voltaire astutely observed: 'No problem can withstand the assault of sustained thinking.' We need to put on our thinking caps and aim for the right solution rather than those quick-fix options that might seem viable but contain unsavory gotchas and injurious hiccups. Sustained thinking is worth its weight in gold.

Man took diet advice from ChatGPT, ended up hospitalized with hallucinations

Yahoo

A man was hospitalized for weeks and suffered from hallucinations after poisoning himself based on dietary advice from ChatGPT. A case study published Aug. 5 in the Annals of Internal Medicine, an academic journal, says the 60-year-old man decided he wanted to eliminate salt from his diet. To do so, he asked ChatGPT for an alternative to salt, or sodium chloride, to which the AI chatbot suggested sodium bromide, a compound historically used in pharmaceuticals and manufacturing. Though the journal noted that doctors were unable to review the original AI chat logs and that the bot likely suggested the substitution for another purpose, such as cleaning, the man purchased sodium bromide and used it in place of table salt for three months.

As a result, he ended up in the hospital emergency room with paranoid delusions, despite having no history of mental health problems. Convinced that his neighbor was poisoning him, the man was reluctant to even accept water from the hospital, despite reporting extreme thirst. He continued to experience increased paranoia, as well as auditory and visual hallucinations, eventually landing him in an involuntary psychiatric hold after he tried to escape during treatment.

What was happening to the man?

Doctors determined that the man was suffering from bromide toxicity, or bromism, which can result in neurological and psychiatric symptoms, as well as acne and cherry angiomas (bumps on the skin), fatigue, insomnia, subtle ataxia (clumsiness), and polydipsia (excessive thirst). Other symptoms of bromism can include nausea and vomiting, diarrhea, tremors or seizures, drowsiness, headache, weakness, weight loss, kidney damage, respiratory failure, and coma, according to iCliniq. Bromism was once far more common because of bromide salts in everyday products. In the early 20th century, it was used in over-the-counter medications, often resulting in neuropsychiatric and dermatological symptoms, according to the study's authors. Incidents of such poisoning saw a sharp decline when the Food and Drug Administration phased out the use of bromides in pharmaceuticals in the mid-1970s and late 1980s. The man was treated at the hospital for three weeks, over which time his symptoms progressively improved.

USA TODAY reached out to OpenAI, the maker of ChatGPT, for comment on Aug. 13 but had not received a response. The company provided Fox News Digital with a statement saying: "Our terms say that ChatGPT is not intended for use in the treatment of any health condition, and is not a substitute for professional advice. We have safety teams working on reducing risks and have trained our AI systems to encourage people to seek professional guidance."

AI can 'fuel the spread of misinformation,' doctors say

Doctors involved in the case study said they suspected that the patient had used ChatGPT version 3.5 or 4.0, the former of which they tested in an attempt to replicate the answers the man received. The study's authors noted they couldn't know exactly what the man was told without the original chat log, but they did receive a suggestion for bromide as a replacement for chloride in their tests. "Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do," said study authors Dr. Audrey Eichenberger, Dr. Stephen Thielke, and Dr. Adam Van Buskirk. AI carries the risk of providing information without context, according to the doctors. For example, it is unlikely that a medical expert would have mentioned sodium bromide at all if a patient asked for a salt substitute. "Thus, it is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation," the study said.

This article originally appeared on USA TODAY: Man hospitalized after taking ChatGPT diet advice, study says
