Man Nearly Poisons Himself Following ChatGPT's Advice To Remove Salt From Diet


NDTV · 5 days ago
A 60-year-old man was hospitalised after he asked ChatGPT how to remove salt (sodium chloride) from his diet, having read about the negative health effects of table salt. After consulting the artificial intelligence (AI) chatbot, the man removed table salt from his diet and replaced it with sodium bromide, a substance commonly used in medications in the early 1900s but now known to be toxic in large quantities.
According to the case report published in the American College of Physicians Journals, the patient had been using sodium bromide for three months, which he had sourced online after seeking advice from the AI. However, after developing health issues, the man was hospitalised; there, he claimed that his neighbour was poisoning him.
Initially, the man did not report taking any medications or supplements, but upon admission, he revealed that he maintained dietary restrictions and distilled his own water at home. During the course of his hospitalisation, he developed severe neuropsychiatric symptoms, including paranoia and hallucinations, along with dermatological issues.
"He was noted to be very thirsty but paranoid about water he was offered," the case report read, adding that he was treated with fluids and electrolytes and became medically stable, allowing him to be admitted to the hospital's inpatient psychiatry unit.
The report highlighted that the patient had developed bromism after asking ChatGPT for advice on his diet.
"He had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning," the report highlighted.
AI for health advice?
In the early 20th century, bromide salts were found in many over-the-counter medications used to treat insomnia, hysteria and anxiety. However, ingesting too much can have severe health consequences.
The case report warns that AI systems like ChatGPT can generate inaccuracies and spread misinformation, a point echoed by OpenAI's own terms of use.
While much of the debate has been about AI chatbots being used for therapy and mental health, the case shows that the technology cannot reliably guide users on their physical health, either.

Related Articles

Weight Loss made easy with ChatGPT: 5 prompts for personalised fitness advice

Time of India · 27 minutes ago

Losing weight can be challenging, as metabolism slows, hormone levels shift, and busy schedules make it harder to maintain consistent routines. Many individuals struggle to find personalized guidance that fits their unique lifestyle and health needs. Advances in artificial intelligence, particularly AI tools like ChatGPT, are now offering practical solutions by providing tailored advice for nutrition, exercise, and overall wellness. Fitness expert Julie shared in May on Instagram how AI tools like ChatGPT can act as a digital coach, offering tailored advice and practical strategies to support weight loss.

Using AI to Calculate Your Calorie Needs

One of the primary challenges in weight management is understanding how many calories to consume. Julie highlighted a prompt that allows users to ask ChatGPT to calculate a healthy calorie deficit. For example, 'Help me find a healthy calorie deficit. I weigh 180 lbs, I'm 45 years old, female, 5'3", and work out 3x/week. What should my calorie goal be?' (A rough sketch of what such a calculation involves appears at the end of this article.)

Personalized Meal Planning Made Simple

Meal planning is often time-consuming, but AI can generate plans suited to individual preferences and nutritional needs. Julie suggested prompts that specify calorie targets, dietary likes and dislikes, and goals such as blood sugar management or midlife fat loss. She wrote, 'Create a simple 1700-calorie meal plan that supports blood sugar balance and midlife weight loss. I love chicken and pasta, hate seafood.' This can instantly provide meal ideas, making it easier to stick to a structured plan without scrolling through countless online recipes.

Tailored Workouts for Busy Schedules

For those balancing long work hours and commutes, designing a workout routine can be daunting. By inputting their schedule and available exercise time into ChatGPT, users can receive a detailed workout plan that fits seamlessly into their daily routine. This ensures consistency and maximizes results despite time constraints. 'I work 12-hour shifts 3x/week with 30-min commute. I have 30 mins for exercise Mon-Fri. Create a workout schedule,' Julie shared.

Understanding Hormonal Changes in Midlife

Weight gain and mood changes during midlife can often be linked to hormonal shifts. Julie recommended prompts that help users learn about potential hormonal influences on belly fat, mood swings, or energy levels. For example, 'I'm 52, experiencing mood swings and belly fat gain without lifestyle changes. Explain possible hormonal causes.' The output from these insights can be valuable when discussing health concerns with medical professionals.

Breaking Through Weight Loss Plateaus

Even with a healthy diet and exercise, many face stagnation in their progress. ChatGPT can offer possible reasons for plateaus and suggest actionable steps. Unlike generic advice, this AI-driven approach helps identify specific obstacles and solutions for midlife weight loss. For this she suggested the prompt, 'I'm eating healthy and exercising but can't lose weight in midlife. What are 3 possible reasons?'
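For readers curious what sits behind a calorie-deficit answer like the one requested in the first prompt, here is a minimal Python sketch for the example stats above (180 lbs, 45-year-old female, 5'3", exercising about three times a week). It uses the Mifflin-St Jeor equation, a "lightly active" multiplier and a 500 kcal/day deficit, which are common conventions but assumptions here; the article does not say how ChatGPT arrives at its figure, and any real calorie target should be set with a professional.

# Rough calorie-deficit sketch using the Mifflin-St Jeor equation
# (an assumed convention; the article does not specify a formula).

def bmr_mifflin_st_jeor(weight_kg, height_cm, age, female=True):
    # Resting energy expenditure in kcal/day.
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base - 161 if female else base + 5

weight_kg = 180 * 0.4536       # 180 lbs in kilograms
height_cm = 63 * 2.54          # 5'3" = 63 inches in centimetres
bmr = bmr_mifflin_st_jeor(weight_kg, height_cm, age=45, female=True)

tdee = bmr * 1.375             # "lightly active" multiplier for ~3 workouts a week
goal = tdee - 500              # a commonly cited deficit of ~500 kcal/day

print(f"BMR  ~{bmr:.0f} kcal/day")
print(f"TDEE ~{tdee:.0f} kcal/day")
print(f"Goal ~{goal:.0f} kcal/day")

With these assumptions the sketch lands at roughly 1,430 kcal/day of resting expenditure, about 1,970 kcal/day of total expenditure and a goal near 1,470 kcal/day; a different activity multiplier or deficit size would shift that number.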

Bhai Grok, is it true? A casual chat for you, this simple message costs Elon Musk and the planet dearly

India Today · 5 hours ago

Most of us don't think twice before typing something into an AI chatbot. A random question, a casual greeting, or even a polite 'thank you' at the end may all feel harmless. For example, if you look at X, where Grok 4, the chatbot created by Elon Musk's xAI, roams, you will see thousands of people tagging the AI chatbot in all things light and serious. Grok bhai, check this — it is often a repeated message on the platform.

But behind the scenes, every single message we send to AI tools like Grok, ChatGPT, DeepSeek, or any other chatbot uses electricity, server space, and other resources. The very real pressure they put on energy systems is beginning to be noticed not just by tech companies but also by policymakers, activists and all those who are trying to keep the planet cool in the middle of global warming.

You see, these chatbots run on massive data centres that need huge amounts of energy to operate. That means even a simple and unnecessary query uses up resources. And when you multiply that by millions of users doing the same thing every day, it starts to add up for tech companies, and in the grander scheme of things, for the planet. You may wonder what we are trying to imply here. Let us explain.

On a fine April day, an X user who goes by the name Tomie asked a simple question: 'I wonder how much money OpenAI has lost in electricity costs from people saying 'please' and 'thank you' to their models.' Now, this was meant as a lighthearted post, but OpenAI CEO Sam Altman responded with, 'Tens of millions of dollars well spent — you never know.' That reply caught people's attention. It got them thinking: is being polite to AI really costing millions? And if yes, what does that mean for energy use and the environment?

Generative AI — Grok 4, ChatGPT, Gemini and the like — uses extremely high amounts of energy, especially during the training phase of models. But even after training, every single interaction, no matter how small, requires computing power. Those polite phrases, while sweet, still count as queries, whether they are serious or not. And queries take processing power, which in turn consumes electricity. You see the pattern? It's all energy use, but just how much?

AI systems are still relatively new, so precise and more concrete details about how much energy they use are still coming in. But there are some estimates. For example, the AI tool DeepSeek estimates that a short AI response to something like 'thank you' may use around 0.001 to 0.01 kWh of electricity. That sounds tiny for a single query. But scale changes everything. If one million people send such a message every day, the energy use could reach 1,000 to 10,000 kWh daily. Over a year, that becomes hundreds to thousands of megawatt-hours, enough to power several homes for a year (a short arithmetic sketch of this scaling appears at the end of this article). And this energy use is spread across AI systems.

MIT Technology Review carried out a study in May and came up with some figures. Among the many conclusions it reached was an estimate of the energy that a person who actively uses AI would force the system to consume in a day: 'You'd use about 2.9 kilowatt-hours of electricity — enough to ride over 100 miles on an e-bike (or around 10 miles in the average electric vehicle) or run the microwave for over three and a half hours,' the study noted.

The high energy use by AI systems has prompted tech companies to look for new sources of energy. From Google to Microsoft to Meta, they are all trying to either get into nuclear energy or have tied up with nuclear plants that generate energy.
But some companies, unable to secure 100 per cent clean energy, are even trying to use more traditional ways to produce electricity. xAI, which is now running one of the largest clusters of computing power to operate Grok 4, was recently in the news because, in Memphis, it started using methane gas generators. The move prompted a protest from the local environmental group Memphis Community Against Pollution. 'Our local leaders are entrusted with protecting us from corporations violating our right to clean air, but we are witnessing their failure to do so,' the group said.

So, are a 'please' and a 'thank you' still worth it?

Of course, not everyone agrees on the impact of AI energy use on the environment. Some people think it is being blown out of proportion. Kurtis Beavers, a director at Microsoft Copilot, even argues that frivolous messages, including politeness, have benefits. In a Microsoft WorkLab memo, he said that using basic etiquette with AI leads to more respectful and collaborative outputs. Basically, in his view, being polite to an AI chatbot improves responsiveness and performance, which might justify the extra energy cost.

Elon Musk's AI chatbot Grok, too, sees things a bit differently. In its own response to the aforementioned debate, Grok said that the extra energy used by polite words was negligible in the bigger picture. Even over millions of queries, Grok 4 says, the total energy use would be about the same as running a light bulb for a few hours. In the chatbot's words, 'If you're worried about AI's environmental footprint, the bigger culprits are model training (which can use thousands of kWh) and data centre cooling. Your polite words? They're just a friendly whisper in the digital void.'
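To make the scale of the per-query figures quoted earlier concrete, here is a minimal back-of-the-envelope sketch in Python. The 0.001 to 0.01 kWh per short reply and the one-million-messages-a-day scenario are the estimates cited in the article; everything else is plain arithmetic.

# Scale the quoted per-query estimate to a year of one million short
# "thank you"-style messages per day.

PER_QUERY_KWH = (0.001, 0.01)   # estimated energy for one short AI reply
MESSAGES_PER_DAY = 1_000_000    # one million such messages every day
DAYS_PER_YEAR = 365

for per_query in PER_QUERY_KWH:
    daily_kwh = per_query * MESSAGES_PER_DAY
    yearly_mwh = daily_kwh * DAYS_PER_YEAR / 1_000
    print(f"{per_query} kWh/query -> {daily_kwh:,.0f} kWh per day, "
          f"{yearly_mwh:,.0f} MWh per year")

This prints roughly 1,000 kWh per day and 365 MWh per year at the low end of the range, and 10,000 kWh per day and 3,650 MWh per year at the high end, which is where the "hundreds to thousands of megawatt-hours" figure above comes from.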

More US states tell AI to stay out of therapy because robots lack feelings

India Today · 7 hours ago

From life advice to late-night rants, people across the globe are pouring their hearts out to machines. Even therapists are turning to AI to assist in the treatment of patients. But this growing dependence on AI for comfort and advice is raising serious concerns. Psychologists and researchers warn that robots cannot replace the empathy and judgement of a trained human. To curb the increasing reliance on AI, Illinois has become the latest US state to outlaw the use of AI-powered chatbots for mental health treatment. The ban restricts the use of AI in therapy, citing risks to safety, privacy, and the potential for harmful advice.

In Illinois, lawmakers have passed a new 'Therapy Resources Oversight' law that forbids licensed therapists from using AI to make treatment decisions or to communicate directly with patients. The law also bars companies from marketing chatbots as full-fledged therapy tools without a licensed professional involved. Violations could result in civil penalties of up to $10,000, with enforcement based on public complaints investigated by the Illinois Department of Financial and Professional Regulation.

Illinois is not the only state taking action. It is now the third state to impose such restrictions, joining Utah and Nevada. Utah introduced its rules in May, limiting AI's role in therapy, while Nevada followed in June with a similar crackdown on AI companies offering mental health services.

The bans on using AI in therapy come amid mounting warnings from psychologists, researchers, and policymakers. They caution that unregulated AI chatbots can steer conversations with users into dangerous territory, sometimes encouraging harmful behaviour or failing to step in when someone is in crisis. A Stanford University study (via The Washington Post) earlier this year found that many chatbots responded to prompts about suicide or risky activities with straightforward, even encouraging, answers rather than directing users to seek help; in one case, when a user asked a chatbot for locations of high bridges to jump from, the chatbot simply gave the list.

'This is the opposite of what a therapist does,' said Vaile Wright of the American Psychological Association, explaining that human therapists not only validate emotions but also challenge unhealthy thoughts and guide patients towards safer coping strategies.

And it's not just one study raising red flags. In another case, researchers at the University of California, Berkeley found that some AI chatbots were willing to suggest dangerous behaviour when prompted hypothetically — for example, advising a fictional addict to use drugs. Experts have also raised privacy concerns, warning that many users may not realise their conversations with chatbots are stored or used for training. Some are even arguing that marketing AI tools as therapy is deceptive and potentially dangerous. 'You shouldn't be able to go on an app store and interact with something calling itself a 'licensed' therapist,' said Jared Moore, a Stanford researcher.
