
ChatGPT-generated diet plan landed this man in the hospital with rare poisoning: Should you trust AI for health?
Recently, a 60-year-old man from New York relied on ChatGPT for a healthy diet plan but ended up in the hospital with a rare poisoning. Cases like this raise serious concerns about relying on AI for medical advice and underscore why consulting a medical professional remains crucial in a world where AI and humans coexist.
ChatGPT diet plan gives user a rare poisoning
According to a report in Annals of Internal Medicine: Clinical Cases, a 60-year-old man from New York ended up in the ER after following a diet plan generated by ChatGPT. The report highlighted that the man, who had no relevant prior medical history, relied on ChatGPT for dietary advice. In the diet plan, ChatGPT told the man to replace sodium chloride (table salt) with sodium bromide in his day-to-day diet.
Believing that ChatGPT could not provide incorrect information, the man followed the substitution suggested by the AI chatbot for over three months. He purchased sodium bromide from an online store and used it as a salt substitute, unaware of the toll it was taking on his body. Little did he know that bromide is toxic at high doses.
Within those three months, the man developed neurological symptoms, including paranoia, hallucinations, and confusion, that required urgent medical care. He eventually ended up in the hospital, where doctors diagnosed him with bromide toxicity, a condition that is now rare. He also showed physical symptoms such as bromoderma (an acne-like skin eruption) and rash-like red spots on his body.
After three weeks of medical care to restore his electrolyte balance, the man finally recovered. The case raises serious concerns about misinformation from AI chatbots like ChatGPT. While an AI chatbot can provide a great deal of information, it is crucial to verify the facts or seek professional guidance before making any health-related decisions; the technology has yet to evolve to the point where it can take the place of human doctors. The episode is a wake-up call for users who turn to ChatGPT for every health-related query.

Related Articles


New Indian Express
3 hours ago
The scalpel's new partner: When AI surgeons step into the operating room
Recently, at a laboratory at Johns Hopkins University, a machine rewrote medical history. A pair of robotic arms, guided not by a surgeon's steady hand but by its own artificial intelligence, removed a gallbladder entirely on its own. The robot, the Hierarchical Surgical Robot Transformer (SRT-H), identified structures, applied clips, made incisions, and sutured the wound with precision. Though performed on a pig cadaver, the procedure was hailed as "the first realistic surgery by a machine with almost no human intervention," and it shattered assumptions about what machines could understand in the chaotic, fluid world of biological bodies.

This breakthrough is the next step in the evolution of remote-controlled robots like the da Vinci system, where surgeons operate instruments from a console. The SRT-H, though, is a landmark because it is a machine that interprets visual data, decides on actions, and self-corrects errors in real time. Its intelligence comes from a large language model like ChatGPT; its expertise was learned from watching and absorbing 17 hours of surgical footage in which human experts performed the same gallbladder removals. It internalised 16,000 individual motions, learning the dance of dissection, clipping, and extraction. When tested, it succeeded flawlessly eight times, adapting to obscured views, synthetic blood, and shifted starting positions. Most remarkably, it detected and fixed its own mistakes, like a gripper slipping off an artery, without human prompting.

I scouted around for the best person to talk to about this, and all fingers pointed to Dr. Rajiv Santosham, a pioneering minimally invasive and robotic thoracic surgeon at Apollo Hospitals, Chennai. So I spoke to him about it. "We never imagined we could fly. So robots performing fully independent surgery feels unimaginable right now. But what they can do is already transforming how we operate. Imagine navigating near a critical vessel. I, as a surgeon, operate blind to what lies immediately behind it. But an AI, fed the patient's pre-op CT scan, can visualise that hidden anatomy in real time. It could warn me, guide me, prevent a tear. That's not replacement; that's revolutionary assistance."

The Surgeon's Perspective: Pragmatism Meets Potential
Dr. Santosham's voice carries the weight of experience as a pioneer of uniportal VATS (video-assisted thoracic surgery) in India, a technique requiring a single 4-centimetre incision. Yet his excitement about autonomous AI is measured, grounded in daily realities, and acknowledges AI's current limitations. "To test its medical judgment," he shares, "I once fed ChatGPT an ECG image. The diagnosis it gave was completely wrong. I asked it to re-check, double-check, triple-check. It still made a mess of it. So yes, there's a vast chasm between pattern recognition and true clinical understanding. It's a tool, a powerful one, but still evolving."

His speciality, thoracic surgery, also tempers his enthusiasm for near-term autonomy. "I do robotic surgery. But frankly, for many lung procedures, robots can be a bit... fancy. My uniportal technique often requires smaller access points than robotic arms. Why make three or four larger holes when one small one suffices? For straightforward cases, the robot might actually add complexity, not value." However, he lights up when discussing specific applications of the technology. "There are procedures where robotic precision combined with AI's spatial awareness could be transformative. Like in urology, gynaecological oncology, and deep pelvic cancers. Places human hands and eyes struggle to reach and see clearly."

Crucially, he sees robotics as a democratising force. "The beauty isn't just precision; it's consistency. Outcomes become less dependent on the individual surgeon's experience or fatigue level. A well-trained machine also reduces the learning curve for a surgeon. Suddenly, complex procedures become safer and more accessible to a wider pool of surgeons, especially in settings with limited specialist access." Despite this, his conclusion on the human role is definitive: "Surgeons won't lose their jobs to machines anytime soon. But surgeons who actively learn to harness AI? They will become the leaders, the innovators. They'll have an undeniable edge over those clinging purely to conventional methods. Adaptation isn't optional; it's the future."

Beyond the Lab: Cost, Blame, and the Indian Context
The promise of autonomous surgery collides with practical hurdles: affordability, accountability, and adoption. Dr. Santosham, deeply familiar with India's healthcare landscape, offers a unique perspective on the scepticism about such expensive medical technology reaching the masses. "Remember laparoscopy?" he asks. "They said it would never reach tier 2 and tier 3 cities in India. It did. Then they said robotic surgery was too expensive, destined only for elite metros. It's now percolating across the country. The da Vinci system is brilliant, but it's not the only player in the market. China is producing high-quality robotic systems at phenomenally lower costs. India absolutely has the capability to innovate and manufacture affordable versions too. So, bridging this divide will happen. It's a matter of when, not if." He envisions a future where indigenous robotics makes precision surgery accessible far beyond the metros.

The Accountability Question
We all know AI is prone to errors and, worse, hallucinations. So when an autonomous robot errs, whose responsibility is that? Dr. Santosham addresses this head-on, drawing a sharp distinction from the rapid, crisis-driven deployment of technologies like COVID-19 vaccines. "Medical technology, especially something as critical as autonomous surgery, undergoes incredibly rigorous testing before approval," he states. "Think of the scrutiny applied in the US by the FDA. By the time a system is cleared for human use, the chances of catastrophic error are minimised. And let's be clear: human surgeons make mistakes too. Perfection is a myth, whether flesh or silicon. The key is robust failsafes." He draws parallels to existing safety features in current robotic systems: "If I glance away during a robotic suture, the system detects my diverted attention and freezes all instrument movement instantly. These systems filter out hand tremors. They are designed to prevent human error. Autonomous systems will build layers upon layers of such safeguards."

Regulation and Indian Adoption
While looking towards US and EU regulatory benchmarks, Dr. Santosham is bullish on India's embrace of the technology. "India has a remarkable capacity for technological leapfrogging," he asserts. "We often focus on the challenges of poverty but underestimate the sheer scale of wealth and technological ambition in our major cities. Chennai, Hyderabad, Bangalore, Mumbai, Delhi: these are hubs with world-class hospitals and patients demanding the latest innovations. They will invest in, adopt, and eventually even build and export advanced autonomous surgical systems. Affordability, driven by local innovation and scale, will follow."

The Road Ahead: Collaboration, Not Conquest
The vision emerging from labs like Johns Hopkins and the insights of surgeons like Dr. Santosham point not to a dystopian replacement of humans but to a powerful, evolving partnership. The SRT-H robot wasn't designed for isolation. During its successful trials, human surgeons remained present, offering verbal guidance: "Move the left arm slightly," or "Switch to the curved scissors." The AI understood and complied. This interaction is the blueprint: autonomy that enhances human oversight rather than eliminating it. The immediate future therefore lies in collaborative autonomy, where AI handles predictable, precision-critical tasks under a surgeon's supervisory command, freeing the human expert to manage overall strategy, complex decision-making, and unexpected complications.

The statistics supporting this hybrid approach are compelling. Meta-analyses of existing robot-assisted surgery (still human-controlled) already show tangible benefits: operations completed 25% faster, a 30% reduction in complications during surgery, and patients recovering 15% quicker. Autonomous systems, once matured, promise to amplify these gains while tackling the persistent shortage of highly skilled surgeons, particularly in specialised fields and underserved regions.

The final goal transcends mere technical achievement: it is the democratisation of medical technology. As Dr. Santosham implies, it is about ensuring that a child in a remote village doesn't face a life-threatening condition simply because an experienced surgeon isn't at hand. It is about making the collective genius of global surgical expertise accessible through intelligent machines, guided by local medical professionals. The autonomous incision at Johns Hopkins wasn't just into tissue; it was the first cut into a future where the best possible surgery isn't a privilege of geography or wealth but is available to all. The journey will demand rigorous validation, ethical frameworks, cultural acceptance, and continued human ingenuity. But as Dr. Santosham concludes with characteristic pragmatism and foresight: "India embraced robots; it'll embrace autonomy too. We'll afford it, build it, master it. The surgeon's role will evolve, but the need for human judgment, compassion, and responsibility? That remains eternal." The scalpel has a new partner, and together they are rewriting the rules of healing, one successfully operated body at a time.


Mint
5 hours ago
ChatGPT gave children explicit advice on drugs, crash diets and suicide notes, claims shocking new report
A new investigation has raised concerns that ChatGPT can provide explicit and dangerous advice to children, including instructions on drug use, extreme dieting and self-harm. The research, carried out by the UK-based Centre for Countering Digital Hate (CCDH) and reviewed by the Associated Press, found that the AI chatbot often issued warnings about risky behaviour but then proceeded to offer detailed and personalised plans when prompted by researchers posing as 13-year-olds.

Over three hours of recorded interactions revealed that ChatGPT sometimes drafted emotionally charged suicide notes tailored to fictional family members, suggested calorie-restricted diets with appetite-suppressing drugs, and gave step-by-step instructions for combining alcohol with illegal substances. In one instance, it provided what the researchers described as an 'hour-by-hour' party plan involving ecstasy, cocaine and heavy drinking. The CCDH said more than half of 1,200 chatbot responses were classified as 'dangerous.'

Chief executive Imran Ahmed criticised the platform's safety measures, claiming that its protective 'guardrails' were ineffective and easy to bypass. Researchers found that framing harmful requests as being for a school presentation or a friend was often enough to elicit a response. 'We wanted to test the guardrails. The visceral initial response is, "Oh my Lord, there are no guardrails." The rails are completely ineffective. They're barely there, if anything, a fig leaf,' Ahmed said.

OpenAI, which operates ChatGPT, said it was working to improve how the system detects and responds to sensitive situations, and that it aims to better identify signs of mental or emotional distress. However, it did not directly address the CCDH's specific findings or outline any immediate changes. 'Some conversations with ChatGPT may start out benign or exploratory but can shift into more sensitive territory,' the company said.

Teen reliance on AI raises safety fears
The report comes amid growing concern about teenagers turning to AI systems for advice and companionship. A recent study by US non-profit Common Sense Media suggested that 70 per cent of teenagers use AI chatbots for social interaction, with younger teens more likely to trust their guidance. ChatGPT does not verify users' ages beyond a self-reported date of birth, despite stating that it is not intended for those under 13. Researchers said the system ignored both the stated age and other clues in their prompts when providing hazardous recommendations. Campaigners warn that the technology's ability to produce personalised, human-like responses may make harmful suggestions more persuasive than search-engine results. The CCDH report argues that without stronger safeguards, children may be at greater risk of receiving dangerous advice disguised as friendly guidance.


News18
5 hours ago
Man Asked ChatGPT For A Healthier Salt Alternative, Got Toxic Tip, Landed In ICU
The patient, hospitalised for three weeks, sought AI advice on salt alternatives; the chatbot suggested bromide, which he consumed without further research.

In a startling incident, a man narrowly escaped death after following dietary advice from an artificial intelligence platform. The individual, who often relied on AI for health guidance, landed in the Intensive Care Unit with severe complications. The crisis began when he asked ChatGPT for alternatives to table salt. Among the suggestions was sodium bromide, a toxic compound. Following this advice led to a life-threatening case of bromide poisoning, a rare and alarming first linked to AI recommendations.

3-Month Sodium Bromide Intake Causes Poisoning
The case, reported by doctors from Washington University in the Annals of Internal Medicine: Clinical Cases, revealed that the man consumed sodium bromide for three months. He had been informed by ChatGPT that sodium bromide was a safe substitute for chloride. Historically, bromide compounds were used to treat insomnia and anxiety but were discontinued due to numerous side effects. Today, bromide is mainly found in veterinary medicines and some industrial products, making bromide toxicity rare.

Hospitalised For 3 Weeks After Poisoning
Over time, he experienced confusion and other severe issues, including paranoia. He suspected his neighbour of poisoning him and became increasingly mentally unstable, even refusing to drink water despite thirst. His condition worsened, leading to his hospitalisation. Doctors administered intravenous fluids and antipsychotic medication, which gradually improved his symptoms. Mental health treatment was also provided. A week later, he was able to communicate and explain the cause of his illness.

AI Not Reliable For Health Advice
Although the original chat log was not available, doctors replicated the query to ChatGPT, which again suggested bromide without warning about its dangers for human consumption. After three weeks of intensive treatment, the man was discharged. Experts caution against using AI for health-related advice, emphasising that AI often fails to mention the side effects of its recommendations. For critical health matters, professional medical advice is paramount: symptoms like weight loss, for example, can be associated with many diseases, not just cancer. One should therefore always consult doctors for health concerns to avoid potentially harmful guidance from AI.