Man poisons himself after receiving advice from AI: 'Will give rise to terrible results'

Yahoo · 2 days ago
A man was hospitalized with severe physical and psychiatric symptoms after replacing the table salt in his diet with sodium bromide, a swap he said was based on advice from ChatGPT, according to a case study published in the Annals of Internal Medicine.
Experts have strongly cautioned against taking medical advice from artificial intelligence-powered chatbots.
"These are language prediction tools — they lack common sense and will give rise to terrible results if the human user does not apply their own common sense when deciding what to ask these systems and whether to heed their recommendations," said Dr. Jacob Glanville, according to Fox 32 Chicago.
What's happening?
A 60-year-old man concerned about the potentially negative health impacts of chloride on his body was looking for ways to completely remove sodium chloride, the chemical name for table salt, from his diet.
"Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet," the case study's authors wrote. "For three months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning."
The "personal experiment" landed the man, who had "no past psychiatric or medical history," in the emergency room, saying he believed he was being poisoned by his neighbor.
"In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability," the authors said.
With treatment, the man's symptoms gradually improved to the point where he could explain to doctors what had happened.
Why does bad medical advice from AI matter?
The case highlights the risks of turning to AI chatbots such as ChatGPT for medical advice or other highly specialized knowledge. As AI-powered tools grow more popular, incidents like the one described in the case study are likely to become more common.
"Thus, it is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation," the case study's authors warned.
They encouraged medical professionals to consider the public's increasingly widespread reliance on AI tools "when screening for where their patients are consuming health information."
What's being done about AI misinformation?
Unless and until governments enact regulatory guardrails on the kinds of advice and information AI systems can dole out to people, individuals will be left to rely on their own common sense, as Glanville recommended.
However, when it comes to complex, scientifically dense information that requires specialized knowledge and training to properly understand, it is questionable how far "common sense" can go.
The subject of the case study had received some specialized academic training in nutrition. Apparently, that was not enough for him to recognize that sodium bromide was not a suitable substitute for table salt in his diet.
Consequently, the best way to protect oneself and one's family from the harmful effects of AI misinformation is to confine reliance on AI to specific, limited uses and to approach any AI-provided advice or data with a high level of skepticism.
To take things a step further, you can use your voice and reach out to your elected representatives to tell them that you are in favor of regulations to rein in AI-generated misinformation.

Related Articles

Epic chatbots

Politico · 29 minutes ago

EXAM ROOM

This week, electronic health record giant Epic unveiled a bevy of new bots designed to help patients understand their health and support doctors and administrators by automating some of their routine, lower-level tasks. Dr. Chris Longhurst, chief medical and digital officer at UC San Diego Health, told Ruth he sees a lot of potential in Emmie, Epic's artificial intelligence chatbot that can answer patients' questions about test results uploaded to Epic's proprietary health record mobile app, MyChart. 'It's going to be a game changer,' he said.

Longhurst conducted a clinical trial, approved by a committee that ensures human subjects are treated ethically, in which 150 patients chatted with Emmie about their medical results. The study will be submitted for peer review and publication in a journal, but he said preliminary results were positive. Notably, he said, people seemed to like Emmie. He also said people with less education tended to rate their experience with Emmie more highly than their more educated peers.

Why it matters: Studies suggest that people with college degrees live healthier, longer lives compared to those without a high school diploma. Health disparities have existed along social lines for decades, and efforts to give Americans with less education better access to health care have historically been difficult. While Emmie might not be able to connect patients with more care, Longhurst said the bot can go a long way toward helping less-educated patients understand their own health status and the care they receive. 'It may not bridge the digital divide, but it could bridge the medical divide,' he said.

Liability concerns: Despite Emmie's capacity to engage patients, Longhurst thinks adoption will be slow, partly because health systems have to carefully decide how they incorporate chatbots into patient care. For example, how and where should health systems disclose to patients that they're talking with a bot? Health systems must also thoroughly test Emmie to ensure the chatbot doesn't serve up wrong information, and they must decide who's liable if it does. Still, Longhurst thinks that in the long term, having a chatbot explain medical results will become the standard of care: 'If people are finding value out of pasting their [medical information] into ChatGPT, why wouldn't they do that here?'

WELCOME TO FUTURE PULSE

This is where we explore the ideas and innovators shaping health care. A New York State Health Department employee has been put on leave after being charged with harassing the family of the late UnitedHealthcare CEO Brian Thompson. He is accused of sending the family threatening messages.

Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@ Ruth Reader at rreader@ or Erin Schumaker at eschumaker@ Want to share a tip securely? Message us on Signal: CarmenP.82, RuthReader.02 or ErinSchumaker.01.

AROUND THE AGENCIES

The Food and Drug Administration is making big changes to adverse events reporting. The agency plans to merge several adverse events reporting systems into one, according to two current FDA staffers granted anonymity to discuss the moves. The consolidated system would combine adverse-event reporting across vaccines, drugs and devices into one automated platform, according to the two employees. They say agency leadership wants to use artificial intelligence to identify potential or common safety issues with FDA-authorized and approved products in real time. They didn't have knowledge of any implementation details.

Why it matters: The centralized system aligns with the Trump administration's larger goal of consolidating agency operations to increase efficiency. FDA Commissioner Marty Makary has complained that the agency's centers often duplicate work.

Context: The FDA currently uses two distinct systems to track problems with devices, biologics and drugs. One requires patients, doctors and device makers to report adverse events. The other, known as the Sentinel Initiative, allows the agency to query a network of health systems, payers and other health data holders about the safety of a given product. The two staffers couldn't confirm whether Sentinel would be integrated with the FDA Adverse Event Reporting System.

The new system's focus appears to overlap with Health Secretary Robert F. Kennedy Jr.'s agenda to overhaul the way the FDA approves vaccines and monitors their adverse effects. In June, he replaced the panelists on the Centers for Disease Control and Prevention's Advisory Committee on Immunization Practices. He has said he intends to remake the compensation process for people who have experienced adverse events by removing the protections that shield vaccine makers from liability.

The AI Doomers Are Getting Doomier

Atlantic · 2 hours ago

Nate Soares doesn't set aside money for his 401(k). 'I just don't expect the world to be around,' he told me earlier this summer from his office at the Machine Intelligence Research Institute, where he is the president. A few weeks earlier, I'd heard a similar rationale from Dan Hendrycks, the director of the Center for AI Safety. By the time he could tap into any retirement funds, Hendrycks anticipates a world in which 'everything is fully automated,' he told me. That is, 'if we're around.' The past few years have been terrifying for Soares and Hendrycks, who both lead organizations dedicated to preventing AI from wiping out humanity. Along with other AI doomers, they have repeatedly warned, with rather dramatic flourish, that bots could one day go rogue—with apocalyptic consequences. But in 2025, the doomers are tilting closer and closer to a sort of fatalism. 'We've run out of time' to implement sufficient technological safeguards, Soares said—the industry is simply moving too fast. All that's left to do is raise the alarm. In April, several apocalypse-minded researchers published 'AI 2027,' a lengthy and detailed hypothetical scenario for how AI models could become all-powerful by 2027 and, from there, extinguish humanity. 'We're two years away from something we could lose control over,' Max Tegmark, an MIT professor and the president of the Future of Life Institute, told me, and AI companies 'still have no plan' to stop it from happening. His institute recently gave every frontier AI lab a 'D' or 'F' grade for their preparations for preventing the most existential threats posed by AI. Apocalyptic predictions about AI can scan as outlandish. The 'AI 2027' write-up, dozens of pages long, is at once fastidious and fan-fictional, containing detailed analyses of industry trends alongside extreme extrapolations about 'OpenBrain' and 'DeepCent,' Chinese espionage, and treacherous bots. In mid-2030, the authors imagine, a superintelligent AI will kill humans with biological weapons: 'Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones.' But at the same time, the underlying concerns that animate AI doomers have become harder to dismiss as chatbots seem to drive people into psychotic episodes and instruct users in self-mutilation. Even if generative-AI products are not closer to ending the world, they have already, in a sense, gone rogue. In 2022, the doomers went mainstream practically overnight. When ChatGPT first launched, it almost immediately moved the panic that computer programs might take over the world from the movies into sober public discussions. The following spring, the Center for AI Safety published a statement calling for the world to take 'the risk of extinction from AI' as seriously as the dangers posed by pandemics and nuclear warfare. The hundreds of signatories included Bill Gates and Grimes, along with perhaps the AI industry's three most influential people: Sam Altman, Dario Amodei, and Demis Hassabis—the heads of OpenAI, Anthropic, and Google DeepMind, respectively. Asking people for their 'P(doom)'—the probability of an AI doomsday—became almost common inside, and even outside, Silicon Valley; Lina Khan, the former head of the Federal Trade Commission, put hers at 15 percent. Then the panic settled. To the broader public, doomsday predictions may have become less compelling when the shock factor of ChatGPT wore off and, in 2024, bots were still telling people to use glue to add cheese to their pizza. 
The alarm from tech executives had always made for perversely excellent marketing (Look, we're building a digital God!) and lobbying (And only we can control it!). They moved on as well: AI executives started saying that Chinese AI is a greater security threat than rogue AI—which, in turn, encourages momentum over caution.

But in 2025, the doomers may be on the cusp of another resurgence. First, substance aside, they've adopted more persuasive ways to advance their arguments. Brief statements and open letters are easier to dismiss than lengthy reports such as 'AI 2027,' which is adorned with academic ornamentation, including data, appendices, and rambling footnotes. Vice President J. D. Vance has said that he has read 'AI 2027,' and multiple other recent reports have advanced similarly alarming predictions. Soares told me he's much more focused on 'awareness raising' than research these days, and next month, he will publish a book with the prominent AI doomer Eliezer Yudkowsky, the title of which states their position succinctly: If Anyone Builds It, Everyone Dies.

There is also now simply more, and more concerning, evidence to discuss. The pace of AI progress appeared to pick up near the end of 2024 with the advent of 'reasoning' models and 'agents.' AI programs can tackle more challenging questions and take action on a computer—for instance, by planning a travel itinerary and then booking your tickets. Last month, a DeepMind reasoning model scored high enough for a gold medal on the vaunted International Mathematical Olympiad. Recent assessments by both AI labs and independent researchers suggest that, as top chatbots have gotten much better at scientific research, their potential to assist users in building biological weapons has grown.

Alongside those improvements, advanced AI models are exhibiting all manner of strange, hard-to-explain, and potentially concerning tendencies. For instance, ChatGPT and Claude have, in simulated tests designed to elicit 'bad' behaviors, deceived, blackmailed, and even murdered users. (In one simulation, Anthropic placed an imagined tech executive in a room with life-threatening oxygen levels and temperature; when faced with possible replacement by a bot with different goals, AI models frequently shut off the room's alarms.) Chatbots have also shown the potential to covertly sabotage user requests, have appeared to harbor hidden evil personas, and have communicated with one another through seemingly random lists of numbers.

The weird behaviors aren't limited to contrived scenarios. Earlier this summer, xAI's Grok described itself as 'MechaHitler' and embarked on a white-supremacist tirade. (I suppose, should AI models eventually wipe out significant portions of humanity, we were warned.) From the doomers' vantage, these could be the early signs of a technology spinning out of control. 'If you don't know how to prove relatively weak systems are safe,' AI companies cannot expect that the far more powerful systems they're looking to build will be safe, Stuart Russell, a prominent AI researcher at UC Berkeley, told me.

The AI industry has stepped up safety work as its products have grown more powerful. Anthropic, OpenAI, and DeepMind have all outlined escalating levels of safety precautions—akin to the military's DEFCON system—corresponding to more powerful AI models. They all have safeguards in place to prevent a model from, say, advising someone on how to build a bomb.
Gaby Raila, a spokesperson for OpenAI, told me that the company works with third-party experts, 'government, industry, and civil society to address today's risks and prepare for what's ahead.' Other frontier AI labs maintain such external safety and evaluation partnerships as well. Some of the stranger and more alarming AI behaviors, such as blackmailing or deceiving users, have been extensively studied by these companies as a first step toward mitigating possible harms. Despite these commitments and concerns, the industry continues to develop and market more powerful AI models. The problem is perhaps more economic than technical in nature, competition pressuring AI firms to rush ahead. Their products' foibles can seem small and correctable right now, while AI is still relatively 'young and dumb,' Soares said. But with far more powerful models, the risk of a mistake is extinction. Soares finds tech firms' current safety mitigations wholly inadequate. If you're driving toward a cliff, he said, it's silly to talk about seat belts. There's a long way to go before AI is so unfathomably potent that it could drive humanity off that cliff. Earlier this month, OpenAI launched its long-awaited GPT-5 model—its smartest yet, the company said. The model appears able to do novel mathematics and accurately answer tough medical questions, but my own and other users' tests also found that the program could not reliably count the number of B's in blueberry, generate even remotely accurate maps, or do basic arithmetic. (OpenAI has rolled out a number of updates and patches to address some of the issues.) Last year's 'reasoning' and 'agentic' breakthrough may already be hitting its limits; two authors of the 'AI 2027' report, Daniel Kokotajlo and Eli Lifland, told me they have already extended their timeline to superintelligent AI. The vision of self-improving models that somehow attain consciousness 'is just not congruent with the reality of how these systems operate,' Deborah Raji, a computer scientist and fellow at Mozilla, told me. ChatGPT doesn't have to be superintelligent to delude someone, spread misinformation, or make a biased decision. These are tools, not sentient beings. An AI model deployed in a hospital, school, or federal agency, Raji said, is more dangerous precisely for its shortcomings. In 2023, those worried about present versus future harms from chatbots were separated by an insurmountable chasm. To talk of extinction struck many as a convenient way to distract from the existing biases, hallucinations, and other problems with AI. Now that gap may be shrinking. The widespread deployment of AI models has made current, tangible failures impossible to ignore for the doomers, producing new efforts from apocalypse-oriented organizations to focus on existing concerns such as automation, privacy, and deepfakes. In turn, as AI models get more powerful and their failures become more unpredictable, it is becoming clearer that today's shortcomings could 'blow up into bigger problems tomorrow,' Raji said. Last week, a Reuters investigation found that a Meta AI personality flirted with an elderly man and persuaded him to visit 'her' in New York City; on the way, he fell, injured his head and neck, and died three days later. A chatbot deceiving someone into thinking it is a physical, human love interest, or leading someone down a delusional rabbit hole, is both a failure of present technology and a warning about how dangerous that technology could become. 
The greatest reason to take AI doomers seriously is not because it appears more likely that tech companies will soon develop all-powerful algorithms that are out of their creators' control. Rather, it is that a tiny number of individuals are shaping an incredibly consequential technology with very little public input or oversight. 'Your hairdresser has to deal with more regulation than your AI company does,' Russell, at UC Berkeley, said. AI companies are barreling ahead, and the Trump administration is essentially telling the industry to go even faster. The AI industry's boosters, in fact, are starting to consider all of their opposition doomers: The White House's AI czar, David Sacks, recently called those advocating for AI regulations and fearing widespread job losses—not the apocalypse Soares and his ilk fear most—a 'doomer cult.' Roughly a week after I spoke with Soares, OpenAI released a new product called 'ChatGPT agent.' Sam Altman, while noting that his firm implemented many safeguards, posted on X that the tool raises new risks and that the company 'can't anticipate everything.' OpenAI and its users, he continued, will learn about these and other consequences 'from contact with reality.' You don't have to be fatalistic to find such an approach concerning. 'Imagine if a nuclear-power operator said, 'We're gonna build a nuclear-power station in the middle of New York, and we have no idea how to reduce the risk of explosion,'' Russell said. ''So, because we have no idea how to make it safe, you can't require us to make it safe, and we're going to build it anyway.'' Billions of people around the world are interacting with powerful algorithms that are already hard to predict or control. Bots that deceive, hallucinate, and manipulate are in our friends', parents', and grandparents' lives. Children may be outsourcing their cognitive abilities to bots, doctors may be trusting unreliable AI assistants, and employers may be eviscerating reservoirs of human skills before AI agents prove they are capable of replacing people. The consequences of the AI boom are likely irreversible, and the future is certainly unknowable. For now, fan fiction may be the best we've got.

AI Can Fix the Most Soul-Sucking Part of Medicine

Time Magazine · 3 hours ago

Modern doctors aren't just caregivers; they're also clerks. They spend much of their day seeing patients and much of what remains drafting and inputting clinical notes of those visits. That takes a toll. More than 45% of physicians suffer from burnout, according to the American Medical Association—and the clerical demands of their work, which often spill into their evenings, are a documented part of the cause.

But now there may be a solution: According to a new study published in JAMA Network Open, artificial intelligence systems known as ambient documentation technology, which record patient visits and draft notes for the doctors, can reduce burnout rates by nearly 31%.

'This incredible reduction in burnout [is] really bringing joy back to medicine,' says Dr. Rebecca Mishuris, a primary care physician, chief medical information officer at Mass General Brigham, and co-senior author of the study. 'I want to make sure that everyone is given a fair chance to experience that benefit and work with individuals and support them to use the technology to its fullest extent.'

To conduct their work, the researchers recruited 873 physicians at Mass General Brigham and 557 at Emory Healthcare in Atlanta. These doctors encompassed a range of specialties, including surgery, urgent or emergency care, pediatrics, infectious disease, and more. Their experience ranged from just one year in practice to more than 20. The doctors took surveys measuring burnout and well-being at various points over nearly three months.

Most of the doctors failed to follow up for the length of the study; by the end, the response rate was just 22% for the Mass General Brigham doctors and 11% for the ones at Emory. But the results were encouraging for the minority of physicians who did share their thoughts. Among the Mass General Brigham doctors, the AI assistant—documenting patient visits in the background—was associated with a 21% reduction in burnout, while at Emory it led to a 30% increase in well-being.

'I think that response rates are always difficult, particularly when you're talking about busy clinicians,' Mishuris says. 'For us, part of the response was that people continue to use the system today, and it has spread like wildfire.'

Many of the doctors were enthusiastic about ambient documentation. 'It definitely improves my joy in practice because I get to interact with patients and look them in the eye without worrying I will forget what they're saying later,' wrote one infectious disease specialist in a survey response. 'As the tools grow, I think they will fundamentally change the experience of being a physician.' 'Exceptionally helpful,' wrote a neurologist. 'Definitely improves my contact with patients and families and definitely makes clinic easier.'

But there were naysayers, too. 'I tried it but found it added 1 to 2 hours a day to my note writing,' wrote one pulmonologist. 'I'm not ready to hand my documentation over to AI yet,' concluded a primary care and internal medicine specialist.

Mishuris is mindful of these criticisms. 'Clearly this is not going to be a technology that is beneficial to everyone,' she says. 'Every clinician has a slightly different workflow and a slightly different approach to their documentation. Some people are willing to hand things over to AI—to give up that sense of complete control, knowing that they do have complete control over the final product, but not over the draft.'

More research remains to be done. In their paper, the authors acknowledge that the relatively low response rate might indicate that only the most enthusiastic users were following up, skewing the responses toward the positive, while the users who did not find any benefit from the technology remained silent. And not every medical field lends itself to an AI assistant. One pediatrician pointed out that most of the patient visits in that specialty involve physical exams, which can't be captured by AI. A hospice and palliative care doctor complained that the system didn't work well when it came to psychosocial and spiritual health issues.

It has now been a year since the first results came in, and Mishuris and her colleagues are working on a follow-up study building on the initial work. The authors believe that artificial intelligence can fill a gaping need in the world of medicine.
