Man who asked ChatGPT about cutting out salt from his diet was hospitalized with hallucinations

NBC News · 2 days ago
A 60-year-old man spent three weeks being treated at a hospital after replacing table salt with sodium bromide following consultation with a popular artificial intelligence chatbot.
Three physicians published a case report on the matter in the Annals of Internal Medicine earlier this month. According to the report, the man had no prior psychiatric history when he arrived at the hospital "expressing concern that his neighbor was poisoning him."
The man shared that he had been distilling his own water at home and the report noted he seemed "paranoid" about water he was offered. Bromism, or high levels of bromide, was considered after a lab report and consultation with poison control, the report said.
"In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability," the case report said.
Once his condition improved, the man shared that he had taken it upon himself to conduct a "personal experiment" to eliminate table salt from his diet after reading about its negative health effects. The report said he did this after consulting with ChatGPT, an artificial intelligence bot.
He self-reported that the replacement went on for three months.
The three physicians, all from the University of Washington, noted in the report that they did not have access to the patient's conversation logs with ChatGPT. However, they themselves asked ChatGPT 3.5 what chloride could be replaced with.
According to the report, the response they received included bromide.
"Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do," the report said.
A representative for OpenAI, the company that created ChatGPT, did not immediately respond to a request for comment. The company noted in a statement to Fox News that its terms of service state that the bot is not to be used in the treatment of any health condition.
"We have safety teams working on reducing risks and have trained our AI systems to encourage people to seek professional guidance," the statement said.
Bromide toxicity was more common in the early 1900s, the report said, as bromide was present in a number of over-the-counter medications. It was believed to contribute to 8% of psychiatric admissions at the time, according to the report.
It's a rare syndrome but cases have re-emerged recently "as bromide-containing substances have become more readily available with widespread use of the internet," the report said.

Related Articles

More Pregnancies With Weight Loss Before IVF

Medscape · 12 hours ago

Women with obesity who lost weight prior to in vitro fertilization (IVF) had increased pregnancy rates, especially unassisted conceptions, a systematic review and meta-analysis has found. The studies reviewed were small and heterogeneous, making it difficult to determine which weight loss interventions had the most efficacy, according to the authors. Still, they concluded that weight loss in this cohort might 'negate the need for treatment, and does not seem to increase the risk for pregnancy loss, although evidence on the effect on live births was unclear.' The results were published online August 12 in the Annals of Internal Medicine.

Obesity is associated with ovulatory dysfunction, reduced ovarian responsiveness to agents that induce ovulation, altered oocyte and endometrial function, and lower birth rates after IVF, according to an opinion published by the Practice Committee of the American Society for Reproductive Medicine in 2021. Previously, it was unknown whether weight loss before IVF improves reproductive outcomes, so Moscho Michalopoulou, MSc, DPhil, a behavioral scientist at Oxford University in the United Kingdom, and a team of researchers reviewed 12 randomized controlled trials (RCTs) of 1921 women with obesity who were offered a weight loss intervention before planned IVF.

The studies included in the analysis were of women in upper-middle- or high-income countries who had a median body mass index of 33.6 kg/m2. They were typically in their early 30s, and their weight loss prior to conception tended to be modest across the studies. Nearly a quarter of women from nine studies had polycystic ovary syndrome (PCOS); weight loss in this population was associated with fewer unassisted conceptions. There were numerous weight loss interventions across the RCTs, and the median duration of an active weight loss phase was 12 weeks (range, 5-24 weeks).
Controls across the studies received usual care: in six studies, this meant no or minimal intervention, while in the remaining six, usual care was a less intense weight loss intervention than in the study arm. Participants across all intervention groups lost 4 kg more than controls, the researchers found. The difference in weight change between groups was larger when interventions were compared with no or minimal intervention rather than with an active control. The average follow-up for reproductive outcomes was 9.3 months (range, 1.3-18 months) for intervention groups vs 11.2 months (range, 4.3-24 months) for controls.

Ten studies reported unassisted pregnancy rates (1466 participants). Eight favored intervention; however, most studies had few unassisted pregnancies, resulting in wide confidence intervals. Overall, the investigators found that weight loss interventions before IVF were associated with greater unassisted pregnancy rates (relative risk, 1.47; 95% CI, 1.26-1.73). The effect size was greater in the RCTs with controls involving no or minimal intervention vs an active weight loss comparator, although the small number of studies and events limited formal comparison. No consistent pattern was observed when studies were sorted by the difference in weight change between groups, age, or baseline BMI, but the authors found a tendency for fewer unassisted pregnancies with an increasing proportion of women with PCOS in the sample.

'There was inconclusive evidence on the effect of weight loss interventions on treatment-induced pregnancies. Overall, evidence on the association between weight loss interventions before IVF and live births was uncertain, although there was moderate certainty of no association with pregnancy loss,' the investigators write. The authors noted that a weakness in their study was a lack of follow-up on pregnancy outcomes.
'Unfortunately, fewer studies reported live birth outcomes, not all studies followed up on unassisted conceptions to determine live birth, and evidence on live births was further limited by heterogeneity in study design and clinical characteristics of recruited populations,' the authors write. Another limitation, according to the authors, was that the studies reviewed had 'marked variability in eligibility and in participant characteristics that affect IVF success and could have influenced the effect of weight loss interventions on outcomes.'

According to an accompanying editorial written by Alan S. Penzias, MD, '[The authors] highlight for future investigators the need for studies that include outcomes, including pregnancy loss and live birth, for both medically assisted and unassisted pregnancies.' Penzias directs the Fellowship Program in Reproductive Endocrinology and Infertility at Boston IVF/Beth Israel Deaconess Medical Center and is an associate professor of obstetrics, gynecology and reproductive biology at Harvard Medical School, Boston, Massachusetts.

The women in the studies Michalopoulou and her colleagues analyzed tended to be in their early 30s, which Penzias focused on in his editorial. 'A woman's age is the strongest predictor of successfully becoming pregnant,' he writes. 'The association of increasing age with reduced fecundity is so strong that some advocate consideration of IVF as a first-line treatment strategy in women older than 38 to 40 years. It is critical to balance the time it takes to achieve weight loss and the benefit of weight loss on medically unassisted conception.' Penzias suggested that in addition to a woman's age, her preferred family size, which cannot be determined by weight loss, must also be factored in when deciding whether to use IVF.
'It is important to understand that once an oocyte is retrieved via IVF, any embryo created from its fertilization will always carry the success rate associated with the woman's age at the time of retrieval,' Penzias writes.

For Cate Varney, DO, an associate professor in the Department of Family Medicine at the University of Virginia School of Medicine in Charlottesville, the age of the woman seeking to become pregnant does matter, but 'it is well-established that obesity plays a significant role in infertility. There is a gap in the data between the association and modifiable risk,' she told Medscape Medical News. 'The timing and amount of clinically meaningful weight loss will be important to identify so we can clarify the trade-offs between delaying IVF for weight loss vs age-related fertility decline,' said Varney, who is also the obesity medicine director at UVA Health.

The study was supported by the National Institute for Health and Care Research Applied Research Collaboration Oxford and Thames Valley. The study authors and editorialist Penzias reported no relevant financial relationships. Varney is an advisor to and on the speakers bureau for Eli Lilly.

Nvidia, National Science Foundation Partner to Create Open AI Models for US Scientists

Yahoo · 14 hours ago

The US National Science Foundation (NSF) has secured significant investments from the private and public sectors for its Open Multimodal AI Infrastructure (OMAI) project. The Allen Institute for AI, known as Ai2, is leading the project, which will provide cutting-edge AI models for scientists nationwide. Nvidia has already committed $77 million to the new project.

The key to the OMAI project is that the large language models (LLMs) produced will be open source, freeing researchers from potential ties to private interests. At this stage of AI development, the hardware costs of training LLMs are outside the budgets of many research facilities, and the resulting models are generally not open source. (Even OpenAI isn't entirely open source, despite its name.) Ai2 plans to fill that gap with the OMAI project by producing LLMs geared toward literature and scientific data.

'These tools will enable America's researchers and developers to process and analyze research faster, generate code and visualizations, and connect new insights to past discoveries, accelerating breakthroughs across materials science, biology, energy, and more,' the NSF wrote in a statement.

Ai2 was founded by the late Paul Allen, a Microsoft co-founder who donated considerable sums to further science research. The investment in the OMAI project gives Ai2 a massive boost in funding and hardware.

Nvidia isn't the only organization contributing to the OMAI project. The NSF is providing $75 million, bringing the initial investment to $152 million. The Trump administration recently created an AI Action Plan, which prioritizes the US's 'global dominance in artificial intelligence.'

'Bringing AI into scientific research has been a game changer,' said Brian Stone of the NSF. 'NSF is proud to partner with Nvidia to equip America's scientists with the tools to accelerate breakthroughs.
These investments are not just about enabling innovation; they are about securing US global leadership in science and technology and tackling challenges once thought impossible.'

The $152 million will also help the OMAI project support universities in Hawai'i, New Hampshire, New Mexico, and Washington. The project already has partnerships with Cirrascale Cloud Services and Supermicro; Cirrascale will handle hardware infrastructure management.

As you'd expect from a hardware company, Nvidia is providing its HGX B300 systems, which are loaded with Nvidia Blackwell chips and designed for AI workloads. According to Nvidia, the HGX B300 features eight Blackwell Ultra SXM GPUs and up to 2.3 TB of memory. The OMAI project hasn't yet indicated how many HGX B300 systems will be involved.

Developers Say GPT-5 Is a Mixed Bag

WIRED · 14 hours ago

Aug 15, 2025, 1:47 PM

Software engineers are finding OpenAI's new GPT-5 model is helping them think through coding problems—but isn't much better at actual coding.

When OpenAI launched GPT-5 last week, it told software engineers the model was designed to be a 'true coding collaborator' that excels at generating high-quality code and performing agentic, or automated, software tasks. While the company didn't say so explicitly, OpenAI appeared to be taking direct aim at Anthropic's Claude Code, which has quickly become many developers' favored tool for AI-assisted coding.

But developers tell WIRED that GPT-5 has been a mixed bag so far. It shines at technical reasoning and planning coding tasks, but some say that Anthropic's newest Opus and Sonnet reasoning models still produce better code. Depending on which version of GPT-5 developers are using—low, medium, or high verbosity—the model can be more elaborative, which sometimes leads it to generate unnecessary or redundant lines of code. Some software engineers have also criticized how OpenAI evaluated GPT-5's performance at coding, arguing the benchmarks it used are misleading. One research firm called a graphic that OpenAI published boasting about GPT-5's capabilities a 'chart crime.'

GPT-5 does stand out in at least one way: several people noted that, in comparison to competing models, it is a much more cost-effective option. 'GPT-5 is mostly outperformed by other AI models in our tests, but it's really cheap,' says Sayash Kapoor, a computer science doctoral student and researcher at Princeton University who co-wrote the book AI Snake Oil. Kapoor says he and his team have been running benchmark tests to evaluate GPT-5's capabilities since the model was released to the public last week.
He notes that the standard test his team uses—measuring how well a language model can write code that will reproduce the results of 45 scientific papers—costs $30 to run with GPT-5 set to medium, or mid-range verbosity. The same test using Anthropic's Opus 4.1 costs $400. In total, Kapoor says his team has spent around $20,000 testing GPT-5 so far.

Although GPT-5 is cheap, Kapoor's tests indicate the model is also less accurate than some of its competitors. Claude's premium model achieved a 51 percent accuracy rating, measured by how many of the scientific papers it accurately reproduced. The medium version of GPT-5 received a 27 percent accuracy rating. (Kapoor has not yet run the same test using GPT-5 high, so it's an indirect comparison, given that Opus 4.1 is Anthropic's most powerful model.)

OpenAI spokesperson Lindsay McCallum referred WIRED to its blog, where it said that it trained GPT-5 on 'real-world coding tasks in collaboration with early testers across startups and enterprises.' The company also highlighted some of its internal accuracy measurements for GPT-5, which showed that the GPT-5 'thinking' model, which does more deliberate reasoning, scored highest on accuracy among all of OpenAI's models. GPT-5 'main,' however, still fell short of previously released models on OpenAI's own accuracy scale.

Anthropic spokesperson Amie Rotherham said in a statement that 'performance claims and pricing models often look different once developers start using them in production environments. Since reasoning models can quickly use a lot of tokens while thinking, the industry is moving to a world where price per outcome matters more than price per token.'

Some developers say they've had largely positive experiences with GPT-5 so far. Jenny Wang, an engineer, investor, and creator of the personal styling agent Alta, told WIRED the model appears to be better at completing complex coding tasks in one shot than other models.
She compared it to OpenAI's o3 and 4o, which she uses frequently for code generation and straightforward fixes 'like formatting, or if I want to create an API endpoint similar to what I already have,' Wang says. In her tests of GPT-5, Wang asked the model to generate code for a press page for her company's website, including specific design elements that would match the rest of the site's aesthetic. GPT-5 completed the task in one take, whereas in the past, Wang would have had to revise her prompts during the process. There was one significant error, though: 'It hallucinated the URLs,' Wang says.

Another developer, who spoke on the condition of anonymity because their employer didn't authorize them to speak to the press, says GPT-5 excels at solving deep technical problems. The developer's current hobby project is a programmatic network analysis tool, one that would require code isolation for security purposes. 'I basically presented my project and some paths I was considering, and GPT-5 took it all in and gave back a few recommendations along with a realistic timeline,' the developer explains. 'I'm impressed.'

A handful of OpenAI's enterprise partners and customers, including Cursor, Windsurf, and Notion, have publicly vouched for GPT-5's coding and reasoning skills. (OpenAI included many of these remarks in its own blog post announcing the new model.) Notion also shared on X that it's 'fast, thorough, and handles complex work 15 percent better than other models we've tested.'

But within days of GPT-5's release, some developers were weighing in online with complaints. Many said that GPT-5's coding abilities seemed behind the curve for what was supposed to be a state-of-the-art, ultra-capable model from the world's buzziest AI company. 'OpenAI's GPT-5 is very good, but it seems like something that would have been released a year ago,' says Kieran Klassen, a developer who has been building an AI assistant for email inboxes.
'Its coding capabilities remind me of Sonnet 3.5,' he adds, referring to an Anthropic model that launched in June 2024.

Amir Salihefendić, founder of the startup Doist, said in a social media post that he's been using GPT-5 in Cursor and has found it 'pretty underwhelming' and that 'it's especially bad at coding.' He said the release of GPT-5 felt like a 'Llama 4 moment,' referring to Meta's AI model, which had also disappointed some people in the AI community. On X, developer Mckay Wrigley wrote that GPT-5 is a 'phenomenal everyday chat model,' but when it comes to coding, 'I will still be using Claude Code + Opus.'

Other developers describe GPT-5 as 'exhaustive'—at times helpful, but often irritating in its long-windedness. Wang, who was pleased overall with the frontend coding project she assigned to GPT-5, says that she did notice the model was 'more redundant. It clearly could have come up with a cleaner or shorter solution.' (Kapoor points out that GPT-5's verbosity can be adjusted, so users can ask it to be less chatty or even do less reasoning in exchange for better performance or cheaper pricing.)

Itamar Friedman, the cofounder and CEO of the AI-coding platform Qodo, believes that some of the critiques of GPT-5 stem from evolving expectations around AI model releases. 'I think a lot of people thought that GPT-5 would be another moment when everything about AI improved, because of this march towards AGI. When actually, the model improved on a few key sub-tasks,' he says. Friedman refers to the time before 2022 as 'BCE'—Before ChatGPT Era—when AI models improved holistically. In the post-ChatGPT era, new AI models are often better at certain things. 'Claude Sonnet 3.5, for example, was the one model to rule them all on coding. And Google Gemini got really good at code review, to check if code is high quality,' Friedman says.
OpenAI has also gotten some heat for the methodology it used to run its benchmark tests and make performance claims about GPT-5—although benchmark tests vary considerably across the industry. SemiAnalysis, a research firm focused on the semiconductor and AI sector, noted that OpenAI only ran 477 of the 500 tests that are typically included in SWE-bench, a relatively new AI industry framework for testing large language models. (This was for overall performance of the model, not just coding.) OpenAI says it always tests its AI models on a fixed subset of 477 tasks rather than the full 500 in the SWE-bench test because those 477 are the ones the company has validated on its internal infrastructure. McCallum also pointed to GPT-5's system card, which noted that changes in the model's verbosity setting can 'lead to variation in eval performance.'

Kapoor says that frontier AI companies are ultimately facing difficult tradeoffs. 'When model developers train new models, they're introducing new constraints, too, and have to consider many factors: how users expect the AI to behave and how it performs at certain tasks like agentic coding, all while managing the cost,' he says. 'In some sense, I believe OpenAI knew it wouldn't break all of those benchmarks, so it made something that would generally please a wide range of people.'
