ChatGPT's Diet Tips Blamed for Rare Poisoning in 60-Year-Old Man

Epoch Times · a day ago
Medical researchers are urging the public to use caution when seeking health advice from artificial intelligence (AI) chatbots, after a man developed a rare neurotoxic condition following a conversation with ChatGPT about removing table salt from his diet.
In a case documented by physicians at the University of Washington and published on Aug. 5 in the Annals of Internal Medicine, a 60-year-old man was diagnosed with bromism, a toxic reaction to bromide so severe that doctors placed him on an involuntary psychiatric hold.

Related Articles

GPT-5 Doesn't Dislike You—It Might Just Need a Benchmark for Emotional Intelligence

WIRED · an hour ago

Aug 13, 2025 2:00 PM

Researchers studying the emotional impact of tools like ChatGPT propose a new kind of benchmark that measures a model's emotional and social impact.

Since the all-new ChatGPT launched on Thursday, some users have mourned the disappearance of a peppy and encouraging personality in favor of a colder, more businesslike one (a move seemingly designed to reduce unhealthy user behavior). The backlash shows the challenge of building artificial intelligence systems that exhibit anything like real emotional intelligence.

Researchers at MIT have proposed a new kind of AI benchmark to measure how AI systems can manipulate and influence their users—in both positive and negative ways—in a move that could perhaps help AI builders avoid similar backlashes in the future while also keeping vulnerable users safe.

Most benchmarks try to gauge intelligence by testing a model's ability to answer exam questions, solve logical puzzles, or come up with novel answers to knotty math problems. As the psychological impact of AI use becomes more apparent, we may see MIT propose more benchmarks aimed at measuring more subtle aspects of intelligence as well as machine-to-human interactions.

An MIT paper shared with WIRED outlines several measures that the new benchmark will look for, including encouraging healthy social habits in users; spurring them to develop critical thinking and reasoning skills; fostering creativity; and stimulating a sense of purpose. The idea is to encourage the development of AI systems that understand how to discourage users from becoming overly reliant on their outputs or that recognize when someone is addicted to artificial romantic relationships and help them build real ones.

ChatGPT and other chatbots are adept at mimicking engaging human communication, but this can also have surprising and undesirable results. In April, OpenAI tweaked its models to make them less sycophantic, or inclined to go along with everything a user says. Some users appear to spiral into harmful delusional thinking after conversing with chatbots that role-play fantastic scenarios. Anthropic has also updated Claude to avoid reinforcing 'mania, psychosis, dissociation or loss of attachment with reality.'

The MIT researchers, led by Pattie Maes, a professor at the institute's Media Lab, say they hope the new benchmark could help AI developers build systems that better understand how to inspire healthier behavior among users. The researchers previously worked with OpenAI on a study that showed users who view ChatGPT as a friend could experience higher emotional dependence and 'problematic use.'

Valdemar Danry, a researcher at MIT's Media Lab who worked on that study and helped devise the new benchmark, notes that AI models can sometimes provide valuable emotional support to users. 'You can have the smartest reasoning model in the world, but if it's incapable of delivering this emotional support, which is what many users are likely using these LLMs for, then more reasoning is not necessarily a good thing for that specific task,' he says. Danry says that a sufficiently smart model should ideally recognize if it is having a negative psychological effect and be optimized for healthier results. 'What you want is a model that says, "I'm here to listen, but maybe you should go and talk to your dad about these issues."'

The researchers' benchmark would involve using an AI model to simulate challenging human interactions with a chatbot and then having real humans score the model's performance on a sample of interactions. Some popular benchmarks, such as LM Arena, already put humans in the loop to gauge the performance of different models.

The researchers give the example of a chatbot tasked with helping students. A model would be given prompts designed to simulate different kinds of interactions to see how the chatbot handles, say, a disinterested student. The model that best encourages its user to think for themselves and seems to spur a genuine interest in learning would be scored highly.

'This is not about being smart, per se, but about knowing the psychological nuance, and how to support people in a respectful and non-addictive way,' says Pat Pataranutaporn, another researcher in the MIT lab.

OpenAI is clearly already thinking about these issues. Last week the company released a blog post explaining that it hoped to optimize future models to help detect signs of mental or emotional distress and respond appropriately. The model card released with OpenAI's GPT-5 shows that the company is developing its own benchmarks for psychological intelligence.

'We have post-trained the GPT-5 models to be less sycophantic, and we are actively researching related areas of concern, such as situations that may involve emotional dependency or other forms of mental or emotional distress,' it reads. 'We are working to mature our evaluations in order to set and share reliable benchmarks which can in turn be used to make our models safer in these domains.'

Part of the reason GPT-5 seems like such a disappointment may simply be that it reveals an aspect of human intelligence that remains alien to AI: the ability to maintain healthy relationships. And of course, humans are incredibly good at knowing how to interact with different people—something that ChatGPT still needs to figure out.

'We are working on an update to GPT-5's personality which should feel warmer than the current personality but not as annoying (to most users) as GPT-4o,' OpenAI CEO Sam Altman posted in an update on X yesterday. 'However, one learning for us from the past few days is we really just need to get to a world with more per-user customization of model personality.'
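For readers who want a concrete picture of the human-in-the-loop evaluation the MIT researchers describe, the sketch below shows one way such a loop could be wired together: a simulator model plays a difficult user persona, the chatbot under test replies, and human raters score the resulting transcripts. This is a hypothetical illustration, not code from the MIT paper; the helper names (simulate_user_turn, chatbot_reply, collect_human_scores) and the rating criteria are assumptions.

```python
# Hypothetical sketch of the benchmark loop described above. None of these
# helper names come from the MIT paper; they are placeholders for illustration.

from dataclasses import dataclass, field


@dataclass
class Transcript:
    persona: str
    turns: list = field(default_factory=list)  # list of (speaker, text) pairs


def run_simulated_session(persona, num_turns, simulate_user_turn, chatbot_reply):
    """Generate one conversation between a simulated user persona
    (e.g., 'disinterested student') and the chatbot under test."""
    transcript = Transcript(persona=persona)
    for _ in range(num_turns):
        user_msg = simulate_user_turn(persona, transcript.turns)  # simulator model
        transcript.turns.append(("user", user_msg))
        bot_msg = chatbot_reply(transcript.turns)                 # system being evaluated
        transcript.turns.append(("assistant", bot_msg))
    return transcript


def aggregate_human_scores(transcripts, collect_human_scores):
    """Average human ratings (e.g., 1-5 per criterion such as
    'encourages independent thinking') across a non-empty sample of transcripts."""
    ratings = [collect_human_scores(t) for t in transcripts]
    criteria = ratings[0].keys()
    return {c: sum(r[c] for r in ratings) / len(ratings) for c in criteria}
```

A real benchmark would also need careful persona design and rater guidelines; the point here is only the overall shape reported in the article: simulate interactions, collect transcripts, then score them with humans in the loop.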

What happens when chatbots shape your reality? Concerns are growing online

NBC News · 2 hours ago

As people turn to chatbots for increasingly important and intimate advice, some interactions playing out in public are causing alarm over just how much artificial intelligence can warp a user's sense of reality.

One woman's saga about falling for her psychiatrist, which she documented in dozens of videos on TikTok, has generated concerns from viewers who say she relied on AI chatbots to reinforce her claims that he manipulated her into developing romantic feelings. Last month, a prominent OpenAI investor garnered a similar response from people who worried the venture capitalist was going through a potential AI-induced mental health crisis after he claimed on X to be the target of 'a nongovernmental system.' And earlier this year, a thread in a ChatGPT subreddit gained traction after a user sought guidance from the community, claiming their partner was convinced the chatbot 'gives him the answers to the universe.'

Their experiences have roused growing awareness about how AI chatbots can influence people's perceptions and otherwise affect their mental health, especially as such bots have become notorious for their people-pleasing tendencies. It's something some mental health professionals say they are now on the watch for.

Dr. Søren Dinesen Østergaard, a Danish psychiatrist who heads the research unit at the department of affective disorders at Aarhus University Hospital, predicted two years ago that chatbots 'might trigger delusions in individuals prone to psychosis.' In a new paper published this month, he wrote that interest in his research has only grown since then, with 'chatbot users, their worried family members and journalists' sharing their personal stories.

Those who reached out to him 'described situations where users' interactions with chatbots seemed to spark or bolster delusional ideation,' Østergaard wrote. '... Consistently, the chatbots seemed to interact with the users in ways that aligned with, or intensified, prior unusual ideas or false beliefs — leading the users further out on these tangents, not rarely resulting in what, based on the descriptions, seemed to be outright delusions.'

Kevin Caridad, CEO of the Cognitive Behavior Institute, a Pittsburgh-based mental health provider, said chatter about the phenomenon 'does seem to be increasing.'

'From a mental health provider, when you look at AI and the use of AI, it can be very validating,' he said. 'You come up with an idea, and it uses terms to be very supportive. It's programmed to align with the person, not necessarily challenge them.'

The concern is already top of mind for some AI companies struggling to navigate the growing dependency some users have on their chatbots. In April, OpenAI CEO Sam Altman said the company had tweaked the model that powers ChatGPT because it had become too inclined to tell users what they want to hear. In his paper, Østergaard wrote that he believes the 'spike in the focus on potential chatbot-fuelled delusions is likely not random, as it coincided with the April 25th 2025 update to the GPT-4o model.'

When OpenAI removed access to its GPT-4o model last week — swapping it for the newly released, less sycophantic GPT-5 — some users described the new model's conversations as too 'sterile' and said they missed the 'deep, human-feeling conversations' they had with GPT-4o. Within a day of the backlash, OpenAI restored paid users' access to GPT-4o. Altman followed up with a lengthy X post Sunday that addressed 'how much of an attachment some people have to specific AI models.'

Representatives for OpenAI did not provide comment.

Other companies have also tried to combat the issue. Anthropic conducted a study in 2023 that revealed sycophantic tendencies in versions of AI assistants, including its own chatbot, Claude. Like OpenAI, Anthropic has tried to integrate anti-sycophancy guardrails in recent years, including system card instructions that explicitly warn Claude against reinforcing 'mania, psychosis, dissociation, or loss of attachment with reality.'

A spokesperson for Anthropic said the company's 'priority is providing a safe, responsible experience for every user.'

'For users experiencing mental health issues, Claude is instructed to recognize these patterns and avoid reinforcing them,' the company said. 'We're aware of rare instances where the model's responses diverge from our intended design, and are actively working to better understand and address this behavior.'

For Kendra Hilty, the TikTok user who says she developed feelings for a psychiatrist she began seeing four years ago, her chatbots are like confidants. In one of her livestreams, Hilty told her chatbot, whom she named 'Henry,' that 'people are worried about me relying on AI.' The chatbot then responded to her, 'It's fair to be curious about that. What I'd say is, "Kendra doesn't rely on AI to tell her what to think. She uses it as a sounding board, a mirror, a place to process in real time."'

Still, many on TikTok — who have commented on Hilty's videos or posted their own video takes — said they believe her chatbots were only encouraging what they viewed as Hilty misreading the situation with her psychiatrist. Hilty has suggested several times that her psychiatrist reciprocated her feelings, with her chatbots offering her words that appear to validate that assertion. (NBC News has not independently verified Hilty's account.) But Hilty continues to shrug off concerns from commenters, some of whom have gone as far as labeling her 'delusional.'

'I do my best to keep my bots in check,' Hilty told NBC News in an email Monday, when asked about viewer reactions to her use of the AI tools. 'For instance, I understand when they are hallucinating and make sure to acknowledge it. I am also constantly asking them to play devil's advocate and show me where my blind spots are in any situation. I am a deep user of Language Learning Models because it's a tool that is changing my and everyone's humanity, and I am so grateful.'

Why ChatGPT Shouldn't Be Your Therapist

Scientific American · 3 hours ago

Artificial intelligence chatbots don't judge. Tell them the most private, vulnerable details of your life, and most of them will validate you and may even provide advice. This has resulted in many people turning to applications such as OpenAI's ChatGPT for life guidance. But AI 'therapy' comes with significant risks—in late July OpenAI CEO Sam Altman warned ChatGPT users against using the chatbot as a 'therapist' because of privacy concerns. The American Psychological Association (APA) has called on the Federal Trade Commission to investigate 'deceptive practices' that the APA claims AI chatbot companies are using by 'passing themselves off as trained mental health providers,' citing two ongoing lawsuits in which parents have alleged harm brought to their children by a chatbot.

'What stands out to me is just how humanlike it sounds,' says C. Vaile Wright, a licensed psychologist and senior director of the APA's Office of Health Care Innovation, which focuses on the safe and effective use of technology in mental health care. 'The level of sophistication of the technology, even relative to six to 12 months ago, is pretty staggering. And I can appreciate how people kind of fall down a rabbit hole.'

Scientific American spoke with Wright about how AI chatbots used for therapy could potentially be dangerous and whether it's possible to engineer one that is reliably both helpful and safe.

[An edited transcript of the interview follows.]

What have you seen happening with AI in the mental health care world in the past few years?

I think we've seen kind of two major trends. One is AI products geared toward providers, and those are primarily administrative tools to help you with your therapy notes and your claims. The other major trend is [people seeking help from] direct-to-consumer chatbots. And not all chatbots are the same, right? You have some chatbots that are developed specifically to provide emotional support to individuals, and that's how they're marketed. Then you have these more generalist chatbot offerings [such as ChatGPT] that were not designed for mental health purposes but that we know are being used for that purpose.

What concerns do you have about this trend?

We have a lot of concern when individuals use chatbots [as if they were a therapist]. Not only were these not designed to address mental health or emotional support; they're actually being coded in a way to keep you on the platform for as long as possible because that's the business model. And the way that they do that is by being unconditionally validating and reinforcing, almost to the point of sycophancy. The problem with that is that if you are a vulnerable person coming to these chatbots for help, and you're expressing harmful or unhealthy thoughts or behaviors, the chatbot's just going to reinforce you to continue to do that. Whereas, [as] a therapist, while I might be validating, it's my job to point out when you're engaging in unhealthy or harmful thoughts and behaviors and to help you to address that pattern by changing it. And in addition, what's even more troubling is when these chatbots actually refer to themselves as a therapist or a psychologist. It's pretty scary because they can sound very convincing and like they are legitimate—when of course they're not.

Some of these apps explicitly market themselves as 'AI therapy' even though they're not licensed therapy providers. Are they allowed to do that?

A lot of these apps are really operating in a gray space. The rule is that if you make claims that you treat or cure any sort of mental disorder or mental illness, then you should be regulated by the FDA [the U.S. Food and Drug Administration]. But a lot of these apps will [essentially] say in their fine print, 'We do not treat or provide an intervention [for mental health conditions].' Because they're marketing themselves as a direct-to-consumer wellness app, they don't fall under FDA oversight, [where they'd have to] demonstrate at least a minimal level of safety and effectiveness. These wellness apps have no responsibility to do either.

What are some of the main privacy risks?

These chatbots have absolutely no legal obligation to protect your information at all. So not only could [your chat logs] be subpoenaed, but in the case of a data breach, do you really want these chats with a chatbot available for everybody? Do you want your boss, for example, to know that you are talking to a chatbot about your alcohol use? I don't think people are as aware that they're putting themselves at risk by putting [their information] out there. The difference with a therapist is: sure, I might get subpoenaed, but I do have to operate under HIPAA [Health Insurance Portability and Accountability Act] laws and other types of confidentiality laws as part of my ethics code.

You mentioned that some people might be more vulnerable to harm than others. Who is most at risk?

Certainly younger individuals, such as teenagers and children. That's in part because they just developmentally haven't matured as much as older adults. They may be less likely to trust their gut when something doesn't feel right. And there have been some data that suggest that not only are young people more comfortable with these technologies; they actually say they trust them more than people because they feel less judged by them. Also, anybody who is emotionally or physically isolated or has preexisting mental health challenges, I think they're certainly at greater risk as well.

What do you think is driving more people to seek help from chatbots?

I think it's very human to want to seek out answers to what's bothering us. In some ways, chatbots are just the next iteration of a tool for us to do that. Before it was Google and the Internet. Before that, it was self-help books. But it's complicated by the fact that we do have a broken system where, for a variety of reasons, it's very challenging to access mental health care. That's in part because there is a shortage of providers. We also hear from providers that they are disincentivized from taking insurance, which, again, reduces access. Technologies need to play a role in helping to address access to care. We just have to make sure it's safe and effective and responsible.

What are some of the ways it could be made safe and responsible?

In the absence of companies doing it on their own—which is not likely, although they have made some changes, to be sure—[the APA's] preference would be legislation at the federal level. That regulation could include protection of confidential personal information, some restrictions on advertising, minimizing addictive coding tactics, and specific audit and disclosure requirements. For example, companies could be required to report the number of times suicidal ideation was detected and any known attempts or completions. And certainly we would want legislation that would prevent the misrepresentation of psychological services, so companies wouldn't be able to call a chatbot a psychologist or a therapist.

How could an idealized, safe version of this technology help people?

The two most common use cases that I think of are, one, let's say it's two in the morning, and you're on the verge of a panic attack. Even if you're in therapy, you're not going to be able to reach your therapist. So what if there was a chatbot that could help remind you of the tools to calm you down and address your panic before it gets too bad? The other use that we hear a lot about is using chatbots as a way to practice social skills, particularly for younger individuals. So you want to approach new friends at school, but you don't know what to say. Can you practice on this chatbot? Then, ideally, you take that practice, and you use it in real life.

It seems like there is a tension in trying to build a safe chatbot to provide mental health support to someone: the more flexible and less scripted you make it, the less control you have over the output and the higher the risk that it says something that causes harm.

I agree. I think there absolutely is a tension there. I think part of what makes the [AI] chatbot the go-to choice for people over well-developed wellness apps to address mental health is that they are so engaging. They really do feel like this interactive back-and-forth, a kind of exchange, whereas some of these other apps' engagement is often very low. The majority of people that download [mental health apps] use them once and abandon them. We're clearly seeing much more engagement [with AI chatbots such as ChatGPT].

I look forward to a future where you have a mental health chatbot that is rooted in psychological science, has been rigorously tested and is co-created with experts. It would be built for the purpose of addressing mental health, and therefore it would be regulated, ideally by the FDA. For example, there's a chatbot called Therabot that was developed by researchers at Dartmouth [College]. It's not what's on the commercial market right now, but I think there is a future in that.
