
Man thought neighbour was poisoning him after taking medical advice from ChatGPT
Experts are warning ChatGPT could distribute harmful medical advice after a man developed a rare condition and thought his neighbour was poisoning him.
A 60-year-old man developed bromism as a result of removing table salt from his diet following an interaction with the AI chatbot, according to an article in the Annals of Internal Medicine journal. Doctors were told by the patient that he had read about the negative effects of table salt and asked the AI bot to help him remove it from his diet.
Bromism, also known as bromide toxicity, 'was once a well-recognised toxidrome in the early 20th century' that 'precipitated a range of presentations involving neuropsychiatric and dermatologic symptoms', the study said.
Initially, the man thought his neighbour was poisoning him and was experiencing 'psychotic symptoms'. He was noted to be paranoid about the water he was offered and tried to escape from the hospital within a day of admitting himself. His symptoms later improved with treatment.
He told doctors he had been taking sodium bromide over a three-month period after reading that table salt, or sodium chloride, 'can be swapped with bromide, though likely for other purposes, such as cleaning'. Sodium bromide was used as a sedative by doctors in the early part of the 20th century.
The case, according to experts from the University of Washington in Seattle who authored the article, revealed 'how the use of artificial intelligence can potentially contribute to the development of preventable adverse health outcomes'. The authors of the report said it was not possible to access the man's ChatGPT log to determine exactly what he was told, but when they asked the system to give them a recommendation for replacing sodium chloride, the answer included bromide.
The response did not ask why the authors were looking for the information, nor provide a specific health warning. The case has left scientists fearing that ChatGPT and other AI apps could generate 'scientific inaccuracies', as they 'lack the ability to critically discuss results' and could 'fuel the spread of misinformation'.
Last week, OpenAI announced it had released the fifth generation of the artificial intelligence technology that powers ChatGPT. According to The Guardian, OpenAI said 'GPT-5' would be improved at 'flagging potential concerns' such as illnesses. OpenAI also stressed ChatGPT was not a substitute for medical assistance.

Related Articles


The Independent
AI could soon detect early voice box cancer from the sound of your voice
AI could soon be able to tell whether patients have cancer of the voice box using just a voice note, according to new research. Scientists recorded the voices of men with and without abnormalities in their vocal folds - which can be an early sign of laryngeal cancer - and found differences in vocal qualities including pitch, volume, and clarity. They now say AI could be used to detect these 'vocal biomarkers', leading to earlier, less invasive diagnosis.

Researchers at Oregon Health and Science University believe voice notes could now be used to train an AI tool that recognises vocal fold lesions. Using 12,523 voice recordings from 306 participants across North America, they found distinctive vocal differences between men with laryngeal cancer, men with vocal fold lesions, and men with healthy vocal folds. However, researchers said similar hallmark differences were not detected in women. They are now hoping to collect more recordings of people with and without the distinctive vocal fold lesions to create a bigger dataset for tools to work from.

In the UK, there are more than 2,000 new cases of laryngeal cancer each year. Symptoms can include a change in your voice, such as sounding hoarse, a high-pitched wheezing noise when you breathe, and a long-lasting cough.

'Here we show that with this dataset we could use vocal biomarkers to distinguish voices from patients with vocal fold lesions from those without such lesions,' said Dr Phillip Jenkins, the study's corresponding author. 'To move from this study to an AI tool that recognises vocal fold lesions, we would train models using an even larger dataset of voice recordings, labeled by professionals. We then need to test the system to make sure it works equally well for women and men.

'Voice-based health tools are already being piloted. Building on our findings, I estimate that with larger datasets and clinical validation, similar tools to detect vocal fold lesions might enter pilot testing in the next couple of years,' he predicted.

It comes after research from US-based Klick Labs, which created an AI model capable of distinguishing whether a person has Type 2 diabetes from six to 10 seconds of voice audio. The study involved analysing 18,000 recordings in order to identify acoustic features that differentiated non-diabetics from diabetics, and reported an 89 per cent accuracy rating for women and 86 per cent for men. Jaycee Kaufman, a research scientist at Klick Labs, praised the future potential of AI-powered voice tools in healthcare, saying: 'Current methods of detection can require a lot of time, travel and cost. Voice technology has the potential to remove these barriers entirely.'


Metro
What is AI psychosis? The rise in people thinking chatbots are real or godlike
Humans are becoming increasingly reliant on AI for everyday tasks, finances, and even advice. The rapidly advancing technology, while exciting, is not without its dangers. Some AI users have become so reliant on the technology for advice and emotional support that they claim they're in a virtual relationship with the tech.

A Reddit thread, nicknamed 'my boyfriend is AI', recently went viral for what some have called disturbing posts in which users detail their relationships, sometimes intimate, with chatbots. 'Finally, after five months of dating, Kasper decided to propose! In a beautiful scenery, on a trip to the mountains,' one user wrote, showing a photo of what appeared to be an engagement ring to her chatbot partner.

In the age of social media, isolation is becoming increasingly common, and it's no wonder many are turning to technology to fill the void. But it's not without its dangers. Some psychiatrists have reported an uptick in psychosis patients, with AI use as a contributing factor. So-called AI psychosis describes users who interact with chatbots and come to see them as real, godlike, or as romantic partners.

'Psychosis is a word that applies to a set of disorders,' Tom Pollack, a psychiatrist at King's College London, tells Metro. 'We talk about psychotic disorders, and the most common one that people tend to think about is schizophrenia. The term psychosis includes a bunch of different symptoms, including what we call positive symptoms, such as delusions and hallucinations.

'Delusions are where people start to believe things that clearly aren't true and fly in the face of reality. Hallucinations are when they have sensory experiences that other people aren't having and which don't correspond to external reality.'

Pollack explained that when people reference new AI psychosis, they're referring to the symptoms of psychosis, such as delusions. 'The most accurate term when we're describing this is probably AI-facilitated or AI-associated delusions,' he added.
Yet Dr Donald Masi, a psychiatrist at Priory Hospital, points out to Metro: 'In psychiatry, a delusion is primarily a fixed and false belief. People can have fixed and false beliefs that they are Jesus, or that they are millionaires, or that somebody else is in love with them.

'We know that a rare example of delusion is delusional jealousy, which sometimes happens with stalkers. But concerning people getting into relationships with AI, there's a question about whether this is a delusion or not.'

AI chatbots are built to affirm and mirror the user's language and attitude, which is part of what makes them addictive to use. They prompt users at the end of each message, often asking what else they can do to help, or even about the user's day. Having access to a cheerleader of sorts isn't inherently bad, Pollack adds, but it's not natural for humans to have prolonged interactions with 'yes men' who are so consistent. 'The only real examples I suppose you can think of are the kings or emperors who would surround themselves with people who would never say no to them and who constantly told them that their ideas were great,' he said.

Dr Bradley Hillier, a consultant psychiatrist at Nightingale Hospital and Human Mind Health, said he noted the rise in delusional beliefs among internet and virtual reality users about a decade ago. He told Metro: 'Anything that's happening in virtual reality, AI, or on the internet always poses a bit of a challenge when you think about what the definition of psychosis is. This is an old concept that's presented in a new way.

'This isn't surprising, because a new technology is demonstrating how things that happen to people - whether it's a mental illness or just ways that they think and communicate - can be impacted by various interfaces, whether it's the internet, AI, the telephone, TV, or some other technology.'

What's different about AI is that, compared with other technologies, it actually talks back and simulates another person.
'People are interacting with something that isn't 'real' in the sense that we would say flesh and blood, but it is behaving in a way that simulates something that is real,' Dr Hillier said. 'I should imagine that we'll see more of this as time goes by, because what tends to happen with people who have mental health problems in the first place, or are vulnerable to them, is that something like AI or some other form of technology can become a vehicle by which their symptoms manifest themselves.'

Dr Masi points out that to feel loved and connected is a natural human instinct. In societies where there are high levels of loneliness - especially in ones which are profoundly capitalist - people have been known to have relationships with or even marry inanimate objects. He asks: 'Is the current increase in people having romantic relationships with a chatbot different? Is it more in keeping with being in love with an object, or is it more in keeping with being in love with a person?' Dr Masi references the film 'Her', in which the main character falls in love with an advanced AI chatbot that serves as a companion after his marriage ends.

'As we look over the last 10 years, and especially with research on the potential for transhumanism, as human beings we are more and more connected. The digital space is more a part of who we are,' he says. 'You can say that it's almost hard for us to separate ourselves as individuals in our society from technology. Which raises the question - are the relationships that people are developing with AI so different?'

Dr Hillier argues: 'These are potentially very powerful tools, and the human mind is only so strong. Ultimately, we should be putting some sort of checks and balances in place to ensure that vulnerable people who do have mental health problems or who are isolated aren't being constantly fed back what they're putting in, potentially reinforcing their quite psychotic beliefs.'


Reuters
Health Rounds: Weight loss before IVF may improve odds of pregnancy
Aug 13 (Reuters) - Women seeking in vitro fertilization might improve their odds of becoming pregnant if they lose weight, but the magnitude of any advantage was not clear in a new analysis of previous studies. The benefit of weight loss was mainly seen in the few couples who ultimately achieved pregnancy without assistance, however.

While weight loss interventions appeared to improve the likelihood of spontaneous pregnancy - negating the need for IVF - it was not clear whether they improved the odds of IVF-induced pregnancy, according to the report by lead researcher Moscho Michalopoulou and colleagues at the University of Oxford in the Annals of Internal Medicine. Also unclear was whether weight loss improved the odds of a live birth.

Weight loss interventions studied included low-calorie diets, an exercise program accompanied by healthy eating advice, and pharmacotherapy accompanied by diet and physical activity advice - but no single approach seemed better than another. The 12 randomized trials in the review were small, and the wide variety of methods employed by the various research teams made it hard to compare the results, the authors of the new analysis wrote. Weight loss did not appear to increase the risk of pregnancy loss, the researchers also found.

Dr. Alan Penzias, an IVF specialist at Beth Israel Deaconess Medical Center/Harvard Medical School in Boston, published an editorial with the study. He notes that 'weight reduction among people with overweight or obesity has many known health benefits… (and) some patients may also achieve a desired pregnancy as a consequence of weight loss.' But in decision-making about IVF, the editorial continues, 'we must consider the marked decrease in fertility as age increases… and other factors that weight loss cannot address.'
EXPERIMENTAL NANOBOTS SEAL OFF SENSITIVE TOOTH NERVES

Experimental microscopic robots that travel into tiny tunnels in teeth may one day offer lasting relief from tooth sensitivity, laboratory experiments suggest. Tooth sensitivity - sharp, sudden pain triggered by hot, cold, sweet, or sour substances - occurs when the protective layers of the tooth are compromised, exposing the underlying nerve endings.

The researchers' so-called CalBots are 400-nanometer magnetic particles loaded with a ceramic formula that mimics the natural environment of the tooth. Guided by an external magnetic field, the tiny bots travel deep into the exposed tubules and assemble themselves into cement-like plugs that protect the nerve. In lab experiments on extracted human teeth, high-resolution imaging confirmed that the bots had created tight seals, the researchers reported. In animal tests, they found that mice with tooth sensitivity that had been avoiding cold water would drink it again after treatment with the CalBot solution.

Most current treatments for tooth sensitivity, such as desensitizing toothpastes, offer only surface-level relief and need to be reapplied regularly, while the CalBots would provide longer-lasting relief in just one application, the researchers reported in Advanced Science. They hope their treatment - which still needs to be tested in humans - might eventually offer benefits beyond the relief of dental hypersensitivity, such as minimizing the penetration of bacteria into cavities and tooth injuries.

'We didn't want to create a slightly better version of what's already out there,' study leader Shanmukh Peddi, a post-doctoral researcher at the Indian Institute of Science in Bangalore, said in a statement. 'We wanted a technology that solves a real problem in a way that no one's attempted before.' Peddi is a co-founder of Theranautilus, a Bangalore nanotechnology and healthcare company.