
Warning against using AI such as ChatGPT for medical advice after man develops rare condition
A US medical journal has warned against using ChatGPT for health information after a man developed a rare condition following an interaction with the chatbot about removing table salt from his diet.
An article in the Annals of Internal Medicine reported a case in which a 60-year-old man developed bromism, also known as bromide toxicity, after consulting ChatGPT.
The article described bromism as a 'well-recognised' syndrome in the early 20th century that was thought to have contributed to almost one in 10 psychiatric admissions at the time.
Man eliminated salt from his diet
The patient told doctors that after reading about the negative effects of sodium chloride, or table salt, he consulted ChatGPT about eliminating chloride from his diet and started taking sodium bromide over a three-month period. This was despite reading that 'chloride can be swapped with bromide, though likely for other purposes, such as cleaning'.
The article's authors, from the University of Washington, said the case highlighted 'how the use of artificial intelligence can potentially contribute to the development of preventable adverse health outcomes'.
When they consulted ChatGPT themselves about what chloride could be replaced with, the response also included bromide, did not provide a specific health warning and did not ask why the authors were seeking such information — 'as we presume a medical professional would do', they wrote.
'AI chatbots could fuel misinformation'
The authors warned that ChatGPT and other AI apps could 'generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation'.
OpenAI, the company behind ChatGPT, announced an upgrade to the chatbot last week and claimed one of its biggest strengths was in health.
It said ChatGPT — now powered by the GPT-5 model — would be better at answering health-related questions and would also be more proactive at 'flagging potential concerns', such as serious physical or mental illness.
However, it emphasised that the chatbot was not a replacement for professional help.
The journal article, which was published last week before the launch of GPT-5, said the patient appeared to have used an earlier version of ChatGPT.
The authors said the bromism patient presented at a hospital claiming that his neighbour might be poisoning him. He also said he had multiple dietary restrictions and, despite being thirsty, was noted as being paranoid about the water he was offered.
He tried to escape the hospital within 24 hours of being admitted and, after being sectioned, was treated for psychosis. Once the patient stabilised, he reported having several other symptoms that indicated bromism, such as facial acne, excessive thirst and insomnia.
The Guardian

Related Articles


Irish Examiner, 5 days ago
Blowing into conch shell could help relieve sleep apnoea, say scientists
Blowing into a conch shell could help tackle the symptoms of a sleep disorder that affects millions of people globally, according to a study. Conch-blowing, also known as shankh-blowing, is an ancient ritual that involves breathing in deeply and exhaling into the spiral-shaped shell.

The practice could improve sleep for patients with obstructive sleep apnoea (OSA), which usually needs to be treated with uncomfortable machinery, according to the research. OSA occurs when breathing starts and stops during sleep. Symptoms include loud snoring and making gasping or choking noises.

Thirty people living with the disorder and aged between 19 and 65 were involved in the trial, led by researchers at the Eternal Heart Care Centre and Research Institute in Jaipur, India. About half of the group were taught how to use the shell, while the others carried out deep breathing exercises. Both groups were encouraged to practise their techniques for at least 15 minutes, five days a week.

Six months later, the trial found those who had practised shankh-blowing were 34% less sleepy during the day. They also had higher blood oxygen levels during the night, and four to five fewer OSA episodes an hour on average.

'Shankh-blowing is a simple low-cost breathing technique that could help improve sleep and reduce symptoms without the need for machines or medication,' said Dr Krishna K Sharma, who led the research.

'The way the shankh is blown is quite distinctive. This action creates strong vibrations and airflow resistance, which likely strengthens the muscles of the upper airway, including the throat and soft palate, areas that often collapse during sleep in people with OSA.'

The most common form of treatment for sleep apnoea is a continuous positive airway pressure (Cpap) machine, which involves patients wearing a mask that blows pressurised air into the nose and throat while asleep. Previous research has also found playing a woodwind instrument could help with the condition.

Although the machines are effective, they can be uncomfortable, leading the researchers to suggest shankh-blowing could be a promising alternative. A larger trial involving several hospitals is being planned.

'The findings of this trial are encouraging, but the small scale of the trial means it's too soon to say for certain that conch blowing can help people manage their obstructive sleep apnoea,' said Dr Erika Kennington, the head of research and innovation at Asthma + Lung UK.

'It's also not clear from this research why blowing through a conch shell regularly might improve someone's symptoms. It would be good to see the conch-blowing approach tested on a larger scale and compared with other proven strategies, such as limiting alcohol, staying active and maintaining good bedtime habits.

'OSA is a long-term condition, but with the right treatments and lifestyle changes, people can make a real difference to their symptoms.'

The Guardian


Irish Times, 06-08-2025
AI's inbuilt biases threaten to undermine women in the workplace
The age of artificial intelligence (AI) is here whether we want it or not. And although it is already proving a useful tool for some tasks – both mundane and incredibly complex – its all-too-human biases are reinforcing discrimination against women, undermining them in the workplace and possibly increasing their risk of unemployment.

From information gathering and report writing to hiring decisions and promotional opportunities, current generative AI systems can amplify inequalities because they use flawed data, such as user-generated content from the internet. We all know from personal experience and news reports how unreliable and inaccurate that information can be when left unchecked.

Many GenAI systems are built on large language models, which learn to understand and interpret information and to perform tasks using existing public data that is steeped in stereotypes favouring western white men. This bias has immediate economic and social consequences in the real world.

'Despite AI's potential to enhance sectors like healthcare, education and business, it often mirrors reality and its societal prejudices and can manifest itself through unequal treatment in hiring decisions, academic recommendations or healthcare diagnostics, systematically disadvantaging women,' according to Jerlyn Ho and other Singapore-based academics writing in the scholarly journal Computers in Human Behavior: Artificial Humans.

Their paper explores how AI systems and chatbots, notably ChatGPT, can perpetuate gender biases due to inherent flaws in training data, algorithms and user feedback loops.

Despite its proven bias, GenAI is being used widely to decide who gets hired, fired and promoted. Research has shown that GenAI discriminates against women by spewing pseudoscientific 'facts' and stereotypes of women as being less professionally ambitious or intelligent than men.

'For instance, in gendered word association tasks, recent models still associate woman names with traditional roles like 'home' and 'family', while linking male names with 'business' and 'career'. Moreover, in text generation tasks, these models produce sexist and misogynistic content approximately 20 per cent of the time,' according to the European Commission's (EC) Generative AI Outlook Report 2025.

'The growing integration of AI across various sectors has heightened concerns about biases in large language models, including those related to gender, religion, race, profession, nationality, age, physical appearance and socio-economic status,' it continues.

'While AI holds the promise of enhancing efficiency and decision making in areas like healthcare, education and business, its widespread use and the high level of public trust it enjoys could also amplify societal prejudices, leading to systematic disadvantages, particularly for women.'

Occupational bias

When gender and racial bias are baked into the technology we use every day, turning it from a possible barrier into a structural one, it's far harder to break through professionally. GenAI is used extensively in the hiring process, from CV scanners and gamified tests to body language analysis and vocal assessments. Job applicants are facing machines before they see humans, and it is increasingly AI that decides whether or not they are a good match or whether the application gets sent to the recycling bin.
If technologies like this are biased against someone like you, then you're unlikely to be shortlisted for a role, get a foot in the door of your chosen profession or get a seat at the top table. Despite decades of work trying to ensure greater diversity and inclusion in the workplace – which has been proven to improve decision making, risk taking and profitability – AI bias is threatening to take us backwards by reinforcing negative stereotypes instead of judging everyone on a level playing field.

At work, your professional image and public profile are often important factors in promotion. Yet the European Commission report found that in occupational portraits generated by three popular text-to-image AI generators: 'Women and black individuals were notably underrepresented, especially in roles requiring high levels of preparation. Women were often portrayed as younger and with submissive gestures, while men appeared older and more authoritative.

'Alarmingly, these biases surpassed real-world disparities, indicating that the issues extend beyond merely biased training data.'

Internationally, women are increasingly at risk of being pushed out of the workforce and into the home as conservative governments in places such as the United States, Hungary and, more extremely, Afghanistan promote a return to traditional gender roles. AI-driven technology and many social media platforms are aggressively reinforcing these gender messages and influencing the next generation. Research in Ireland and elsewhere shows many young men are more conservative than their grandfathers and far less progressive than their women colleagues.

In recruitment, some AI algorithms are supporting this move by favouring male candidates over equally qualified women candidates. The Netherlands Institute for Human Rights found a violation of Dutch and EU anti-discrimination legislation in Meta's job vacancy advertising algorithm: 'In violation of the principles of equal treatment and non-discrimination, in 2023, the algorithm in the Netherlands displayed vacancies for receptionist positions to woman users in 97 per cent of cases. Similarly, it showed vacancies for mechanics to male users 96 per cent of the time.'

In education, AI may also unfairly predict higher dropout rates for women students, particularly in male-dominated fields like science, technology, engineering and mathematics (Stem), limiting their access to advanced education programmes and jobs in higher-paid professions.

Fill in the blanks

Many GenAI models cannot distinguish between fact and fiction, believing that video game content and fictional novels are real, for example. And some even make stuff up to fill in the blanks if they don't have enough information.

OpenAI's website says of ChatGPT: 'But like any language model, it can produce incorrect or misleading outputs. Sometimes, it might sound confident – even when it's wrong. This phenomenon is often referred to as a hallucination: when the model produces responses that are not factually accurate, such as incorrect definitions, dates or facts.'

Facts are important, especially when lives and livelihoods depend on them.
Access to employment matters hugely as jobs are the gateway to opportunity and economic stability for all. If GenAI is disadvantaging women and minorities in hiring and promotion, and they're largely excluded from AI's development and testing processes, why is it being blindly adopted as a workplace tool?

Leaders need to be more intentional before they bring AI into the workplace. They need to ask themselves: what is my intention here? What am I trying to achieve? How is AI linked to our strategy? Or am I just bringing it in to save money and reduce headcount? For many companies, the short-term promise of productivity seems to be overcoming the hard reality of long-term bias and exclusion.

'The painful truth is that, if women aren't co-pilots of the current AI revolution, they may be left in the dust, faced with technology that presents a whole series of new barriers for them to overcome,' according to research from global consultancy Mercer.

Mercer says the potential of AI and automation will only be fully realised if productivity gains are equitably distributed and AI is responsibly managed, with data used to nudge leaders towards fair opportunity and pay decisions.

As AI continues to reshape the workforce and transform society, businesses must actively root out bias and keep their eye on opportunities beyond productivity. We have a once-in-a-lifetime opportunity to redesign work for humanity's advancement and wellbeing alongside greater profitability and innovation. Instead of copper-fastening the things that limit and disconnect us – prejudices, stereotypes and bias – let's create a world of work that connects us and fully develops our shared human potential.

Margaret E Ward is chief executive of Clear Eye, a leadership consultancy. margaret@