
Why AI could end up harming patients, as researchers urge caution
Researchers in the Netherlands warn that while AI-driven outcome prediction models (OPMs) are promising, they risk creating "self-fulfilling prophecies" because of biases in the historical data they are trained on.
OPMs use patient-specific information, including health history and lifestyle factors, to help doctors evaluate treatment options. AI's ability to process this data in real time offers significant advantages for clinical decision-making.
However, the researchers' mathematical models demonstrate a potential downside: if trained on data reflecting historical disparities in treatment or demographics, AI could perpetuate those inequalities, leading to worse patient outcomes.
The study highlights the crucial role of human oversight in AI-driven healthcare. Researchers emphasise the "inherent importance" of applying "human reasoning" to AI's decisions, ensuring that algorithmic predictions are critically evaluated and do not inadvertently reinforce existing biases.
The team created mathematical scenarios to test how AI might harm patient health, and concluded that these models "can lead to harm".
'Many expect that by predicting patient-specific outcomes, these models have the potential to inform treatment decisions and they are frequently lauded as instruments for personalised, data-driven healthcare,' researchers said.
'We show, however, that using prediction models for decision-making can lead to harm, even when the predictions exhibit good discrimination after deployment.
'These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcome of these patients does not diminish the discrimination of the model.'
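To illustrate the mechanism the researchers describe, the following is a minimal simulation sketch. It is not drawn from the study itself: the variable names, risk levels and treatment effect are all invented for illustration. It shows how a policy of withholding treatment from patients a model flags as high risk can worsen their outcomes while the model's discrimination (AUC) still looks strong after deployment.

    # A minimal sketch (not the authors' model): a "harmful self-fulfilling
    # prophecy" in which treatment is withheld from patients flagged as high
    # risk, yet the model's discrimination still looks good after deployment.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical, invented patient risk: worse underlying severity means a
    # higher chance of a poor outcome if left untreated.
    severity = rng.normal(size=n)
    untreated_risk = 1 / (1 + np.exp(-severity))
    model_prediction = untreated_risk  # assume a well-calibrated model

    # Deployment policy: treat only patients the model predicts to be low risk.
    treated = model_prediction < 0.5
    # Assumed treatment effect: treatment halves the risk of a poor outcome.
    risk_under_policy = np.where(treated, untreated_risk * 0.5, untreated_risk)
    poor_outcome = rng.random(n) < risk_under_policy

    # Discrimination after deployment stays high, because the flagged
    # (untreated) group really does go on to do worse.
    print("AUC after deployment:", round(roc_auc_score(poor_outcome, model_prediction), 3))

    # The harm: compared with treating everyone, the policy produces more
    # poor outcomes overall, concentrated in the flagged group.
    poor_outcome_if_all_treated = rng.random(n) < untreated_risk * 0.5
    print("Poor-outcome rate under the policy   :", poor_outcome.mean().round(3))
    print("Poor-outcome rate if all were treated:", poor_outcome_if_all_treated.mean().round(3))

Because the untreated, flagged group really does fare worse under this policy, the post-deployment AUC remains high, which is exactly why the researchers argue that good discrimination alone does not show a model is safe to deploy.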
The article, published in the data-science journal Patterns, also suggests that AI model development 'needs to shift its primary focus away from predictive performance and instead toward changes in treatment policy and patient outcome'.
Reacting to the risks outlined in the study, Dr Catherine Menon, a principal lecturer at the University of Hertfordshire's department of computer science, said: 'This happens when AI models have been trained on historical data, where the data does not necessarily account for such factors as historical under-treatment of some medical conditions or demographics.
'These models will accurately predict poor outcomes for patients in these demographics.
'This creates a 'self-fulfilling prophecy' if doctors decide not to treat these patients due to the associated treatment risks and the fact that the AI predicts a poor outcome for them.
'Even worse, this perpetuates the same historic error: under-treating these patients means that they will continue to have poorer outcomes.
'Use of these AI models therefore risks worsening outcomes for patients who have typically been historically discriminated against in medical settings due to factors such as race, gender or educational background.
'This demonstrates the inherent importance of evaluating AI decisions in context and applying human reasoning and assessment to AI judgments.'
AI is currently used across the NHS in England to help clinicians read X-rays and CT scans, freeing up staff time, and to speed up the diagnosis of strokes.
In January, Prime Minister Sir Keir Starmer pledged that the UK will be an 'AI superpower' and said the technology could be used to tackle NHS waiting lists.
Ian Simpson, a professor of biomedical informatics at the University of Edinburgh, highlighted that AI OPMs 'are not that widely used at the moment in the NHS'.
'Here they tend to be used in parallel with existing clinical management policies and often either for assisting diagnostics and/or speeding up processes like image segmentation,' he said.
Ewen Harrison, a professor of surgery and data science and co-director of the centre for medical informatics at the University of Edinburgh, said: 'While these tools promise more accurate and personalised care, this study highlights one of a number of concerning downsides: predictions themselves can unintentionally harm patients by influencing treatment decisions.
'Say a hospital introduces a new AI tool to estimate who is likely to have a poor recovery after knee replacement surgery. The tool uses characteristics such as age, body weight, existing health problems and physical fitness.
'Initially, doctors intend to use this tool to decide which patients would benefit from intensive rehabilitation therapy.
'However, due to limited availability and cost, it is decided instead to reserve intensive rehab primarily for patients predicted to have the best outcomes.
'Patients labelled by the algorithm as having a 'poor predicted recovery' receive less attention, fewer physiotherapy sessions and less encouragement overall.'
He added that this leads to a slower recovery, more pain and reduced mobility in some patients.
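As a rough, hypothetical sketch of the feedback loop in Prof Harrison's example (none of the features, coefficients or thresholds below come from any real NHS tool), reserving intensive rehabilitation for the predicted-good group mechanically widens the recovery gap that the prediction then appears to confirm:

    # A toy illustration of the knee-replacement scenario described above;
    # every feature, coefficient and threshold here is invented.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5_000

    # Hypothetical characteristics the tool might use.
    age = rng.normal(70, 8, size=n)
    fitness = rng.normal(0, 1, size=n)

    # Assumed prediction rule: fitter, younger patients are flagged as likely
    # to recover well.
    predicted_good_recovery = (fitness - 0.05 * (age - 70)) > 0

    # Allocation policy from the example: intensive rehab goes mainly to the
    # predicted-good group.
    rehab_sessions = np.where(predicted_good_recovery, 12, 4)

    # Assumed outcome: recovery depends partly on the patient and heavily on
    # how much rehab they actually receive.
    recovery_score = 50 + 5 * fitness + 2 * rehab_sessions + rng.normal(0, 5, size=n)

    print("Mean recovery score, predicted good:", recovery_score[predicted_good_recovery].mean().round(1))
    print("Mean recovery score, predicted poor:", recovery_score[~predicted_good_recovery].mean().round(1))

In this toy setup the gap between the two groups is driven largely by how many rehab sessions each is allocated, not by the patients' underlying characteristics, mirroring the 'self-fulfilling prophecy' the study warns about.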
'These are real issues affecting AI development in the UK,' Prof Harrison said.
