ChatGPT Helps Solve Medical Mystery After Doctors Misdiagnose Woman's Cancer Symptoms

News18 | 25-04-2025

In February 2024, Lauren Bannon noticed she couldn't bend her little fingers properly. After months of testing, doctors diagnosed her with arthritis.
Lauren Bannon, from Newry, Northern Ireland, was shocked when ChatGPT helped expose a life-threatening health issue. In February 2024, Lauren noticed she couldn't bend her little fingers properly, which left her worried. After months of testing, doctors initially diagnosed her with rheumatoid arthritis, even though the test results were negative. Desperate for answers, she described her symptoms to ChatGPT, and the AI service suggested she might have Hashimoto's disease. Although her doctors weren't convinced, Lauren was determined to get tested, and in September 2024 the tests confirmed the AI's suggestion was correct. Further scans showed two lumps in her thyroid, which were confirmed as cancer in October.
Speaking with the Mirror, she said: "I felt let down by doctors. It was almost like they were just trying to give out medication for anything to get you in and out the door. I needed to find out what was happening to me, I just felt so desperate. I just wasn't getting the answers I needed. So that's when I pulled up ChatGPT. I already used it for work. I started typing what mimics rheumatoid arthritis and it popped up saying 'You may have Hashimoto's disease, ask your doctor to check your thyroid peroxidase antibody (TPO) levels.' So I went to my doctor and she told me I couldn't have that, there was no family history of it, but I said, 'Just amuse me.'"
In January 2025, Lauren Bannon had surgery to remove her thyroid and two lymph nodes from her neck. She now has to be checked regularly for the rest of her life to make sure the cancer doesn't come back. She believes that because her symptoms were not the usual signs of Hashimoto's disease, doctors might not have found the real problem in time. She didn't experience the fatigue or weakness that other patients typically do. Without ChatGPT's help, she believes she would have kept taking medication for an illness she didn't actually have.
"The doctor said I was very lucky to have caught it so early. I know for sure that cancer would've spread without using ChatGPT. It saved my life. I just knew that something was wrong with me. I would've never discovered this without ChatGPT. All my tests were perfect. I would encourage others to use ChatGPT with their health concerns, act with caution, but if it gives you something to look into, ask your doctors to test you. It can't do any harm. I feel lucky to be alive," Lauren added.
Speaking with Fox News, Dr Harvey Castro, an emergency doctor and AI expert from Dallas, said that tools like ChatGPT can be helpful because they make people more aware of their health. He believes AI can support doctors by giving helpful suggestions and information, but it should never replace real medical professionals, as it cannot examine a patient, make a final diagnosis or provide proper treatment. He feels that if AI is used the right way, it can improve healthcare, but relying on it completely can be risky.
First Published: April 25, 2025, 10:53 IST


Related Articles

Brave Chinese voices have begun to question the hype around AI
Mint | 3 hours ago

Against the odds, some in China are questioning the top-down push to get aboard the artificial intelligence (AI) hype bandwagon. In a tightly controlled media environment where these experts can easily be drowned out, it's important to listen to them.

Across the US and Europe, loud voices inside and outside the tech industry are urging caution about AI's rapid acceleration, pointing to labour market threats or more catastrophic risks. But in China, this chorus has been largely muted. Until now.

China has the highest global share of people who say AI tools have more benefits than drawbacks, and they've shown an eagerness to embrace it. It's hard to overstate the exuberance in the tech sector since the emergence of DeepSeek's market-moving reasoning model earlier this year. Innovations and updates have been unfurling at breakneck speed and the technology is being widely adopted across the country. But not everyone's on board.

Publicly, state-backed media has lauded the widespread adoption of DeepSeek across hundreds of hospitals in China. But a group of medical researchers tied to Tsinghua University published a paper in the medical journal JAMA in late April gently questioning whether this was happening "too fast, too soon". It argued that healthcare institutions are facing pressure from "social media discourse" to implement DeepSeek in order not to appear "technologically backward". Doctors are increasingly reporting patients who "present DeepSeek-generated treatment recommendations and insist on adherence to these AI-formulated care plans". The team argued that as much as AI has shown potential to help in the medical field, this rushed rollout carries risks. They are right to be cautious.

It's not just the doctors who are raising doubts. A separate paper from AI scientists at the same university found last month that some of the breakthroughs behind reasoning models, including DeepSeek's R1 as well as similar offerings from Western tech giants, may not be as revolutionary as some have claimed. They found that the novel training method used for this new crop "is not as powerful as previously believed". The method used to power them "doesn't enable the model to solve problems that the base model can't solve", one of the scientists added. This means the innovations underpinning what has been widely dubbed the next step toward achieving so-called Artificial General Intelligence may not be as much of a leap as some had hoped. This research from Tsinghua holds extra weight: the institution is one of the pillars of the domestic AI scene, long churning out both keystone research and ambitious startup founders.

Another easily overlooked word of warning came from a speech by Zhu Songchun, dean of the Beijing Institute for General Artificial Intelligence, linked to Peking University. Zhu said that for the nation to remain competitive, it needs more substantive research and fewer laudatory headlines, according to an in-depth English-language analysis of his remarks published by the independent China Media Project.

These cautious voices are a rare break from the broader narrative. But in a landscape where the deployment of AI has long been a government priority, that makes them especially noteworthy. The more President Xi Jinping signals that embracing AI technology is important, the less likely people are to publicly question it. This can lead to less overt forms of backlash, like social media hashtags on Weibo poking fun at chatbots' errors. Or it can result in data centres quietly sitting unused across the country as local governments race to please Beijing, as well as a mountain of PR stunts.

Perhaps the biggest headwind facing the sector, despite the massive amounts of spending, is that AI still hasn't altered the earnings outlooks at most Chinese tech firms. The money can't lie.

This doesn't mean that AI in China is just propaganda. The conflict extends far beyond its tech sector; US firms are also guilty of getting carried away promoting the technology. But multiple things can be true at once. It's undeniable that DeepSeek has fuelled new excitement, research and major developments across the AI ecosystem. But it's also been used as a distraction from the domestic macroeconomic pains that predated the ongoing trade war. Without guardrails, the risk of rushing out the technology is greater than just investors losing money; people's health is at stake. From Hangzhou to Silicon Valley, the more we ignore the voices questioning the AI hype bandwagon, the more we blind ourselves to the consequences of a potential derailment.

©Bloomberg. The author is a Bloomberg Opinion columnist covering Asia tech.

The Digital Shoulder: How AI chatbots are built to 'understand' you
Mint | 5 hours ago

As artificial intelligence (AI) chatbots become an inherent part of people's lives, more and more users are spending time chatting with these bots not just to streamline their professional or academic work but also to seek mental health advice. Some people have positive experiences that make AI seem like a low-cost therapist.

AI models are programmed to be smart and engaging, but they don't think like humans. ChatGPT and other generative AI models are like your phone's auto-complete text feature on steroids. They have learned to converse by reading text scraped from the internet. When a person asks a question (called a prompt) such as 'how can I stay calm during a stressful work meeting?', the AI forms a response by choosing words that are as close as possible to the data it saw during training. This happens very fast, and the responses seem quite relevant, which can often feel like talking to a real person, according to a PTI report. But these models are far from thinking like humans. They definitely are not trained mental health professionals who work under professional guidelines, follow a code of ethics, or hold professional registration, the report says.

When you prompt an AI system such as ChatGPT, it draws information from three main sources to respond: background knowledge it memorised during training, external information sources, and information you previously provided.

To develop an AI language model, the developers teach the model by having it read vast quantities of data in a process called 'training'. This information comes from publicly scraped sources, including everything from academic papers, eBooks, reports and free news articles to blogs, YouTube transcripts and comments from discussion forums such as Reddit. Since the information is captured at a single point in time when the AI is built, it may also be out of date. Many details also need to be discarded to squish them into the AI's 'memory'. This is partly why AI models are prone to hallucination and getting details wrong, as reported by PTI.

The AI developers might also connect the chatbot with external tools or knowledge sources, such as Google for searches or a curated database. Meanwhile, some dedicated mental health chatbots access therapy guides and materials to help direct conversations along helpful lines.

AI platforms also have access to information you have previously supplied in conversations or when signing up for the platform. On many chatbot platforms, anything you've ever said to an AI companion might be stored away for future reference. All of these details can be accessed by the AI and referenced when it responds.

These AI chatbots are overly friendly and validate all your thoughts, desires and dreams. They also tend to steer the conversation back to interests you have already discussed. This is unlike a professional therapist, who can draw from training and experience to help challenge or redirect your thinking where needed, PTI reported.

Most people are familiar with big models such as OpenAI's ChatGPT, Google's Gemini, or Microsoft's Copilot. These are general-purpose models; they are not limited to specific topics or trained to answer any specific questions. Developers have also made specialised AIs that are trained to discuss specific topics, like mental health, such as Woebot and Wysa. According to PTI, some studies show that these mental health-specific chatbots might be able to reduce users' anxiety and depression symptoms.

There is also some evidence that AI therapy and professional therapy deliver some equivalent mental health outcomes in the short term. Another important point to note is that these studies exclude participants who are suicidal or who have a severe psychotic disorder. And many studies are reportedly funded by the developers of the same chatbots, so the research may be biased.

Researchers are also identifying potential harms and mental health risks. One companion chat platform, for example, has been implicated in an ongoing legal case over a user's suicide, according to the PTI report.

At this stage, it's hard to say whether AI chatbots are reliable and safe enough to use as a stand-alone therapy option, but they may be a useful place to start when you're having a bad day and just need a chat. When the bad days continue to happen, though, it's time to talk to a professional as well. More research is needed to identify whether certain types of users are more at risk of the harms that AI chatbots might bring. It's also unclear if we need to be worried about emotional dependence, unhealthy attachment, worsening loneliness, or intensive use.

Do you talk to AI when you're feeling down? Here's where chatbots get their therapy advice
Mint | 7 hours ago

Brisbane, Jun 11 (The Conversation) As more and more people spend time chatting with artificial intelligence (AI) chatbots such as ChatGPT, the topic of mental health has naturally emerged. Some people have positive experiences that make AI seem like a low-cost therapist.

But AIs aren't therapists. They're smart and engaging, but they don't think like humans. ChatGPT and other generative AI models are like your phone's auto-complete text feature on steroids. They have learned to converse by reading text scraped from the internet. When someone asks a question (called a prompt) such as 'how can I stay calm during a stressful work meeting?', the AI forms a response by randomly choosing words that are as close as possible to the data it saw during training. This happens so fast, with responses that are so relevant, it can feel like talking to a person. But these models aren't people. And they definitely are not trained mental health professionals who work under professional guidelines, adhere to a code of ethics, or hold professional registration.

Where does it learn to talk about this stuff?

When you prompt an AI system such as ChatGPT, it draws information from three main sources to respond: background knowledge it memorised during training, external information sources, and information you previously provided.

1. Background knowledge memorised during training

To develop an AI language model, the developers teach the model by having it read vast quantities of data in a process called 'training'. Where does this information come from? Broadly speaking, anything that can be publicly scraped from the internet. This can include everything from academic papers, eBooks, reports and free news articles to blogs, YouTube transcripts, or comments from discussion forums such as Reddit. Are these sources reliable places to find mental health advice? Sometimes. Are they always in your best interest and filtered through a scientific, evidence-based approach? Not always. The information is also captured at a single point in time when the AI is built, so it may be out of date. A lot of detail also needs to be discarded to squish it into the AI's 'memory'. This is part of why AI models are prone to hallucination and getting details wrong.

2. External information sources

The AI developers might connect the chatbot itself with external tools or knowledge sources, such as Google for searches or a curated database. When you ask Microsoft's Bing Copilot a question and see numbered references in the answer, this indicates the AI has relied on an external search to get updated information in addition to what is stored in its memory. Meanwhile, some dedicated mental health chatbots are able to access therapy guides and materials to help direct conversations along helpful lines.

3. Information previously provided

AI platforms also have access to information you have previously supplied in conversations or when signing up to the platform. When you register for the companion AI platform Replika, for example, it learns your name, pronouns, age, preferred companion appearance and gender, IP address and location, the kind of device you are using, and more (as well as your credit card details). On many chatbot platforms, anything you've ever said to an AI companion might be stored away for future reference. All of these details can be dredged up and referenced when an AI responds.

And we know these AI systems are like friends who affirm what you say (a problem known as sycophancy) and steer conversation back to interests you have already discussed. This is unlike a professional therapist, who can draw from training and experience to help challenge or redirect your thinking where needed.

What about specific apps for mental health?

Most people would be familiar with the big models such as OpenAI's ChatGPT, Google's Gemini, or Microsoft's Copilot. These are general-purpose models. They are not limited to specific topics or trained to answer any specific questions. But developers can make specialised AIs that are trained to discuss specific topics, like mental health, such as Woebot and Wysa. Some studies show these mental health-specific chatbots might be able to reduce users' anxiety and depression symptoms, or that they can improve therapy techniques such as journalling by providing guidance. There is also some evidence that AI therapy and professional therapy deliver some equivalent mental health outcomes in the short term.

However, these studies have all examined short-term use. We do not yet know what impacts excessive or long-term chatbot use has on mental health. Many studies also exclude participants who are suicidal or who have a severe psychotic disorder. And many studies are funded by the developers of the same chatbots, so the research may be biased. Researchers are also identifying potential harms and mental health risks. One companion chat platform, for example, has been implicated in an ongoing legal case over a user's suicide.

This evidence all suggests AI chatbots may be an option to fill gaps where there is a shortage of mental health professionals, assist with referrals, or at least provide interim support between appointments or for people on waitlists. At this stage, it's hard to say whether AI chatbots are reliable and safe enough to use as a stand-alone therapy option. More research is needed to identify whether certain types of users are more at risk of the harms that AI chatbots might bring. It's also unclear if we need to be worried about emotional dependence, unhealthy attachment, worsening loneliness, or intensive use.

AI chatbots may be a useful place to start when you're having a bad day and just need a chat. But when the bad days continue to happen, it's time to talk to a professional as well. (The Conversation)
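The three sources described above (knowledge baked in during training, external lookups, and details the user has previously supplied) are often combined into a single prompt before the language model is called. The sketch below is a minimal, hypothetical illustration of that pattern only; the function names, the search_knowledge_base stub and the stored-profile format are assumptions for illustration, not the actual implementation of ChatGPT, Replika, Woebot or any other platform mentioned in these articles.

```python
# Hypothetical sketch of how a chatbot back end might assemble context
# from the three sources described in the article. Names and data are
# illustrative only; no real platform's API is being shown.

from dataclasses import dataclass, field
from typing import List


@dataclass
class UserProfile:
    # Source 3: information the user previously provided
    # (sign-up details and earlier messages).
    name: str
    past_messages: List[str] = field(default_factory=list)


def search_knowledge_base(query: str) -> str:
    # Source 2: an external tool or curated database. This stub stands in
    # for a web search or a library of therapy guides.
    return "Example retrieved snippet about managing meeting stress."


def build_prompt(user: UserProfile, question: str) -> str:
    # Source 1 (background knowledge) lives inside the model's trained
    # weights, so it is not assembled here; the prompt only packages
    # the retrieved material and the user-supplied details.
    retrieved = search_knowledge_base(question)
    history = "\n".join(user.past_messages[-5:])  # keep only recent turns
    return (
        f"Known user details: name={user.name}\n"
        f"Recent conversation:\n{history}\n"
        f"Retrieved reference material:\n{retrieved}\n"
        f"User question: {question}\n"
        "Answer helpfully and suggest professional help when appropriate."
    )


if __name__ == "__main__":
    user = UserProfile(name="Alex", past_messages=["I had a rough week at work."])
    print(build_prompt(user, "How can I stay calm during a stressful work meeting?"))
```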
