Latest news with #ValeriePrimeau

CBC
4 days ago
- Health
- CBC
Talk to medical professionals, not just ChatGPT, urge Ontario doctors
ChatGPT and similar artificial intelligence tools can sometimes answer patient questions accurately, but Canadian medical researchers caution that the information needs to be carefully checked before acting on it.

The researchers' advice comes as the Ontario Medical Association (OMA) hosted a media briefing this week discussing DIY information sources — from search engines to social media to chatbots — and their impacts, as well as what patients can do instead.

It's important to warn people now, said Dr. Valerie Primeau, a psychiatrist from North Bay who leads inpatient and community programs for mental health and addictions, because patients are increasingly turning to AI tools. The chatbots give convincing and empathetic results — but the information might be fake.

"I have patients now that talk to ChatGPT to get advice and have a conversation," Primeau said. "So I foresee that we will continue having this issue, and if we don't address it now and help people navigate this, they will struggle."

Dr. David D'Souza, a radiation oncologist in London, Ont., who leads clinical research into image-based treatments for cancer, said depending on how patients interpret what AI tells them, they could put off conventional treatments.

"A patient came to me asking if he should wait to have his cancer that was diagnosed treated in a few years because he believes that AI will customize cancer treatments for patients," D'Souza told reporters. "I had to convince him why he should have treatment now."

Given that consumers will use the tools, OMA president Dr. Zainab Abdurrahman advised that if a post claims "doctors have been hiding this from you," patients should check the websites of relevant specialist groups, such as provincial cancer care associations, to see whether they back it up. Fake ads, including AI-generated images, can also lead patients astray, warned Abdurrahman, who is also a clinical immunologist and allergist.

Lost nuance makes AI results harder to rely on

While the technology is progressing, today's chatbots routinely answer health queries with false information that appears authoritative. In one study, Dr. Benjamin Chin-Yee, an assistant professor in the pathology and lab medicine department at Western University, and his co-authors fed nearly 5,000 summaries of medical and scientific literature into AI large language models, including ChatGPT, and asked them to produce their own summaries. They found three-quarters of the AI versions missed key parts of carefully qualified statements.

For example, the journal article might say a drug was effective only in a certain group of patients, while the summary leaves out that key detail, said Chin-Yee, who is also a hematologist.

"The worry is that when that nuance in detail is lost, it can be misleading to practitioners who are trying to use that knowledge to impact their clinical practice."

Chin-Yee said AI is an active area of research that is rapidly changing, with newer models that are more human-like and user-friendly, but there can be drawbacks to relying on the tools alone.
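Chin-Yee's qualifier-loss finding lends itself to a small illustration. The sketch below is a minimal toy check, not the study's actual method: the qualifier phrase list and both example texts are invented for demonstration, and a real evaluation would need far more than substring matching.

```python
# Minimal sketch (not the study's method): flag qualifier phrases that appear
# in a source abstract but vanish from its AI-generated summary.
# The phrase list and both example texts are invented for illustration.

QUALIFIERS = [
    "only in", "in patients with", "did not reach",
    "subgroup", "preliminary", "small sample",
]

def dropped_qualifiers(abstract: str, summary: str) -> list[str]:
    """Return qualifier phrases present in the abstract but absent from the summary."""
    a, s = abstract.lower(), summary.lower()
    return [q for q in QUALIFIERS if q in a and q not in s]

abstract = ("The drug improved survival, but only in patients with "
            "early-stage disease; results in the broader cohort "
            "did not reach statistical significance.")
summary = "The drug improved survival."

print(dropped_qualifiers(abstract, summary))
# ['only in', 'in patients with', 'did not reach']
```

On the toy pair, the check flags the three hedging phrases that the shortened summary silently dropped, which is exactly the kind of lost nuance the study describes.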
"Without medical oversight, it's hard to 100 per cent trust some of these outputs of these generative technologies," Chen said, adding concerns about privacy, security, and patient trust still haven't been fully explored. WATCH | Researchers use AI to help treat brain patients: MUN's Faculty of Medicine is using A.I. to treat brain patients 4 months ago Duration 5:00 Artificial intelligence is expected to revolutionize health care. And at Memorial University's Faculty of Medicine, A.I. is already being used in the treatment of patients with various brain conditions. Neuroscience professor Michelle Ploughman showed the CBC's Carolyn Stokes around her lab at the Miller Centre and demonstrated how A.I. is changing patient care. Don't rely on a single chatbot Generative AI technologies like chatbots are based on pattern-matching technologies that give the most likely output to a given question, based on whatever information it was trained on. In medicine, though, unlikely possible diagnoses can also be important and shouldn't be ruled out. Plus, chatbots can hallucinate — produce outputs that sound convincing but are incorrect, made up, nonsensical or irrelevant. "There's also been research studies that have been put out that suggested that there are hallucination rates of these chat bots that can be upwards of 20 per cent," Chen said, which could make the output "clinically erroneous." In the spring, cardiologist Eric Topol, a professor and executive vice president of Scripps Research in San Diego, Calif., published a book, Superagers: An Evidence-Based Approach to Longevity, that looked at the impact of AI on longevity and quality of life. "There's a lot of good anecdotes, there's bad anecdotes," Topol said of patients using chatbots. "It hasn't been systematically assessed in a meaningful way for public use." Topol said he advises people to consult multiple chatbots and to check that you're getting reliable information. He also suggested asking for citations from the medical literature, noting sometimes those aren't real and need to be verified. Ideally, Topol said there would be a real-world test of chatbot responses from tens of thousands of people tracking what tests were done, what diagnosis was given and the outcomes for those who used AI sources and those who didn't. But tech companies are unlikely to participate because each one wouldn't gain, he said. "It's a different world now and you can't go back in time," Topol said of using the tools wisely.


Hamilton Spectator
6 days ago
- Health
- Hamilton Spectator
AI tools and doctor shortage leading to rise in DIY diagnosis, Ontario docs say
TORONTO - The advent of AI and a lack of access to primary care are feeding a rising trend in people trying to diagnose and treat themselves online, doctors say.

In an online media briefing hosted by the Ontario Medical Association on Wednesday, an endocrinologist, a cancer specialist and a psychiatrist all noted misinformation they frequently see online in their respective fields. The risks of DIY diagnosis include trying remedies posted on social media that are unsafe, delaying seeking legitimate medical treatment and being financially exploited by paying for things that don't work, they said.

'I foresee it continuing to happen more and more, especially with AI technology getting more and more available and more and more sophisticated,' said Dr. Valerie Primeau, a psychiatrist in North Bay, Ont. 'I have patients now that talk to ChatGPT to get advice.'

Dr. Zainab Abdurrahman, a clinical immunologist and president of the OMA, said the fact that so many people don't have a family doctor leaves a void in places where patients can get trustworthy health information — so many turn to online and AI sources.

'One of the places where you have a lot of trust because you've had a long relationship is your family doctor. And that's something that you can feel comfortable to bounce some of these ideas by,' Abdurrahman said. 'When you don't have that, you're often feeling like you're going to all these other sources and you're not able to necessarily check the credibility for these resources.'

Primeau said difficulty in accessing mental health care is another factor that drives people online and into potentially risky situations.

'The first concern obviously is misdiagnosis,' she said. 'When studies have looked at videos on social media, a lot of them are overly generalized, meaning they don't necessarily target a particular disorder, even though they say they do. They may portray inaccurately certain illnesses or provide misleading information.'

That in turn can lead to dangerous attempts to self-treat, Primeau said. 'Some patients, for example, have reported to me trying a medication from a friend, a family member, because they believe they suffered from the same illness, and it's led to side effects.'

Primeau said one of the most common trends she's seen is online tests that claim they can diagnose attention deficit hyperactivity disorder, or ADHD.

'If I ask everybody online today, 'have you ever been distracted or had trouble with organization or answering all your emails?' I think most people are going to say yes,' she said. 'There is a rising trend to self-diagnose with it because we recognize ourselves in the videos that are played about ADHD because in general, society is struggling with inattention.'

True ADHD is a neurodevelopmental disorder, Primeau said, and can be traced back to childhood. Some mental health issues, such as anxiety and depression, are more likely to be correctly self-diagnosed by taking online tests, but should still be confirmed by a professional, she said. Other conditions are especially prone to being diagnosed incorrectly, including bipolar disorder, Primeau said.

But going online to do mental health research is 'not all negative,' she said, noting that watching videos or listening to people talking about their illness and identifying with them can prompt patients to seek care for themselves. Social media groups can also offer peer support, she said.

Primeau encourages patients to share what they find online with their health-care provider.

'Patients want to feel that they have a say in the decisions that you make with them,' she said. 'When they come (to) me with already some opinion about what they might be suffering from or their treatment, first of all, I take the information.'

If the patient's research doesn't align with her professional diagnosis, Primeau has an open discussion with them and shows them evidence about why she came to that conclusion.

If people don't have a family doctor, Abdurrahman of the OMA said other ways to access credible health knowledge include going to a walk-in clinic or checking the websites of established medical institutions and associations — but emphasized the need to solve the primary care shortage as a better solution.

This report by The Canadian Press was first published July 9, 2025.

Canadian Press health coverage receives support through a partnership with the Canadian Medical Association. CP is solely responsible for this content.


Ottawa Citizen
6 days ago
- Health
- Ottawa Citizen
Ontario doctors alarmed by the rise of 'DIY medicine'
Patients are increasingly diagnosing and even treating themselves based on online advice, a trend that is raising alarm bells among Ontario doctors.

The Ontario Medical Association held a briefing this week to warn about the rise of so-called DIY medicine, something doctors say is causing harm to patients and is likely to get worse.

Earlier this year, the Canadian Medical Association reported results of a survey that found more Canadians are turning to social media for medical advice at a time when many are struggling to access health care.

The survey found that 62 per cent of Canadians have encountered health information they later found to be false or misleading – up eight per cent from a year earlier. Twenty-three per cent of those surveyed reported having a negative health reaction after following online health advice.

'In my experience, one patient out of three will bring up some form of self-diagnosis,' said Dr. Valerie Primeau, a psychiatrist from North Bay who leads inpatient and community programs for mental health and addictions.

Among common self-diagnoses is attention deficit hyperactivity disorder, a rising focus of social media posts.

Primeau and other physicians taking part in the briefing acknowledged there are many reasons patients are increasingly looking to the internet for answers to their health questions – a lack of access to medical care and the growing reliance on the internet among them.

She encourages her patients to talk to her about what they have read and seen.

'It can provide validation and a sense of community. It is important to take it into account if a patient brings it to your office, but I never encourage anyone to self-diagnose.'

Dr. David D'Souza, a radiation oncologist in London who leads clinical research into image-based treatments for cancer, said dealing with information and misinformation patients have found online is a routine part of his practice.

He has treated patients whose health suffered because of their reliance on information they saw on the internet, he said.

One patient who had been diagnosed with cervical cancer declined conventional treatment because she wanted to pursue other remedies she had learned about through the internet, he said. Two years later, he saw her again and her disease had spread. 'Our ability to control it and give her a good outcome was severely compromised.'


Toronto Sun
6 days ago
- Health
- Toronto Sun
AI tools and doctor shortage leading to rise in DIY diagnosis, Ontario docs say
Published Jul 09, 2025

Doctors say the advent of AI and a lack of access to primary care are feeding a rising trend in people trying to diagnose and treat themselves online.

They say the risks include trying remedies posted on social media that are unsafe, delaying seeking legitimate medical treatment and being financially exploited by paying for things that don't work.

In a media briefing hosted by the Ontario Medical Association, an endocrinologist, a cancer specialist and a psychiatrist all noted misinformation they frequently see online in their respective fields.

Psychiatrist Dr. Valerie Primeau says one of the most common trends is online tests that claim they can diagnose attention deficit hyperactivity disorder, or ADHD.

The doctors all urge people to ask a health-care provider about what they see online, but acknowledge that can be difficult to do with a shortage of family physicians.

The Ontario Medical Association says other ways to access credible health knowledge include going to a walk-in clinic or checking the websites of established medical institutions and associations. They say online research with credible sources can be positive and that social media can be a helpful way to get peer support.