Nurse helps man who fell 20 feet into canyon in Utah

CTV News · 3 days ago

A 64-year-old man is lucky to be alive after a nurse who was in the right place at the right time helped rescue him after he fell 20 feet into a canyon.

Related Articles

Researchers urge caution when using ChatGPT to self-diagnose illnesses

CTV News · 2 hours ago

Researchers examined the use of ChatGPT-4 to self-diagnose health problems.

As Canadians increasingly turn to artificial intelligence for quick answers about health problems, a new study warns that relying on tools like ChatGPT for self-diagnosis could be risky.

A team led by researchers at the University of Waterloo evaluated the performance of ChatGPT-4, a large language model (LLM) released by OpenAI. The chatbot was asked a series of open-ended medical questions based on scenarios modified from a medical licensing exam.

The findings were striking. Only 31 per cent of ChatGPT's responses were deemed entirely correct, and just 34 per cent were considered clear.

PhD student Troy Zada and Dr. Sirisha Rambhatla at the University of Waterloo are part of the research team.

'So, not that high,' said Troy Zada, a PhD student at the University of Waterloo who led the research team. 'If it is telling you that this is the right answer, even though it's wrong, that's a big problem, right?'

The researchers compared ChatGPT-4 with its earlier 3.5 version and found significant improvements, but not enough. In one example, the chatbot confidently diagnosed a patient's rash as a reaction to laundry detergent. In reality, it was caused by latex gloves, a key detail missed by the AI, which had been told the patient studied mortuary science and used gloves.

The researchers concluded that LLMs are not yet reliable enough to replace medical professionals and should be used with caution when it comes to health matters. This is despite studies that have found AI chatbots can best human doctors in certain situations and pass medical exams involving multiple-choice questions.

Zada said he's not suggesting people stop using ChatGPT for medical information, but they must be aware of its limitations and potential for misinformation. 'It could tell you everything is fine when there's actually a serious underlying issue,' said Zada. He says it could also offer up information that would make someone needlessly worry.

Millions of Canadians currently do not have a family doctor, and there are concerns some may be relying on artificial intelligence to diagnose health problems, even though AI chatbots often advise users to consult an actual doctor.

The researchers also noted the chatbots lack accountability, whereas a human doctor can face severe consequences for errors, such as having their licence revoked or being charged with medical malpractice.

While the researchers note ChatGPT did not get any of the answers spectacularly wrong, they have some simple advice. 'When you do get a response, be sure to validate that response,' said Zada.

Dr. Amrit Kirpalani agrees. He's a pediatric nephrologist and assistant professor at Western University who has studied AI in medicine and has noticed more patients and their family members bringing up AI platforms such as ChatGPT. He believes doctors should initiate conversations about its use with patients because some may be hesitant to talk about it. 'Nobody wants to tell their doctor that they went on ChatGPT and it told them something different,' says Kirpalani.

He'd prefer patients discuss a chatbot's response with a physician, especially since an AI can sometimes be even more persuasive than a human. 'I'm not sure I could be as convincing as an AI tool. They can explain some things in a much more simple and understandable way,' says Kirpalani. 'But the accuracy isn't always there. So it could be so convincing even when it's wrong.'

He likens AI to another familiar online tool. 'I kind of use the Wikipedia analogy of, it can be a great source of information, but it shouldn't be your primary source. It can be a jumping-off point.'

The researchers also acknowledge that as LLMs continue to improve, they could eventually be reliably used in a medical setting. But for now, Zada has this to say: 'Don't blindly accept the results.'
