
AI eroded doctors' ability to detect cancer within months in study

Business Standard, 16 hours ago
AI helped health professionals detect pre-cancerous growths in the colon more effectively, but when the assistance was removed, their ability to find tumors dropped by about 20 per cent.
Bloomberg
Artificial intelligence, touted for its potential to transform medicine, led to some doctors losing skills after just a few months in a new study.
Health-care systems around the world are embracing AI with a view to boosting patient outcomes and productivity. Just this year, the UK government announced £11 million ($14.8 million) in funding for a new trial to test how AI can help catch breast cancer earlier.
The researchers surveyed four endoscopy centers in Poland, comparing detection success rates in the three months before AI implementation with the three months after. Some colonoscopies were performed with AI assistance and some without, assigned at random. The results were published in The Lancet Gastroenterology & Hepatology.
The AI in the study probably prompted doctors to become over-reliant on its recommendations, 'leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,' the scientists said in the paper.
Yuichi Mori, a researcher at the University of Oslo and one of the scientists involved, predicted that the effects of de-skilling will 'probably be higher' as AI becomes more powerful.
What's more, the 19 doctors in the study were highly experienced, having performed more than 2,000 colonoscopies each. The effect on trainees or novices might be starker, said Omer Ahmad, a consultant gastroenterologist at University College Hospital London.
'Although AI continues to offer great promise to enhance clinical outcomes, we must also safeguard against the quiet erosion of fundamental skills required for high-quality endoscopy,' wrote Ahmad, who wasn't involved in the research, in a comment published alongside the article.

Related Articles

Man asks ChatGPT for health advice, lands in hospital with poisoning, psychosis

Hindustan Times, 20 minutes ago

A ChatGPT-prescribed diet led a man to poisoning and an involuntary psychiatric hold, People magazine reported. The incident has prompted researchers to flag the 'adverse health outcomes' that artificial intelligence (AI) can contribute to. The unidentified individual started having 'auditory and visual hallucinations' and even tried to escape the hospital.

Alarmed by the downsides of table salt, or sodium chloride, a 60-year-old man recently consulted ChatGPT for a substitute, according to an unusual case that appeared in the journal Annals of Internal Medicine: Clinical Cases this month. While researchers were later unable to retrieve the man's prompts to ChatGPT, the AI chatbot advised him to consume sodium bromide, as per People. Soon after he fell sick, the man rushed to a nearby hospital and claimed he had been poisoned. Following a blood report, the doctors immediately transferred him to a telemetry bed for monitoring.

Man became paranoid of water

As his health deteriorated, the man revealed he had taken dietary advice from ChatGPT and consumed sodium bromide. Although the 60-year-old was 'very thirsty', doctors found him to be 'paranoid about water.' After he started having 'auditory and visual hallucinations,' he ran amok and tried to escape, which ultimately forced the hospital staff to place him on an involuntary psychiatric hold. He was discharged after three weeks of treatment.

The US Centers for Disease Control and Prevention notes that bromide can be used in agriculture or as a fire suppressant. While there are no available cures for bromine poisoning, survivors may be left with long-term effects.

FAQs:

1. Should I consult AI for medical purposes? Researchers have found that consulting AI on some topics can lead to 'promulgating decontextualized information,' so you should always visit a licensed doctor for medical advice.

2. What is sodium bromide? Sodium bromide is an inorganic compound that resembles table salt. It can cause headaches, dizziness and even psychosis.

3. What happened to the man who took sodium bromide after consulting ChatGPT? He suffered from paranoia as well as auditory and visual hallucinations.

4. Are there cures for bromine poisoning? There are no available cures for bromine poisoning.

AI errors: RBI panel calls for 'tolerant supervision'

Time of India, 33 minutes ago

MUMBAI: An RBI panel examining the responsible use of AI in finance has urged regulators to adopt a "tolerant supervisory stance" towards mistakes made by AI systems. The idea is to allow institutions some leeway for first-time errors if they have adequate safety measures in place. The aim, the panel argues, is to encourage innovation rather than stifle it.

Such tolerance is justified, the report says, because AI is inherently probabilistic and non-deterministic. A strict liability regime that penalises every misstep could make developers overly cautious, limiting AI's ability to deliver novel solutions. This approach could be controversial, as it may be seen as shielding institutions at the expense of customers who suffer losses from AI errors.

The framework rests on seven "sutras": maintain trust; keep people in control; foster purposeful innovation; ensure fairness and inclusion; uphold accountability; design for transparency; and build secure, resilient, energy-efficient systems that can detect and prevent harm.

Its 26 recommendations span building better data infrastructure, creating sandboxes for AI testing, and developing indigenous models to help smaller players. Regulators are advised to draft flexible rules and apply liability proportionately. Banks are told to adopt board-approved AI policies, implement strong data governance, and safeguard customers through transparency, effective grievance systems, and robust cybersecurity. Continuous monitoring, public reporting, and sector-wide oversight are proposed to keep AI use safe and credible.

RBI Panel Proposes Fund To Build Homegrown AI Framework For Finance Sector

NDTV, 35 minutes ago

Mumbai: A Reserve Bank of India (RBI) committee has recommended a framework for developing AI capabilities for the country's financial sector while safeguarding it against associated risks, according to a report released on Wednesday.

The committee has recommended setting up digital infrastructure to help build indigenous AI models and a multi-stakeholder standing committee to evaluate risks and opportunities. It also suggested building a fund to incentivise the development of homegrown AI models tailored to the needs of India's financial services sector. "The report envisions a financial ecosystem where encouraging innovation is in harmony, and not at odds, with mitigation of risk," the RBI said in a statement.

The report contains 26 recommendations under six categories: infrastructure, capacity, policy, governance, protection and assurance. Other key recommendations by the eight-member committee, headed by Pushpak Bhattacharyya, a computer scientist at IIT Bombay, include issuing an enabling framework to integrate AI with existing digital public platforms such as the instant payment system UPI, and designing audit frameworks.

The central bank had set up the committee in December to develop a Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREEAI) for the finance sector. "The challenge with regulating AI is in striking the right balance, making sure that society stands to gain from what this technology has to offer, while mitigating its risks," according to the report.
