Latest news with #deskilling


Free Malaysia Today
3 days ago
- Health
- Free Malaysia Today
Could using AI tools 'deskill' certain health workers?
Experts say the impact of AI on healthcare workers' capabilities could be concerning, given the rapid adoption rate of such tools in medicine.

LONDON: Using artificial intelligence regularly could potentially 'deskill' some health workers, a new study suggests – a finding that is 'concerning' given the rapid uptake of AI across healthcare, academics say.

The new study involves endoscopists, who perform colonoscopies to look for precancerous growths in the bowel. These growths, known as adenomas, can be detected and removed to prevent them from becoming bowel cancer. Advances in AI technology have led to the development of computer-assisted polyp-detection systems, which have been found to help these health workers spot more adenomas.

By analysing work conducted in Poland, the research team set out to assess whether continuous use of AI led to a dip in performance when endoscopists performed colonoscopies without the help of the technology. To that end, the team analysed 1,442 colonoscopies conducted by experienced health workers before and after AI systems were rolled out at certain centres in 2021.

Writing in the journal The Lancet Gastroenterology & Hepatology, the experts reported that the rate of polyp detection was 6 percentage points lower after the introduction of AI in standard, non-AI-assisted colonoscopies.

'To our knowledge, this is the first study to suggest a negative impact of regular AI use on the ability of healthcare professionals to complete a patient-relevant task in medicine of any kind,' said Dr Marcin Romanczyk from the Academy of Silesia in Poland, one of the study authors. 'Our results are concerning, given that the adoption of AI in medicine is rapidly spreading,' he highlighted.
'We urgently need more research into the impact of AI on health professionals' skills across different medical fields, find out which factors may cause or contribute to problems when healthcare professionals and AI systems don't work well together, and develop ways to fix or improve these interactions.'

In a linked editorial, Dr Omer Ahmad, consultant gastroenterologist and clinical research fellow at University College London, wrote: 'These findings temper the current enthusiasm for rapid adoption of AI-based technologies, and highlight the importance of carefully considering possible unintended clinical consequences. The study provides the first real-world clinical evidence for the phenomenon of deskilling, potentially affecting patient-related outcomes.'


Telegraph
6 days ago
- Health
- Telegraph
Reliance on AI is making health workers less capable
Relying on artificial intelligence could be making health workers less capable, a study has suggested. It is the first research to assess 'the phenomenon of deskilling' in medics who have been using AI.

The study followed healthcare workers in Poland carrying out a diagnostic test for the early signs of bowel cancer. Endoscopists, who can include doctors or specially trained nurses, perform a test called a colonoscopy to look for precancerous growths in the bowel known as adenomas. These precancerous growths can be detected and removed, preventing them from becoming bowel cancer.

AI oversight

Advances in AI technology have led to the development of computer-assisted polyp-detection systems, which have been found to help these health workers spot more adenomas. Experts wanted to assess whether the continuous use of AI to help in the procedure had led to a dip in results when endoscopists performed the colonoscopies without AI.

The research team analysed 1,442 colonoscopies conducted by 19 experienced health workers before and after AI systems were rolled out at certain centres in 2021. The study, published in The Lancet Gastroenterology & Hepatology, found that the rate of detection of the precancerous polyps was 6 percentage points lower after the introduction of AI in standard, non-AI-assisted colonoscopies.

The average rate of detection of the growths during colonoscopies was 28.4 per cent before AI began to be used. After this time, the rate fell to 22.4 per cent among medics conducting non-AI colonoscopies. At the same time, the procedures that were AI-assisted had a detection rate of 25.3 per cent.

Researchers said the finding was 'concerning', given the rapid uptake of AI across healthcare.
Urgent research needed

Dr Marcin Romanczyk, one of the study authors from the Academy of Silesia in Poland, said: 'To our knowledge this is the first study to suggest a negative impact of regular AI use on healthcare professionals' ability to complete a patient-relevant task in medicine of any kind. Our results are concerning given the adoption of AI in medicine is rapidly spreading.

'We urgently need more research into the impact of AI on health professionals' skills across different medical fields. We need to find out which factors may cause or contribute to problems when healthcare professionals and AI systems don't work well together, and to develop ways to fix or improve these interactions.'

Professor Yuichi Mori, a study author from the University of Oslo in Norway, said the findings 'pose an interesting question about previous trials' that found detection rates were higher with the help of AI than without it during such tests. He said: 'The endoscopists in the trials may have been negatively affected by continuous AI exposure.'

In a linked editorial, Omer Ahmad, a consultant gastroenterologist, interventional endoscopist and clinical research fellow at University College London, wrote: 'These findings temper the current enthusiasm for rapid adoption of AI-based technologies such as CADe and highlight the importance of carefully considering possible unintended clinical consequences. The study provides the first real-world clinical evidence for the phenomenon of deskilling, potentially affecting patient-related outcomes.'
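To make the headline figure concrete, here is a quick back-of-the-envelope calculation using only the detection rates quoted in the article above: the reported '6 per cent' drop is an absolute difference of 6 percentage points (28.4 to 22.4), which works out to roughly a 21 per cent relative decline.

```python
# Adenoma detection rates as quoted in the article (percentages).
rate_before = 28.4        # before AI rollout
rate_after_no_ai = 22.4   # non-AI-assisted colonoscopies after rollout
rate_after_ai = 25.3      # AI-assisted colonoscopies after rollout

# The headline figure is an absolute difference in percentage points:
drop_points = rate_before - rate_after_no_ai
print(f"Absolute drop: {drop_points:.1f} percentage points")  # 6.0

# Expressed as a relative change, the decline is steeper:
relative_drop = drop_points / rate_before * 100
print(f"Relative drop: {relative_drop:.1f}%")  # 21.1
```

Note the distinction: a fall from 28.4% to 22.4% is 6 percentage points in absolute terms, but about a fifth of the baseline detection rate in relative terms.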


Telegraph
03-08-2025
- Business
- Telegraph
The AI revolution is here to make you stupid
Despite the downsides, AI offers a seductive promise to companies driven by bottom lines: cost-cutting. Yet what may flatter the balance sheet in the short term could cost them in future.

'Managers tend to systematically underestimate the expertise needed to do the work of their employees, meaning that they may classify more work as replaceable or deskilled than is appropriate,' one meta-study led by Professor Kevin Crowston of Syracuse University cautioned. In seven cases studied by Crowston where AI had been used, six experienced some deskilling, even alongside upskilling and efficiency gains.

An examination of call-centre staff in the study Generative AI at Work, led by Erik Brynjolfsson, showed greater gains amongst the lower skilled, penalising the higher skilled. If managers are metric-obsessed, they'll be tempted to dispense with the skilled staff more quickly. Once again, the firm deploying the AI becomes less capable and more stupid.

In a widely circulated essay that went viral this spring, called The Co-Pilot Delusion, a highly skilled software developer described his experience with an AI assistant he knew was deeply flawed. 'I got lazy. Of course I did,' he confessed. 'When the system forces you to code with a hallucinating clown, eventually you stop resisting. You let him type. You let him be 'productive'. You check out. You surrender your brain to the noise and just float.'

Warp speed warning

So what to do? In Forster's story, when The Machine glitches, people are so in awe of it that they treat the issues not as a crisis but as divine wisdom. They've turned it into a deity. Are we doing the same with generative AI? The novelist Ewan Morrison, whose new thriller For Emma revolves around a fatal AI experiment, thinks so. 'I think we in our naivety have bought into all the hype,' he says. 'But integrating AI into healthcare, the military and education means introducing something with an error rate of between 33pc and 90pc.
The Government is introducing factual errors into everything it touches.'

Such warnings are falling on deaf ears. Tech companies are in a hurry and have found a willing servant eager to do their bidding in the Labour Government. AI will cut waiting times, identify bottlenecks and even make services 'feel more human', the Government claims in its AI Action Plan. We have even been told that the technology can stop prison riots before they even start, such is its omniscient power.

MIT's Kosmyna is sceptical. 'Generative AIs do not show objectively any gain in productivity, any gain in scientific discovery or any gain in employee performance – but we are told we have to implement them in such an aggressive manner,' she says. 'What are we afraid of missing out on, exactly?'

While Britain is embracing AI at warp speed, there is a healthier scepticism in Asia than in the technocrat-driven West. 'Tony Blair has been convinced there's a ghost in the machine, but the Chinese, and in Singapore, they don't believe that for one minute,' says Georg Zoeller, a former Facebook engineer based in Singapore who advises governments and is also VP of technology at a healthcare start-up. He adds: 'Eighty per cent of decision makers and people crafting the laws in China are Stem [science, technology, engineering and mathematics] graduates who understand the technology, and the industry is being regulated by the best people, and they are integrated into both policy and technology.'

James Woudhuysen, visiting professor of forecasting at South Bank University, agrees with Zoeller that the quality of our policy elites makes them reluctant to assess societal and human harm. 'There are many more engineers in the upper echelons of Chinese society who understand technology, and understand what AI really is, than there are in Britain,' he says. 'The tendency to personalise or anthropomorphise AI, to see it as a constant and wise friend – that's a Blair legacy.
They don't understand technology at all.'

A decade ago, the Finnish accountants realised there was something precious in the corporate ether – the company's value was in its intangible knowledge capital. It could not be replicated by software, even if the daily tasks could be.

Perhaps if we refuse to believe AI is magic, we'll be wiser about its obvious and not-so-obvious flaws. Unfortunately, policymakers in the West have been overtaken by a desire to make machines seem magical. If we're getting dumber, then we can hardly blame the AI for that. We've done it to ourselves.
Yahoo
22-06-2025
- Business
- Yahoo
Mistral AI CEO says AI's biggest threat is people getting lazy
Mistral AI CEO Arthur Mensch says warnings about AI's impact on white-collar workers are overblown. He says the biggest risk AI poses to humans is "deskilling." Mensch said humans needed to remain actively involved in reviewing AI output to keep learning.

As tech leaders continue to debate the potential impact of artificial intelligence on the job market, one CEO says the technology's biggest risk may be "deskilling." In an interview with The Times of London, Arthur Mensch, the CEO of the Paris-based firm Mistral AI, dismissed the idea that AI would lead to huge cuts to white-collar jobs, saying the bigger risk was that people may become progressively lazier as they increasingly rely on the tech to search for information.

Speaking to the outlet at the VivaTech conference in Paris earlier this month, Mensch, who cofounded the open-source large language model developer alongside Guillaume Lample and Timothée Lacroix in April 2023, said that a key way to avoid this would be to ensure humans remained actively involved in reviewing and critiquing AI output. "It's a risk that you can avoid, if you think of it from a design perspective, if you make sure that you have the right human input, that you keep the human active," he said, adding that he believed it was important humans did not take AI output as the "truth."

"You want people to continue learning," he continued. "Being able to synthesize information and criticize information is a core component to learning."

Mensch, a former Google DeepMind researcher, also responded to recent warnings that AI poses a threat to white-collar jobs, including from Anthropic CEO Dario Amodei, who recently said that AI could replace half of all entry-level white-collar jobs within five years. "I think it's very much of an overstatement," Mensch said, adding that he believed Amodei liked to "spread fear" about AI as a marketing tactic. Instead, Mensch said he thought AI would change white-collar jobs.
"I do expect that we'll have more relational tasks because that's not something you can easily replace," he said. Read the original article on Business Insider