
Using AI Made Doctors Worse at Spotting Cancer Without Assistance
A new study has found that doctors who regularly used AI became less skilled within months when working without it.
The study, which was published on Wednesday in The Lancet Gastroenterology & Hepatology, found that over the course of six months, clinicians became over-reliant on AI recommendations and grew 'less motivated, less focused, and less responsible when making cognitive decisions without AI assistance.'
It's the latest study to point to potential adverse effects of AI on its users. An earlier study by the Massachusetts Institute of Technology found that using ChatGPT eroded critical thinking skills.
How the study was conducted
Researchers across various European institutions conducted an observational study surveying four endoscopy centers in Poland that participated in the Artificial Intelligence in Colonoscopy for Cancer Prevention (ACCEPT) trial. The study was funded by the European Commission and Japan Society for the Promotion of Science.
As part of the trial, the centers had introduced AI tools for the detection of polyps—growths that can be benign or cancerous—in late 2021. The study looked at 1,443 non-AI-assisted colonoscopies out of a total 2,177 colonoscopies conducted between September 2021 and March 2022. The colonoscopies were performed by 19 experienced endoscopists.
Researchers compared the quality of colonoscopies conducted in the three months before and the three months after AI was implemented. Colonoscopies were assigned at random to be performed either with or without AI assistance. Of those performed without AI assistance, 795 took place before the AI tools were introduced and 648 afterward.
What the study found
Three months before AI was introduced, the adenoma detection rate (ADR) was around 28%. Three months after AI was introduced, the rate dropped to 22% when clinicians were unassisted by AI. ADR is a commonly used quality indicator for colonoscopies and represents 'the proportion of screening colonoscopies performed by a physician that detect at least one histologically confirmed colorectal adenoma or adenocarcinoma.' Adenomas are precancerous growths, and a higher ADR is associated with a lower risk of colorectal cancer.
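As a rough illustration of the metric, the sketch below computes ADRs close to the rates reported above. The denominators are the non-AI-assisted case counts cited earlier (795 and 648); the numerators are hypothetical values chosen only to reproduce roughly 28% and 22%, not figures taken from the study.

```python
# Minimal sketch of how an adenoma detection rate (ADR) is calculated.
# Denominators match the non-AI-assisted colonoscopy counts cited in this
# article; the numerators are hypothetical and chosen only for illustration.

def adenoma_detection_rate(detections: int, screenings: int) -> float:
    """Share of screening colonoscopies that find at least one adenoma."""
    return detections / screenings

adr_before = adenoma_detection_rate(223, 795)  # hypothetical count: ~28%
adr_after = adenoma_detection_rate(143, 648)   # hypothetical count: ~22%

print(f"ADR before AI introduction (unassisted): {adr_before:.1%}")
print(f"ADR after AI introduction (unassisted):  {adr_after:.1%}")
print(f"Absolute decline: {adr_before - adr_after:.1%}")
```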
The study found that AI did help endoscopists with detection when used, but once the assistance was removed, clinicians were worse at detection.
Researchers attributed the decline to 'the natural human tendency to over-rely' on the recommendations of decision support systems like AI.
'Imagine that you want to travel anywhere, and you're unable to use Google Maps,' Marcin Romańczyk, co-author of the study and an assistant professor at the Medical University of Silesia, told MedPage Today. 'We call it the Google Maps effect. We try to get somewhere, and it's impossible to use a regular map. It works very similarly.'
Implications of the study
Omer Ahmad, a consultant gastroenterologist at University College Hospital London who wrote an editorial alongside the study but was not involved in its research, tells TIME that it's likely that exposure to AI weakened doctors' visual search habits and altered their gaze patterns, which are critical for detecting polyps.
'In essence, dependence on AI detection could dull human pattern recognition,' Ahmad says. He adds that regular use of AI could 'reduce diagnostic confidence' when assistance is withdrawn, or erode endoscopists' skill at maneuvering the colonoscope.
In comments to the Science Media Center (SMC), Catherine Menon, principal lecturer at the University of Hertfordshire's Department of Computer Science, said: 'Although de-skilling resulting from AI use has been raised as a theoretical risk in previous studies, this study is the first to present real-world data that might potentially indicate de-skilling arising from the use of AI in diagnostic colonoscopies.' Menon also raised the concern that overreliance on AI could leave health practitioners exposed if the technology were ever compromised.
Other experts are more cautious about drawing conclusions from a single study.
Venet Osmani, a professor of clinical AI and machine learning at Queen Mary University of London, noted to SMC that the total number of colonoscopies—including both AI-assisted and non-AI-assisted ones—increased over the course of the study. The increased workload, Osmani suggested, could have led to clinician fatigue and poorer detection rates.
Allan Tucker, a professor of artificial intelligence at Brunel University of London, also noted that clinician performance improved overall when AI assistance was in use. The concern about deskilling due to automation bias, Tucker told SMC, 'is not unique to AI systems and is a risk with the introduction of any new technology.'
'The ethical question then is whether we trust AI over humans,' said Tucker. 'Often, we expect there to be a human overseeing all AI decision-making but if the human experts are putting less effort into their own decisions as a result of introducing AI systems this could be problematic.'
'This is not simply about monitoring technology,' says Ahmad. 'It's about navigating the complexities of a new human-AI clinical ecosystem.' Establishing safeguards is critical, he adds, suggesting that beyond this study, people may need to focus on 'preserving essential skills in a world where AI becomes ubiquitous.'