May 30, 2025
AI-based mammography is here, and it has a trust problem
Enthusiasm is growing for the technology, as prospective trials conducted in Europe suggest that some AI tools can detect more cancers than radiologists alone. But beyond radiology networks that are implementing their own algorithms, many imaging centers and the radiologists who work for them are still skeptical.
'It'll take time for us to really gain a lot of trust and confidence in it,' said Manisha Bahl, a breast radiologist at Massachusetts General Hospital, which is currently testing several AI-based tools for mammography in a head-to-head study.
One large study captured that skepticism in action.
Radiologists were much less likely to call patients for follow-up when their mammogram was flagged by the AI than when it was flagged by a human radiologist — even though the AI-flagged cases were more likely to be cancerous. When both members of a human-AI pair flagged a case for possible malignancy, only 39 percent of patients were called back. When two radiologists flagged a case, callbacks rose to 57 percent. The researchers used technology from South Korean developer Lunit, which also partly funded the study.
'Providing a superior performing algorithm doesn't actually necessarily improve human performance,' said Adam Rodman, a clinical reasoning researcher and practicing internist who co-directs the iMED initiative at Beth Israel Deaconess Medical Center. 'This study does a really good job at explaining why that is, and it's trust.'
It's a common refrain for breast radiologists, who have been burned by technology before. Computer-aided detection tools for mammograms entered widespread use in the early 2000s, but in the long run, they never improved cancer detection or recall rates. 'We have to be very careful what we do with AI once it's out in the wild,' said Etta Pisano, chief research officer at the American College of Radiology, at the meeting of the Radiological Society of North America in December.
Performance for breast cancer AI — which marks or annotates suspicious lesions on a mammogram and provides a score or some indication of the likelihood of malignancy — is typically reported based on how well it identifies cancers in databases of old mammograms that have been previously screened by human radiologists. But in practice, that performance can vary significantly from center to center and radiologist to radiologist.
The radiologist's level of trust and the way the software is integrated into their workflow both shape how a tool performs in practice.
With all those variables, how can a radiologist make sure that they're using AI the most effective way? 'The answer is, nobody knows,' said Rodman.
Normally, Bahl, the radiologist at Mass General, opens up a mammogram and interprets it entirely on her own before she activates any AI-based image processing. That's to cut down on the risk of automation bias — where she learns to rely too much on the machine. That approach makes sense for radiologists who trust their professional judgment more than a new tool, and it aligns with surveys of patients that suggest they typically want a doctor making the final call on image analysis, said Sanjay Aneja, a radiation oncologist at Yale Cancer Center who studies mammography AI.
But that approach doesn't fulfill AI's other main promise: the potential for efficiency. 'If anything, it might be a little bit less efficient,' said Aneja. With skilled radiologists in short supply, AI will ideally help them work smarter and faster — which means looking at the algorithm's output upfront.
At radiology practices that are more aggressively leaning into mammogram AI, 'radiologists gain confidence and start to look earlier in the process,' said Chris McKinney, director of North American sales for Lunit.
That kind of comfort doesn't emerge unless radiologists are getting regular practice with a new system — and adoption of the tools is inconsistent, since they aren't reimbursed by insurance. Instead, most practices deploying the tools ask patients to pay a cash fee of $40 to $90 for an AI add-on. At the top-adopting practice within Rezolut, a network of more than 40 US radiology practices that use Lunit's breast AI system, about 40 percent of patients opt to pay.
'Every radiologist is going to have to work with the AI product and get more comfortable with it and then utilize it individually,' said Stamatia Destounis, chair of the American College of Radiology's breast imaging commission. 'I don't think everyone's going to use it the same way.'
But 'practice makes perfect' isn't enough for many AI researchers and physicians, who want more explicit guidance when it comes to appropriate deployment of these tools as they become more widely used. 'The problem is that there's actually very little guardrails with a lot of these devices as they get put out there,' said Aneja.
Imaging quality can vary significantly between radiology practices, for example. 'There's very few algorithms that say, "This image is of poor quality, can't evaluate it,"' said Aneja. 'We want an algorithm to be able to say what they don't know.'
Chiara Corti, an oncologist from Italy and clinical fellow at Dana-Farber, calls for more disclosure of the race and ethnicity of patients whose mammograms were used to train AI tools, to ensure accuracy across groups.
The performance — and relative value — of an AI tool also depends on the radiologists who are using it. Not every radiologist is an expert in reading mammograms, and 'one of the biggest benefits of AI is to disseminate that level of knowledge,' said Aneja.
To ensure radiology practices use this new generation of algorithms in a way that drives better cancer outcomes, more real-world, prospective research is needed in the U.S. 'We need to see those human interactions,' said Destounis. 'It's impossible to deduce from a study going back in time how every radiologist is going to interact with the AI system in practice, daily.'
That kind of real-world evidence is one of the goals of a program out of ARPA-H called ACTR, for Advancing Clinical Trial Readiness. Pisano, the program's director, said at the December RSNA meeting that ACTR planned to test AI products in real-world, pragmatic trials out of imaging centers across the country, most likely starting with trials of breast cancer screening algorithms that could include hundreds of thousands of mammograms.
The trials, she said at the time, were planned to start this spring. They have yet to be announced.