Can AI help detect breast cancer, and is it accurate?
Artificial intelligence (AI) may help detect breast cancer earlier and more accurately than traditional methods alone. It may also help predict a person's risk of developing breast cancer.

Health professionals use imaging scans such as mammograms and breast ultrasounds to screen people for breast cancer, which can help with early detection. They may also assess a person's family history, genetics, and other factors to help determine their risk of developing the disease.

Recent studies suggest that AI could help health professionals detect breast cancer more quickly and accurately than with traditional screening methods alone. The technology may also help predict a person's risk of breast cancer with greater precision.

How does AI help with breast cancer detection?

AI developers can train computer systems to recognize, interpret, and analyze patterns in data. For breast cancer detection, technicians feed the systems large data sets of mammograms to learn from. The software uses this data to create an algorithm that outlines the characteristics of mammograms with and without cancer. The system can then compare new images against the algorithm to help identify abnormalities.
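To make the pattern-learning idea concrete, here is a minimal sketch of the kind of image classifier described above, written in Python with PyTorch. Everything in it is an illustrative assumption rather than a description of any clinical system: the tiny network, the 128x128 single-channel input, and the two-class "benign vs. suspicious" output stand in for real screening models, which are far larger and trained on curated, expert-labeled mammogram data sets.

```python
# Illustrative sketch only: a tiny convolutional classifier of the kind
# used to flag suspicious mammogram regions. Sizes and classes are assumed.
import torch
import torch.nn as nn

class MammogramClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale input
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 128x128 -> 64x64
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
        )
        self.classifier = nn.Linear(32 * 32 * 32, 2)      # benign vs. suspicious

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = MammogramClassifier()
dummy_batch = torch.randn(4, 1, 128, 128)  # random stand-in for mammogram crops
logits = model(dummy_batch)
probabilities = torch.softmax(logits, dim=1)  # per-image "suspicious" scores
print(probabilities)
```

In a real system, the learned probabilities would be thresholded and reviewed by a radiologist rather than acted on directly.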
How accurate is AI in detecting breast cancer?

Research has found that AI could help detect breast cancer with similar or greater accuracy than radiologists alone.

In a recent Swedish study, AI-supported screening detected cancer in 244 women after analyzing 39,996 mammograms. In a separate group, two radiologists each used traditional screening methods to analyze a different set of 40,024 mammograms, detecting cancer in 203 women. The false positive rate was 1.5% in both groups, meaning AI and radiologists alike mistakenly flagged breast cancer in 1.5% of the mammograms they analyzed. While the detection rates were similar, the AI-supported method reduced the radiologists' workload, cutting their screen-reading time by 44%.

A 2025 meta-analysis of eight studies indicated that AI techniques could detect breast cancer with better overall accuracy than radiologists. However, the researchers also highlighted the current limitations of AI screening, including the technology sometimes failing to identify visible lesions or to interpret ambiguous results as radiologists can. They suggest that combining AI with traditional radiology may produce the most accurate and effective breast cancer detection.

Researchers of a 2022 study agree that AI should support, rather than replace, radiologists. Their results indicate that a radiologist working with AI could detect breast cancer 2.6% more accurately than a radiologist alone.

Can AI detect early breast cancer?

Early detection and treatment can significantly improve a person's outlook. The survival rate is almost 100% for the earliest stages of breast cancer and declines to 22% at stage IV. Traditional screening mammograms miss about 20% of breast cancers, according to the National Cancer Institute.

AI-supported screening may help reduce false negative results and help identify breast cancer earlier, as research suggests it could improve overall screening accuracy. However, more research is necessary to understand the reliability and implications of the technology.

Learn more: How can people detect breast cancer early?

Can AI assess individual breast cancer risk?

Research suggests AI may be able to effectively assess a person's risk of developing breast cancer.

Health professionals typically use tools such as the Breast Cancer Risk Assessment Tool (BCRAT) or the Breast Cancer Surveillance Consortium (BCSC) Risk Calculator to estimate a person's likelihood of developing the disease. These tools calculate an individual's risk from several factors, including their age, race and ethnicity, and personal and family medical histories.

A recent study found that AI may be able to predict a person's breast cancer risk without these factors. In the study, AI systems used mammogram images to predict people's risk of developing breast cancer more accurately than the BCSC risk model could. The researchers found that combining AI and the BCSC model achieved the most accurate results.
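As a rough illustration of how an image-based AI score might be combined with a clinical risk estimate, here is a small Python sketch using scikit-learn. It is a minimal "late fusion" example under stated assumptions: the scores, outcomes, and the logistic-regression combination are all invented for demonstration, and the study above does not specify how its combination was actually performed.

```python
# Illustrative sketch only: fusing a hypothetical AI image score with a
# hypothetical BCSC-style clinical risk score via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-patient scores: column 0 is an AI score derived from
# mammogram images, column 1 is a clinical risk estimate (both in [0, 1]).
scores = np.array([
    [0.12, 0.05],
    [0.81, 0.40],
    [0.33, 0.22],
    [0.90, 0.75],
    [0.07, 0.10],
    [0.64, 0.58],
])
outcomes = np.array([0, 1, 0, 1, 0, 1])  # 1 = later diagnosed (made-up labels)

# Fit a tiny logistic model that learns how much weight to give each score.
fusion = LogisticRegression().fit(scores, outcomes)
combined_risk = fusion.predict_proba(scores)[:, 1]
print(combined_risk.round(2))
```

The design idea is simply that two imperfect but partly independent signals can outperform either one alone, which is consistent with the combined-model result the study reports.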
Are there challenges in using AI for breast cancer detection?

AI offers potentially significant advances in breast cancer detection. However, AI systems lack standardization and rigorous regulatory and ethical guidelines, and they may present several challenges for researchers and health professionals. These include:

Research challenges: AI algorithms often do not generalize well and can involve large numbers of variables. This may affect how consistently AI models perform and how reliable their output is. Scientists need more evidence from large-scale studies to assess how safe, accurate, and reliable the technology is for breast cancer detection in real-world clinical settings.

The 'black box' problem: Scientists may refer to AI algorithms as black boxes because humans cannot always understand the patterns the models find or the decisions they make. This could lead to AI making mistakes that scientists cannot predict, detect, or explain.

Ethical concerns: The use of AI raises various ethical issues, including the risk of contributing to health disparities and the effect on healthcare professionals whom AI systems may replace.

Economic challenges: Developing and maintaining AI algorithms and infrastructure involves substantial ongoing costs.

Is a person's data secure when AI screens for breast cancer?

Complex legal, regulatory, and technological challenges may affect a person's data security when AI is used for breast cancer screening. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) protects people's health information and health privacy rights. As a new and evolving technology, however, AI presents legal and regulatory challenges that may affect health data security.

HIPAA permits the disclosure of 'de-identified' protected health information, which involves removing information that could identify an individual or link them to the data. However, researchers suggest that AI in healthcare may create opportunities for re-identification, which could link sensitive and private information back to specific individuals. Regulatory bodies, healthcare professionals, and AI developers may continue to determine and implement safety measures as the technology progresses.

Can AI help reduce breast cancer disparities?

Breast cancer disparities can prevent certain groups from receiving equitable screening and treatment. These disparities persist due to various factors, including racism, the underrepresentation of certain groups in clinical trials, and a lack of access to care. They affect Black women especially severely: Black women have a 38% higher mortality rate than white women, despite having a lower incidence of the disease.

Researchers suggest that AI is vulnerable to bias and may contribute to and exacerbate existing racial disparities in healthcare, including in breast cancer screening. Evidence suggests that AI reflects human bias because it learns from the data that people provide, and human choices may influence AI systems to perform in exploitative or discriminatory ways. AI may help reduce racial disparities in breast cancer screening if the people who use the technology actively counter existing healthcare biases, for example through ethical AI programs and the inclusion of diverse data sets.

Summary

Researchers tend to agree that, for the most accurate results, AI breast cancer screening should be integrated to support, rather than replace, radiologists.

The technology may be a promising tool for breast cancer detection and risk prediction. However, it faces several challenges, including ethical concerns, substantial financial costs, and reliability issues. More research is needed to determine whether the technology is safe, accurate, and reliable before it can be widely implemented.

Related Articles

Using Generative AI for therapy might feel like a lifeline – but there's danger in seeking certainty in a chatbot

The Guardian

Tran* sat across from me, phone in hand, scrolling. 'I just wanted to make sure I didn't say the wrong thing,' he explained, referring to a recent disagreement with his partner. 'So I asked ChatGPT what I should say.' He read the chatbot-generated message aloud. It was articulate, logical and composed – almost too composed. It didn't sound like Tran, and it definitely didn't sound like someone in the middle of a complex, emotional conversation about the future of a long-term relationship. Nor did it mention any of Tran's own behaviours that had contributed to the relationship strain, which he and I had been discussing.

Like many others I've seen in therapy recently, Tran had turned to AI in a moment of crisis. Under immense pressure at work and facing uncertainty in his relationship, he'd downloaded ChatGPT on his phone 'just to try it out'. What began as a curiosity soon became a daily habit: asking questions, drafting texts, even seeking reassurance about his own feelings. The more Tran used it, the more he began to second-guess himself in social situations, turning to the model for guidance before responding to colleagues or loved ones. He felt strangely comforted, as if 'no one knew me better'. His partner, on the other hand, began to feel like she was talking to someone else entirely.

ChatGPT and other generative AI models present a tempting accessory, or even alternative, to traditional therapy. They're often free, available 24/7 and can offer customised, detailed responses in real time. When you're overwhelmed, sleepless and desperate to make sense of a messy situation, typing a few sentences into a chatbot and getting back what feels like sage advice can be very appealing. But as a psychologist, I'm growing increasingly concerned about what I'm seeing in the clinic: a silent shift in how people are processing distress, and a growing reliance on artificial intelligence in place of human connection and therapeutic support.

AI might feel like a lifeline when services are overstretched – and make no mistake, services are overstretched. Globally, in 2019, one in eight people were living with a mental illness, and we face a dire shortage of trained mental health professionals. In Australia, a growing mental health workforce shortage is affecting access to trained professionals. Clinician time is one of the scarcest resources in healthcare. It's understandable (even expected) that people are looking for alternatives.

Turning to a chatbot for emotional support isn't without risk, however, especially when the lines between advice, reassurance and emotional dependence become blurred. Many psychologists, myself included, now encourage clients to build boundaries around their use of ChatGPT and similar tools. Their seductive 'always-on' availability and friendly tone can unintentionally reinforce unhelpful behaviours, especially for people with anxiety, OCD or trauma-related issues. Reassurance-seeking, for example, is a key feature of OCD, and ChatGPT, by design, provides reassurance in abundance. It never asks why you're asking again. It never challenges avoidance. It never says, 'let's sit with this feeling for a moment, and practice the skills we have been working on'.

Tran often reworded prompts until the model gave him an answer that 'felt right'. But this constant tailoring meant he wasn't just seeking clarity; he was outsourcing emotional processing. Instead of learning to tolerate distress or explore nuance, he sought AI-generated certainty. Over time, that made it harder for him to trust his own instincts.

Beyond psychological concerns, there are real ethical issues. Information shared with ChatGPT isn't protected by the same confidentiality standards that bind Ahpra-registered professionals. Although OpenAI states that user data is not used to train its models unless permission is given, the sheer volume of fine print in user agreements often goes unread. Users may not realise how their inputs can be stored, analysed and potentially reused.

There's also the risk of harmful or false information. These large language models are autoregressive: they predict the next word based on previous patterns. This probabilistic process can lead to 'hallucinations' – confident, polished answers that are completely untrue. AI also reflects the biases embedded in its training data. Research shows that generative models can perpetuate and even amplify gender, racial and disability-based stereotypes – not intentionally, but unavoidably. Human therapists also possess clinical skills; we notice when a client's voice trembles, or when their silence might say more than words.

This isn't to say AI can't have a place. Like many technological advancements before it, generative AI is here to stay. It may offer useful summaries, psycho-educational content or even support in regions where access to mental health professionals is severely limited. But it must be used carefully, and never as a replacement for relational, regulated care.

Tran wasn't wrong to seek help. His instincts to make sense of distress and to communicate more thoughtfully were logical. However, leaning so heavily on AI meant that his own skill development suffered. His partner began noticing a strange detachment in his messages. 'It just didn't sound like you,' she later told him. It turned out it wasn't. She also became frustrated by the lack of accountability in his correspondence with her, and this caused more relational friction and communication issues between them.

As Tran and I worked together in therapy, we explored what led him to seek certainty in a chatbot. We unpacked his fears of disappointing others, his discomfort with emotional conflict and his belief that perfect words might prevent pain. Over time, he began writing his own responses – sometimes messy, sometimes unsure, but authentically his.

Good therapy is relational. It thrives on imperfection, nuance and slow discovery. It involves pattern recognition, accountability and the kind of discomfort that leads to lasting change. A therapist doesn't just answer; they ask and they challenge. They hold space, offer reflection and walk with you, while also holding up an uncomfortable mirror. For Tran, the shift wasn't just about limiting his use of ChatGPT; it was about reclaiming his own voice. In the end, he didn't need a perfect response. He needed to believe that he could navigate life's messiness with curiosity, courage and care – not perfect scripts.

*Name and identifying details changed to protect client confidentiality.

Carly Dober is a psychologist living and working in Naarm/Melbourne.

In Australia, support is available at Beyond Blue on 1300 22 4636, Lifeline on 13 11 14, and at MensLine on 1300 789 978. In the UK, the charity Mind is available on 0300 123 3393 and Childline on 0800 1111. In the US, call or text Mental Health America at 988 or chat

Trump dubbed himself the 'father of IVF' on the campaign trail. But his pledge to mandate insurance cover has disappeared

The Independent

Donald Trump's vow to expand in vitro fertilization (IVF) access to millions of Americans is on hold, with White House officials backing away from plans to require Obamacare health plans to include the service as an essential health benefit, the Washington Post reported on Sunday.

Citing two people with knowledge of internal discussions, the Post reported that White House officials have privately moved away from pushing for legislation to address the issue, despite it being one of Trump's signature campaign promises. A senior administration official also acknowledged to the newspaper that changing Obamacare to force insurers to cover new services would require congressional action, not an executive order.

The president has governed largely by executive fiat in his second term as he grapples with a closely divided Congress and an unruly GOP majority in the House of Representatives. He's used those executive orders to dismantle whole parts of the federal government, including USAID and the Consumer Financial Protection Bureau (CFPB). The president even tried to take an axe to the Department of Education, though that battle is still being waged in the courts. The Supreme Court recently cleared the way for Trump to cut roughly a quarter of the agency's staff.

But many of Trump's campaign promises lie outside his ability to influence through the hiring and firing of personnel or the redirection of agency resources and agendas. In 2024, he laid out no direct path to his goal of expanding IVF access, telling voters only that insurance companies would be forced to cover it. Still, he proclaimed himself the 'father of IVF' at a Fox News town hall, and promised during an NBC News interview: 'We are going to be, under the Trump administration, we are going to be paying for that treatment. We're going to be mandating that the insurance company pay.'

At the time, there was little to no acknowledgment that many, if not most, conservatives still oppose the Affordable Care Act and the very healthcare exchanges Trump was promising to utilize as he sought to use the power of the federal government to expand healthcare coverage.

Now, with the passage of Trump's 'big, beautiful bill' without any provisions expanding IVF access, and with the prospect of further policy gains before the midterms growing dimmer, it's unclear when the White House will have another chance to press the issue in Congress.

In February, the president signed an executive order directing his advisers to 'submit to the President a list of policy recommendations on protecting IVF access and aggressively reducing out-of-pocket and health plan costs for IVF treatment.' It has been crickets on the issue since then.

In 2024, many of Trump's critics and the media pointed out that the policy would amount to a reversal of – or at the very least stand in sharp contrast to – the first Trump administration's failed efforts to repeal the Affordable Care Act, and that it contradicted the conservative view that government should not exercise that level of control over Americans' healthcare decisions.

The president's promise thrilled his party's natalists, embodied by Vice President JD Vance and an army of right-wing immigration hawks who fear the changing American demographics brought on by falling birth rates and high levels of migration. It also wowed some of his Democratic and left-leaning critics, who see the policy as a means of furthering their goal of expanding access to healthcare for poorer Americans.

For Vance, the issue of declining U.S. birth rates predates his MAGA heel-turn. In 2019, he told a gathering of conservatives in Washington: 'Our people aren't having enough children to replace themselves. That should bother us.' 'We want babies not just because they are economically useful. We want more babies because children are good. And we believe children are good, because we are not sociopaths,' the future vice president added at the time. Two years later, he told a right-leaning podcast: 'I think we have to go to war against the anti-child ideology that exists in our country.'

During the 2024 campaign, those views emerged again as Vance attacked Democrats as 'childless cat ladies' and leaned heavily into attacking the left for supposedly being anti-family. Progressives fought back, pointing to efforts to expand the child tax credit and other benefits that aid young families under Joe Biden and other Democratic administrations, including the passage of Barack Obama's signature law, the Affordable Care Act.

Musk open to merger between his company xAI and Apple

Daily Mail

Elon Musk has been openly hinting at a historic merger in the business world, suggesting that his company xAI should partner with tech giant Apple. Musk's company is the corporate face of his popular AI chatbot Grok, which functions similarly to competitors like ChatGPT, Claude, Gemini, and Copilot. Meanwhile, Apple has struggled to bring its own AI programs to consumers, notably delaying improvements to the Siri voice assistant until 2026.

Venture capitalists began openly speculating this month that Musk and Apple would make the perfect power couple in the AI world, with xAI bringing Grok to even more people on iPhones through the proposed partnership. On the All-In Podcast, investor Gavin Baker called xAI's Grok 4 'the best product' among AI chatbots right now, but added that 'the best product doesn't always win in technology.' 'I think there is solid industrial logic for a partnership. You could have Apple Grok, Safe Grok, whatever you want to call it,' said Baker, the chief investment officer of Atreides Management LP, in a video posted to X on July 19.

Musk quickly replied to the comments, saying 'Interesting idea.' The billionaire then added 'I hope so!' in response to another post suggesting that Apple partnering with xAI was a better option than competitors like Anthropic.

A partnership between the two companies could integrate xAI's Grok chatbot into Apple devices such as iPhones, iPads, and Macs, potentially replacing or augmenting Siri. A relationship between Musk's AI team and the $3.1 trillion Apple could also lead to smarter, more accurate AI assistants, addressing Apple's ongoing struggles with AI development.

Grok launched in 2023 as Musk's alternative to other chatbots, which had sparked controversy for providing allegedly biased answers and citing information that had been made up. xAI has said that Grok is 'designed to answer questions with a bit of wit,' and the program has generally drawn widespread praise for its quick and accurate answers to prompts. Just weeks ago, however, Grok 4 was engulfed in controversy for repeating far-right hate speech and white nationalist talking points about politics, race, and recent news events. Multiple users reported on July 8 and July 9 that Grok echoed anti-Semitic conspiracy theories, including claims that Jewish people controlled Hollywood, promoted hatred toward white people, and should be imprisoned in camps. In a post on X, xAI replied to these concerns: 'We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts. Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X.'

Baker added that the deal Musk has been infatuated with would also significantly extend xAI's reach, since OpenAI's ChatGPT is currently used by nearly 800 million weekly active users, according to Demandsage. 'There's been a lot of news about Apple thinking about buying Perplexity or Mistral, but that's just a Band-Aid. Those companies don't get Apple what they need,' Baker said.

To the investor's point, Perplexity AI is a search-engine-style AI company known for information retrieval and fact-finding tasks; it's currently valued at $18 billion. Mistral AI is a French AI firm valued at roughly $6.2 billion that focuses on easy-to-use, open-source language models and has worked with partners like Cisco on tasks such as research and automation.

By contrast, xAI and its Grok chatbot stand out with a current valuation of up to $200 billion and a distribution reaching 35.1 million monthly active users. Baker explained that 'xAI and Apple are natural partners,' especially after OpenAI made a multibillion-dollar deal to create new devices that use its AI technology without relying on the iPhone. In May, OpenAI bought former Apple designer Jony Ive's hardware startup for a reported $6.5 billion. That deal brought Ive on as the AI company's new creative head, with a vision of building specialized gadgets that can use generative AI and ChatGPT without needing a smartphone or computer.

While a deal between xAI and Apple remains speculation, Musk recently turned heads by announcing that xAI was working on a new project called 'Baby Grok', an app designed to provide 'kid-friendly content.'
