
Doctors vs AI: Who wins in the battle of medical diagnosis?
The AI-Human Synergy Competition featured six top radiologists from Zhongshan Hospital, split into two teams: the 'AI Collaboration Group', which worked with the help of an AI system, and the 'Manual Group', which relied solely on its own expertise.
Both teams were presented with the same chest x-ray cases and asked to analyse and report their findings live.
Who won the showdown: Doctors or AI?
Although the AI-assisted team worked faster, it missed some important diagnoses that the human team caught. According to experts, the human-written report was also more readable, with a warmer tone and better overall structure.
"I noticed our senior physicians carefully adjusted the structure of the report," said Wang Yi, director of the radiology department at Peking University People's Hospital.
"They made it into a coherent report with clear internal logic, unlike the AI-generated version where each item was listed separately."
"In terms of time, AI is definitely faster. Accuracy is roughly comparable, but I have to say the manual group feels warmer and more empathetic," said Zeng Mengsu, director of radiology and chief of diagnostic radiology at Zhongshan Hospital.
"Both cases involved 'annual follow-ups,' yet the doctor group without AI assistance also provided personalised recommendations – such as follow-up ultrasounds or revisits within 6 to 12 months," Zeng added.
Is this the future of medical diagnostics?
The technology used in the showdown was the 'One-Scan-for-All' chest system, co-developed by Zhongshan Hospital and Shanghai United Imaging Medical Technology.
Unlike conventional models that focus on detecting a single disease, this system is currently capable of identifying 73 common thoracic abnormalities in one scan, from pulmonary nodules and emphysema to fractures and coronary artery calcifications.
"Its current accuracy rate is around 85 per cent, and we aim to raise that to 95 per cent in the future – I'm confident we'll get there," Zeng said at the event.
"Ultimately, we hope to double the work efficiency of radiologists through this technology."
The team behind the AI also has ambitions beyond lung-related issues.
Zhou Xiang, co-founder and co-CEO of United Imaging Intelligence, said the group is working on a brain MRI programme with the goal of having one scan for multiple health issues.
"So far, we've covered over 30 diseases and are now expanding into orthopedics, ultrasound, and abdominal imaging," Zhou said.
The showdown in Shanghai comes amid a wave of breakthroughs in AI medical diagnostics. Microsoft recently announced that its latest AI tool outperformed doctors in a study involving complex medical cases.
Tested against 21 experienced physicians from the United Kingdom and United States, the system correctly diagnosed up to 85.5 per cent of patient cases – roughly four times the accuracy of the physician group.
The model was also cheaper than human doctors, ordering fewer scans and tests to reach the correct diagnosis, the analysis found. While the company isn't suggesting AI will replace doctors, it believes these systems could dramatically improve speed, accuracy, and personalisation in health care.
Related Articles


France 24
13 hours ago
AI search pushing an already weakened media ecosystem to the brink
"The next three or four years will be incredibly challenging for publishers everywhere. No one is immune from the AI summaries storm gathering on the horizon," warned Matt Karolian, vice president of research and development at Boston Globe Media. "Publishers need to build their own shelters or risk being swept away."
While data remains limited, a recent Pew Research Center study reveals that the AI-generated summaries now appearing regularly in Google searches discourage users from clicking through to source articles. When AI summaries are present, users click on suggested links half as often as in traditional searches. This represents a devastating loss of visitors for online media sites that depend on traffic for both advertising revenue and subscription conversions.
According to Northeastern University professor John Wihbey, these trends "will accelerate, and pretty soon we will have an entirely different web." The dominance of tech giants like Google and Meta had already slashed online media advertising revenue, forcing publishers to pivot toward paid subscriptions. But Wihbey noted that subscriptions also depend on traffic, and paying subscribers alone aren't sufficient to support major media organizations.
Limited lifelines
The Boston Globe group has begun seeing subscribers sign up through ChatGPT, offering a new touchpoint with potential readers, Karolian said. However, "these remain incredibly modest compared to other platforms, including even smaller search engines." Other AI-powered tools like Perplexity are generating even fewer new subscriptions, he added.
To survive what many see as an inevitable shift, media companies are increasingly adopting GEO (Generative Engine Optimization), a technique that replaces traditional SEO (Search Engine Optimization). This involves providing AI models with clearly labeled content, good structure, comprehensible text, and a strong presence on social networks and forums like Reddit that get crawled by AI companies.
But a fundamental question remains: "Should you allow OpenAI crawlers to basically crawl your website and your content?" asks Thomas Peham, CEO of optimization startup OtterlyAI.
Burned by aggressive data collection from major AI companies, many news publishers have chosen to fight back by blocking AI crawlers from accessing their content. "We just need to ensure that companies using our content are paying fair market value," argued Danielle Coffey, who heads the News/Media Alliance trade organization.
Some progress has been made on this front. Licensing agreements have emerged between major players, such as the New York Times and Amazon, Google and the Associated Press, and Mistral and Agence France-Presse, among others. But the issue is far from resolved, as several major legal battles are underway, most notably the New York Times' blockbuster lawsuit against OpenAI and Microsoft.
Let them crawl
Publishers face a dilemma: blocking AI crawlers protects their content but reduces exposure to potential new readers. Faced with this challenge, "media leaders are increasingly choosing to reopen access," Peham observed.
Yet even with open access, success isn't guaranteed. According to OtterlyAI data, media outlets represent just 29 percent of the citations offered by ChatGPT, trailing corporate websites at 36 percent. And while Google search has traditionally privileged sources recognized as reliable, "we don't see this with ChatGPT," Peham noted.
The stakes extend beyond business models. According to the Reuters Institute's 2025 Digital News Report, about 15 percent of people under 25 now use generative AI to get their news. Given ongoing questions about AI sourcing and reliability, this trend risks confusing readers about information origins and credibility – much like social media did before it.
"At some point, someone has to do the reporting," Karolian said. "Without original journalism, none of these AI platforms would have anything to summarize."
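In practice, the crawler blocking publishers describe is usually done with robots.txt directives. The sketch below is illustrative only; GPTBot, CCBot, and Google-Extended are user-agent tokens that OpenAI, Common Crawl, and Google respectively publish for this purpose, but any real site's policy will differ:

```text
# robots.txt – illustrative policy: refuse common AI training crawlers
# while leaving ordinary search indexing untouched.

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
```

Note that robots.txt is honoured voluntarily by crawlers; it is a request, not an enforcement mechanism, which is part of why licensing and litigation remain central to the dispute.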
Perhaps with this in mind, Google is already developing partnerships with news organizations to feed its generative AI features, suggesting potential paths forward. "I think the platforms will realize how much they need the press," predicted Wihbey -- though whether that realization comes soon enough to save struggling newsrooms remains an open question.


Euronews
2 days ago
EU AI Act doesn't do enough to protect artists' copyright, groups say
As the European Artificial Intelligence Act (AI Act) comes into force, groups representing artists say there are still many loopholes that need to be fixed for them to thrive in a creative world increasingly dominated by AI.
The AI Act, celebrated as the first comprehensive legislation to regulate AI globally, is riddled with problems, these organisations say. Groups like the European Composer and Songwriter Alliance (ECSA) and the European Grouping of Societies of Authors and Composers (GESAC) argue that it fails to protect creators whose works are used to train generative AI models. Without a clear way to opt out or get paid when tech companies use their music, books, movies, and other art to train AI models, experts say their work is continually at risk.
'The work of our members should not be used without transparency, consent, and remuneration, and we see that the implementation of the AI Act does not give us,' Marc du Moulin, ECSA's secretary general, told Euronews Next.
'Putting the cart before the horse'
The purpose of the AI Act is to make sure AI stays 'safe, transparent, traceable, non-discriminatory and environmentally friendly,' the European Commission, the European Union's executive body, says in an explainer on the law.
The law rates AI systems based on four levels of risk: minimal, limited, high, or unacceptable. Those in the unacceptable range are already banned – for example, AIs that are manipulative or that conduct social scoring, ranking individuals based on behaviour or economic status. Most generative AI falls into the minimal risk category, the Commission says. The owners of those technologies still have some requirements, like publishing summaries of the copyrighted data used to train their AIs.
Under the EU's copyright laws, companies are allowed to use copyrighted materials for text and data mining, as they do in AI training, unless a creator has 'reserved their rights,' du Moulin said.
Du Moulin said it's unclear how an artist can go about opting out of their work being shared with AI companies. 'This whole conversation is putting the cart before the horse. You don't know how to opt out, but your work is already being used,' he said.
The EU's Code of Practice on General-Purpose AI (GPAI), a voluntary agreement for AI companies, asks providers to commit to a copyright policy, put in place safeguards to avoid any infringements of rights, and designate a place to receive and process complaints. Signatories so far include major tech and AI companies such as Amazon, Google, Microsoft, and OpenAI.
AI providers have to respect copyright laws, the Commission says
The transparency requirements under the AI Act do not tell artists who has already used their material and when, du Moulin added, making it difficult to claim any payment for work that has already been scraped to train AI models. 'Even if the AI Act has some good legal implications, it only works for the future – it will not be retroactive,' du Moulin said. 'So everything which has been scraped already … it's a free lunch for generative AI providers who did not pay anything.'
Adriana Moscono, GESAC's general manager, said some of her members tried opting out by sending letters and emails to individual AI companies to get a license for their content, but were not successful. 'There was no answer,' Moscono told Euronews Next. 'There was absolute denial of the recognition of … the need to respect copyright and to get a license. So please, European Commission, encourage licensing.'
Thomas Regnier, a Commission spokesperson, said in a statement to Euronews Next that AI providers have to respect rights holders when they carry out text and data mining, and that if there have been infringements, the parties can settle the matter privately. The AI Act 'in no way affects existing EU copyright laws,' Regnier continued.
Mandate licence negotiations, groups ask
Du Moulin and Moscono are asking the Commission to urgently clarify the rules around opting out and copyright protection in the law. 'The code of practice, the template and the guidelines, they don't provide us any capacity to improve our situation,' Moscono said. 'They're not guaranteeing … a proper application of the AI Act.'
The advocates said the Commission could also mandate that AI companies negotiate blanket or collective licences with the respective artist groups.
Germany's Society for Musical Performing and Mechanical Reproduction Rights (GEMA) has filed two copyright lawsuits against AI companies: OpenAI, the maker of ChatGPT, and Suno AI, an AI music generation app. While the cases are not directly related to the AI Act, du Moulin says the verdicts could determine to what extent AI companies are bound by copyright laws.
The Commission and the European Court of Justice, the EU's high court, have also signalled that they will review the text and data mining exemption in the copyright legislation issued in 2019, du Moulin said.
New AI companies have to comply with the AI Act's regulations by 2026. That deadline extends to 2027 for companies already operating in the EU.


Sustainability Times
3 days ago
'AI Will Change Everything About Nuclear' as US Lab Partners With Amazon Cloud to Build the First Smart Reactors in American History
IN A NUTSHELL
🔧 Idaho National Laboratory partners with Amazon Web Services to develop AI-powered digital twins for nuclear reactors.
💡 The collaboration aims to modernize the U.S. nuclear sector, making reactors autonomous and efficient.
🌐 The initiative is part of a national push to integrate artificial intelligence into energy infrastructure.
🔍 Focus on safety, cost reduction, and sustainability in nuclear energy development.
The United States is taking a bold step in nuclear energy innovation, leveraging the power of artificial intelligence (AI) to transform how nuclear reactors are designed and operated. The Idaho National Laboratory (INL) has partnered with Amazon Web Services (AWS) to develop autonomous nuclear systems. This collaboration aims to create digital twins of nuclear reactors using AWS's advanced cloud technology. The move is part of a larger effort to modernize the nuclear energy sector, which has historically faced challenges such as high costs and regulatory hurdles.
Digital Twins: A Groundbreaking Approach
The concept of digital twins is at the heart of this initiative. Digital twins are virtual replicas of physical systems that enable detailed modeling and simulation. By utilizing AWS's cloud infrastructure, INL aims to create digital twins of small modular reactors (SMRs). These reactors, with capacities ranging from 20 to 300 megawatts, are poised to benefit from AI-driven efficiencies. John Wagner, director of INL, highlighted the significance of the collaboration: 'Our collaboration with Amazon Web Services marks a significant leap forward in integrating advanced AI technologies into our nuclear energy research and development initiatives.'
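The digital-twin idea described above – a virtual model kept in sync with a physical system's live telemetry so that control actions can be tried in simulation before being applied for real – can be sketched as a toy. Everything below (the class name, the coefficients, the simple relaxation update) is invented purely for illustration and bears no relation to INL's or AWS's actual software.

```python
from dataclasses import dataclass

@dataclass
class ReactorTwin:
    """Toy digital twin: a virtual state that mirrors a physical system."""
    temperature_c: float  # modeled coolant temperature
    power_mw: float       # modeled thermal power

    def ingest(self, sensor_temp_c: float) -> None:
        # State synchronization: nudge the model halfway toward the
        # live sensor reading each time telemetry arrives.
        self.temperature_c += 0.5 * (sensor_temp_c - self.temperature_c)

    def simulate_step(self, control_rod_fraction: float) -> float:
        # What-if simulation: predict the effect of a control input
        # on the virtual model before touching the real plant.
        self.power_mw *= 1.0 - 0.1 * control_rod_fraction
        return self.power_mw

twin = ReactorTwin(temperature_c=300.0, power_mw=100.0)
twin.ingest(sensor_temp_c=310.0)       # sync with "real" telemetry
predicted = twin.simulate_step(0.2)    # try a control action virtually first
print(twin.temperature_c, predicted)   # prints: 305.0 98.0
```

A production twin replaces these two toy equations with physics-based simulation and machine-learned models fed by continuous plant telemetry, but the loop – ingest real data, update the virtual state, simulate candidate actions – is the same.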
The partnership links the nation's lead nuclear energy laboratory with AWS to accelerate nuclear energy deployment. By using real-time data, these digital twins will enhance modeling capabilities, facilitate simulations, and eventually allow for safe autonomous operations. This initiative is expected to revolutionize how nuclear plants are built and operated, offering potential cost reductions and improved safety.
Harnessing the Power of Machine Learning
As part of INL's broader vision, the integration of machine learning with nuclear technology aims to create an AI-nuclear ecosystem connecting Department of Energy (DOE) labs, tech companies, and energy developers. The ultimate goal is to develop nuclear reactors that are not only faster to construct but also safer and more intelligent in operation.
The INL-AWS partnership follows a similar collaboration between Westinghouse and Google Cloud, highlighting the growing importance of AI in the nuclear sector. By combining AI platforms with proprietary nuclear data, these partnerships aim to accelerate the development of advanced nuclear technologies. In May 2025, President Donald Trump signed executive orders to streamline reactor permitting and expand domestic nuclear fuel production. These efforts are part of a broader strategy to modernize the U.S. nuclear energy infrastructure and support increasing AI-driven power demands.
A National Push for AI-Driven Nuclear Power
The U.S. government has recognized the critical role of nuclear energy in maintaining technological competitiveness and supporting future data center growth.
The release of a national AI Action Plan in July 2025 identified reliable, dispatchable energy, including nuclear power, as an essential component of this strategy. The partnership between INL and AWS reflects this national push. By leveraging custom chips like Inferentia and Trainium, along with tools such as Amazon SageMaker, the collaboration aims to drive the adoption of AI in nuclear applications. Chris Ritter, division director of Scientific Computing and AI at INL, emphasized the importance of the partnership in accessing AI models and specialized cloud services.
This initiative is not just about technological advancement; it is also about redefining the future of energy production. By embracing AI, the U.S. is positioning itself at the forefront of global nuclear innovation.
The Road Ahead for AI and Nuclear Energy
While the potential benefits of AI-driven nuclear energy are immense, the path forward is not without challenges. The integration of AI into nuclear systems requires careful consideration of safety protocols, regulatory compliance, and public acceptance. However, the collaboration between INL and AWS is a promising step toward overcoming these hurdles.
As the U.S. continues to invest in AI-driven nuclear technologies, the focus will be on creating a sustainable and secure energy future. The development of autonomous reactors and digital twins represents a significant shift in how nuclear energy is perceived and utilized. The question remains: how will this transformation in nuclear energy impact global energy dynamics, and what role will AI play in shaping the future of sustainable power?