
DeepSeek's AI in hospitals is 'too fast, too soon', Chinese medical researchers warn
A paper warning of 'substantial clinical risk' from an overreliance on AI models makes a team of medical researchers a rare voice of caution. — SCMP
A team of researchers in China has questioned hospitals' rapid adoption of DeepSeek, warning that it creates clinical safety and privacy risks, raising red flags over the rush to use the artificial intelligence (AI) start-up's cost-efficient open-source models.
As of early March, at least 300 hospitals in China had started using DeepSeek's large language models (LLMs) in clinical diagnostics and medical decision support.
The researchers warned that, despite the models' strong reasoning capabilities, DeepSeek's tendency to generate 'plausible but factually incorrect outputs' could lead to 'substantial clinical risk', according to a paper published last month in the medical journal JAMA. The team includes Wong Tien Yin, founding head of Tsinghua Medicine, a group of medical research schools at Tsinghua University in Beijing.
The paper was a rare voice of caution in China against the overzealous use of DeepSeek. The start-up has become the nation's poster child for AI after its low-cost, high-performance V3 and R1 models captured global attention this year. DeepSeek did not immediately respond to a request for comment.
According to Wong, an ophthalmology professor and former medical director at the Singapore National Eye Centre, and his co-authors, healthcare professionals could become overreliant on or uncritical of DeepSeek's output. This could result in diagnostic errors or treatment biases, while more cautious clinicians could face the burden of verifying AI output in time-sensitive clinical settings, they said.
While hospitals often choose private, on-site deployment of DeepSeek models instead of cloud-based solutions to mitigate security and privacy risks, this approach presents challenges. It 'shifts security responsibilities to individual healthcare facilities', many of which lack comprehensive cybersecurity infrastructure, according to the researchers.
In China, the combination of disparities in primary care infrastructure and high smartphone penetration has also created a 'perfect storm' for clinical safety concerns, they added.
'Underserved populations with complex medical needs now have unprecedented access to AI-driven health recommendations, but often lack the clinical oversight needed for safe implementation,' the researchers wrote.
The paper reflects the healthcare community's increasing scrutiny of LLM use in clinical and medical settings, as organisations across China accelerate adoption. Researchers from the Chinese University of Hong Kong also published a paper last month on the cybersecurity of AI agents, finding that most agents powered by mainstream LLMs were susceptible to attacks, with DeepSeek-R1 being the most vulnerable.
The country has sped up the use of LLMs in the healthcare sector amid a boom in generative AI technologies. Last month, Chinese fintech giant Ant Group launched nearly 100 AI medical agents on its Alipay payments app. The agents are modelled on medical experts from China's top hospitals.
Tairex, a start-up incubated at Tsinghua University, also began internal tests of a virtual hospital platform in November. The platform features 42 AI doctors covering 21 departments, including emergency, respiratory, paediatrics and cardiology. The company aimed to make the platform available to the general public this year, it said at the time. – South China Morning Post
Related Articles


Malaysian Reserve – 14 hours ago
Celestine Achi Launches Free AI Readiness Assessment Tool and Maturity Framework to Accelerate Africa's AI Adoption in PR, Media, and Communications
LAGOS, Nigeria, June 1, 2025 /PRNewswire/ – As the AI revolution sweeps across industries worldwide, one African innovator is ensuring the continent doesn't just keep up, but leads. Dr. Celestine Achi, renowned AI educator, PR technology pioneer, and author of AI-Powered PR: The Essential Guide for Communications Leaders to Master Artificial Intelligence, has unveiled a groundbreaking AI Maturity Assessment Framework and AI Readiness Assessment Tool tailored for African organizations and professionals. This dual innovation is designed to democratize access to strategic AI evaluation for businesses, agencies, nonprofits, and public sector entities, with a special focus on public relations, media, and communications professionals.

'AI shouldn't be a privilege for the West. It must be a catalyst for transformation in Africa – starting with those who shape public narratives,' said Celestine Achi, Founder of Cihan Digital Academy and architect of the TABS-D AI Implementation Framework.

Empowering Africa's Future-Ready Workforce

The AI Readiness Assessment Tool, now freely available at enables individuals and teams to instantly evaluate their current AI capabilities across key pillars such as strategy, skills, systems, and culture. Upon completion, users receive a customized AI readiness report with practical steps for growth – no technical background required.

The companion AI Maturity Assessment Framework provides a structured pathway for organizations to transition from AI experimentation to enterprise-level integration. Rooted in real-world case studies and tested across PR agencies, newsrooms, and regulatory bodies, the framework allows African leaders to map their journey across five maturity stages: Nascent, Aware, Engaged, Strategic, and Transformational.

Built for Communicators. Designed for Africa.

What sets this initiative apart is its deep contextual relevance. Drawing from Celestine's extensive work with media agencies, government communicators, and enterprise brands across Nigeria and beyond, the tools are optimized for African realities, where connectivity, capacity gaps, and talent development remain major hurdles.

'PR and media professionals are the architects of trust. They deserve the right tools to thrive in this intelligent era,' Achi emphasized. 'With this framework, they can now measure, learn, and lead AI transformation – regardless of their current digital maturity.'

A Movement, Not Just a Tool

Already embraced by industry leaders and professional bodies, the AI Maturity Framework and Readiness Tool are part of the broader AI-Powered PR Ecosystem, a multi-dimensional platform offering:
- The AI-Powered PR playbook
- An immersive PR simulation game built on the TABS-D framework
- Community engagement tools and certification programs

To access the free assessment and start your AI journey, visit:

About Celestine Achi

Celestine Achi (FIIM, MNIPR, ANIMC, Dr. FAIMFIN) is Africa's foremost authority on AI in PR and digital media transformation. He is the author of AI-Powered PR, developer of the TABS-D Framework, and founder of Cihan Digital Academy – a pioneer in AI education for communicators.


Free Malaysia Today – 20 hours ago
Silicon Valley VCs navigate uncertain AI future
VANCOUVER: For Silicon Valley venture capitalists, the world has split into two camps: those with deep enough pockets to invest in artificial intelligence behemoths, and everyone else waiting to see where the AI revolution leads.

The generative AI frenzy unleashed by ChatGPT in 2022 has propelled a handful of venture-backed companies to eye-watering valuations. Leading the pack is OpenAI, which raised US$40 billion in its latest funding round at a US$300 billion valuation – unprecedented largesse in Silicon Valley's history.

Other AI giants are following suit. Anthropic now commands a US$61.5 billion valuation, while Elon Musk's xAI is reportedly in talks to raise US$20 billion at a US$120 billion price tag.

The stakes have grown so high that even major venture capital firms – the same ones that helped birth the internet revolution – can no longer compete. Mostly, only the deepest pockets remain in the game: big tech companies, Japan's SoftBank, and Middle Eastern investment funds betting big on a post-fossil fuel future.

'There's a really clear split between the haves and the have-nots,' Emily Zheng, senior analyst at PitchBook, told AFP at the Web Summit in Vancouver. 'Even though the top-line figures are very high, it's not necessarily representative of venture overall, because there's just a few elite startups and a lot of them happen to be AI.'

Given Silicon Valley's confidence that AI represents an era-defining shift, venture capitalists face a crucial challenge: finding viable opportunities in an excruciatingly expensive market that is rife with disruption.

Simon Wu of Cathay Innovation sees clear customer demand for AI improvements, even if most spending flows to the biggest players. 'AI across the board, if you're selling a product that makes you more efficient, that's flying off the shelves,' Wu explained. 'People will find money to spend on OpenAI' and the big players.

The real challenge, according to Andy McLoughlin, managing partner at San Francisco-based Uncork Capital, is determining 'where the opportunities are against the mega platforms'. 'If you're OpenAI or Anthropic, the amount that you can do is huge. So where are the places that those companies cannot play?'

Finding that answer isn't easy. In an industry where the large language models behind ChatGPT, Claude and Google's Gemini seem to have limitless potential, everything moves at breakneck speed. AI giants including Google, Microsoft, and Amazon are releasing tools and products at a furious pace. ChatGPT and its rivals now handle search, translation, and coding all within one chatbot – raising doubts among investors about what new ideas could possibly survive the competition.

Generative AI has also democratised software development, allowing non-professionals to code new applications from simple prompts. This completely disrupts traditional startup organisation models.

'Every day I think, what am I going to wake up to today in terms of something that has changed or (was) announced geopolitically or within our world as tech investors,' reflected Christine Tsai, founding partner and CEO at 500 Global.

In Silicon Valley parlance, companies are struggling to find a 'moat' – that unique feature or breakthrough, like Microsoft Windows in the 1990s or Google Search in the 2000s, that is so successful it takes competitors years to catch up, if ever.

When it comes to business software, AI is 'shaking up the topology of what makes sense and what's investable,' noted Brett Gibson, managing partner at Initialized Capital.

The risks seem particularly acute given that generative AI's economics remain unproven. Even the biggest players see a very uncertain path to profitability given the massive sums involved. The huge valuations for OpenAI and others are causing 'a lot of squinting of the eyes, with people wondering "is this really going to replace labor costs"' at the levels needed to justify the investments, Wu observed. Despite AI's importance, 'I think everyone's starting to see how this might fall short of the magical', even if it's early days, he added.

Still, only the rare contrarians believe generative AI isn't here to stay. In five years, 'we won't be talking about AI the same way we're talking about it now, the same way we don't talk about mobile or cloud,' predicted McLoughlin. 'It'll become a fabric of how everything gets built.' But who will be building remains an open question.


New Straits Times – a day ago
AI oversight needed to ensure fairness, accountability, and inclusivity, says Lee Lam Thye
KUALA LUMPUR: The Alliance for a Safe Community has called for clear, forward-looking regulations and a comprehensive ethical framework to ensure artificial intelligence (AI) development prioritises fairness, accountability, and inclusivity.

"This means avoiding bias in decision-making systems, ensuring that AI enhances human potential rather than replacing it, and making its benefits accessible to all, not just a select few," said chairman Tan Sri Lee Lam Thye in a statement today.

The group proposed a regulatory framework that includes AI accountability laws, transparency and explainability for AI decision-making that impacts individuals, strengthened data protection and privacy standards, risk assessment and certification requirements, and the creation of public oversight bodies. It also proposed the establishment of a Code of Ethics that is human-centric, non-discriminatory, fair, honest, environmentally responsible, collaborative, and inclusive.

Lee warned that while AI holds promise for healthcare innovations and environmental sustainability, its use must always serve the greater good. Key risks include privacy breaches, algorithmic bias, job displacement, and the spread of misinformation, he added.

"We urge policymakers, tech leaders, civil society, and global institutions to come together to build a framework that ensures AI is safe, inclusive, and used in the best interest of humanity," Lee said.

The group concluded with a warning against a future where technology dictates the terms of our humanity, calling instead for a path where AI amplifies humanity's best qualities for the benefit of all.

On Wednesday, Prime Minister Datuk Seri Anwar Ibrahim said the government plans to push for new legislation aimed at reinterpreting sovereignty in light of the rapid growth of AI and cloud-based technologies. Anwar added that, given the evolving role of governance in the digital era, traditional notions of sovereignty, designed for a pre-digital world, must be reconsidered to accommodate new technological realities.