Commission study warns rules and mistrust stall AI in healthcare

Euractiv · 11 hours ago
A new European Commission report says artificial intelligence (AI) could help deliver quality healthcare to an ageing population, but warns that complex regulations and a lack of trust are slowing uptake.
'Looking for a smart and fun summer read about Artificial Intelligence in healthcare? Check this out!' joked a digital health policy officer at the Commission's DG Sante in a recent LinkedIn post, pointing to the newly published 241-page Study on the Deployment of AI in Healthcare.
While the fun parts might be hard to find, the report – published just in time for the summer holidays – provides a comprehensive overview of the topic, including its opportunities and challenges.
Between 2015 and 2024, EU research projects on AI in health received €3.5 billion in funding, averaging €6.73 million per project – figures laid out in the study that give a picture of how heavily the bloc is already invested in the field. Over the same period, AI-related healthcare patents increased from 22 in 2017 to 118 in 2023, and clinical trials of AI- and machine learning–enabled medical devices rose from six in 2015 to 657 in 2024.
The report found that AI is already being used in hospitals worldwide to automate tasks such as clinical documentation, improve diagnostic accuracy and speed, and enable personalised treatments – with demonstrable impacts on care.
Such tools are urgently needed as health systems face rising demand from an ageing population increasingly affected by chronic disease, alongside staff shortages and rising costs. However, developers and implementers must navigate overlapping rules, including the Artificial Intelligence Act, the General Data Protection Regulation (GDPR), the Medical Device Regulation and the planned European Health Data Space. "While robust, the regulatory environment governing AI in healthcare presents complexities that may contribute to hesitancy in AI deployment," reads a summary of the report.
The study points to ideas such as setting up interdisciplinary AI governance committees in clinical settings to help oversee compliance and streamline decision-making. It also mentions measures like comprehensive data governance frameworks, robust storage policies, encryption methods and privacy-by-design technologies to address security concerns.
It further notes that boosting digital health literacy and technical skills among healthcare professionals and patients could help build trust in AI's reliability and ease concerns about its ethical implications.
"Through strategic action and a commitment to overcoming deployment challenges, the EU can position itself as a global leader in AI-driven healthcare innovation," the report concludes.
(de)

Related Articles

Poland plans 3% tax on tech giants

Euractiv · 5 hours ago

Poland is planning to levy a 3% tax on large tech companies, the proceeds of which it wants to use to support its own technology and media sectors.

As negotiations to adapt the global tax system to modern digital realities continue at the OECD level, many European countries are working on their own taxes targeting large, predominantly US tech companies. These efforts have caught the attention and ire of US President Donald Trump. But despite noisy pushback from his administration – including threats of retaliation via trade tariffs – a number of countries are pressing on.

In Poland, Donald Tusk's government signalled its intent to tax Big Tech back in March, and further details of the plan are now starting to emerge. Work on Poland's draft bill will 'continue through the end of the year', the digital ministry told Euractiv. Once ready, the bill will undergo public consultation, it said. Depending on the legislative process, the tax could take effect as soon as 2027.

The ministry said the tax will apply to companies whose global revenue exceeds €750 million. It will be aimed at platforms including marketplaces, social media and ride-sharing apps, as well as companies carrying personalised advertisements or selling user data. Services that only provide users with access to content (such as games) or interfaces (such as payment or communications platforms) are set to be exempt, as are financial services and direct web sales – such as via a retail company's own website.

Companies would have to report revenues generated in Poland or in relation to Poland, based on whether they can 'reasonably assume' users are residents of the country – for example through their IP addresses, per the ministry. It added that a 'modest' 3% tax on tech giants' revenues could generate up to €470 million in the first year, and it expects the tax take to continue to grow after that.
The proposal appears quite similar to the Commission's 2018 draft for an EU-wide digital tax – which would have paid into national treasuries – but that plan was abandoned after member states were unable to reach agreement. The Commission also publicly mulled introducing an EU digital tax to pay back the Union's Covid recovery debts; instead, however, it recently proposed an EU-wide tax on large companies in general. (nl)

EU study warns over the shortcomings of AI benchmarking

Euractiv · 11 hours ago

EU researchers are warning over problems with how AI capabilities are measured and urging regulators to ensure that the performance numbers AI companies attach to their models mean what they claim.

A new paper released last week by the Commission's Joint Research Centre concludes that AI benchmarks are promising too much. The authors found that the proprietary tools which compare AI models are easily game-able and measure the wrong things.

AI companies use benchmarks to put numbers on how well their models perform at certain tasks. OpenAI, for example, tested its newly released GPT-5 on how reliably it abstains from answering questions that cannot be answered – with the new model purportedly achieving a higher score than an older one. But the EU researchers are urging regulators to focus on carefully examining how these tools work.

Benchmarking AI matters for the EU because its rules for artificial intelligence rely on evaluating model capabilities in many different contexts. For example, large models can be counted as presenting special risk under the EU's AI law if a benchmark assesses that they have 'high impact capabilities'. The law allows the Commission to specify what exactly that should mean through a delegated act – which the EU's executive has, so far, not done.

Meanwhile, on Friday the US government launched a suite of evaluation tools that its own government agencies can use to test AI tools. The country's AI Action Plan also sets out a clear ambition to push US leadership in this area.

Which AI benchmarks to trust?

The EU researchers say policymakers should ensure that benchmarks target real-world capabilities rather than narrow tasks; are well-documented and transparent; clearly define what they're measuring and how; and include different cultural contexts. Another problem, per the paper, is that existing benchmarks often focus on the English language. 'We especially identify a need for new ways of signalling what benchmarks to trust,' the authors also write.
Done well, the EU researchers suggest, such reforms would give policymakers an opportunity for a new kind of 'Brussels effect'. Risto Uuk, head of EU policy and research at the AI-focused thinktank the Future of Life Institute, told Euractiv he shared the paper's concerns, suggesting the EU should require third-party evaluators and fund the development of the AI evaluation ecosystem. 'Improvements are necessary, but evaluating capabilities and other aspects of risks and benefits is crucial, and simply relying on vibes and anecdotes is not enough,' he added. (nl)

