GPs Voice Top 10 AI Concerns

Medscape · a day ago

A review of queries received by the Medical Protection Society (MPS) last year has highlighted the top concerns of GPs about the growing use of artificial intelligence (AI) in medical practice.
The MPS told Medscape News UK that all AI-themed queries came from GPs. No questions were raised by secondary care doctors.
Key Concerns Identified
The assessment revealed 10 main areas of concern for GPs:
Patient safety
Data protection
Patient consent
Liability
Indemnity
Transcribing tools
Generation of fit notes
Clinical prompts
Processing of laboratory results
Generative AI
GPs Wary of AI Risks
Medicolegal advice on these themes has been published on the MPS website.
"We know members are keen to explore and adopt AI tools, which may enhance patient care and help to facilitate more efficient working," Dr Ben White, MPS deputy medical director, said in a press release.
However, calls to the medicolegal advice line showed members were wary of potential risks.
Particular concerns included liability and defence society cover, patient safety, data protection, and patient consent. These concerns echo a Medscape survey last year, which found that one in five respondents was apprehensive about using AI in clinical practice.
More specific advice was sought on the medicolegal implications of using AI software for transcribing patient consultations, generating fit-to-fly letters or fit notes, using clinical prompts, and processing laboratory results.
"AI is of course fast-evolving, and we will continue to revisit the guidance," White said.
The MPS Foundation has contributed to a white paper on AI in healthcare, aimed at ensuring AI tools are developed and integrated to be usable, useful, and safe for both patients and clinicians.
"Bringing about greater confidence in AI among clinicians is vital if the potential benefits are to be unlocked," White said.
Doctors Support AI But Remain Concerned
Nell Thornton, improvement fellow at the Health Foundation, said the MPS findings echoed the foundation's own research.
"If AI is to help fulfil its promise to improve patient care, further clarification is needed on issues like regulation and professional liability," she told Medscape News UK .
Doctors are keen to ensure that the human dimension of care is protected in an increasingly digitised system, she added. "Addressing these issues will be crucial for ensuring the development and adoption of AI is responsible and works for everyone."
Professor Kamila Hawthorne, chair of the Royal College of General Practitioners, told Medscape News UK that the college was "always open to introducing new technologies that can improve the experience of patients – and GPs are often at the forefront of this". However, AI use is "not without potential risks" and its implementation "must be closely regulated to guarantee patient safety and the security of their data".
"Technology will always need to work alongside and complement the work of doctors and other healthcare professionals, and it can never be seen as a replacement for the expertise of a qualified medical professional," Hawthorne emphasised. "Clearly there is big potential for the use of AI in general practice, but it's vital that it is carefully implemented and closely regulated in the interest of patient safety."
Consultation Summaries Show Promise and Peril
Dr Rosie Shire from the Doctors' Association UK GP committee told Medscape News UK: "As more AI tools become available, the possibilities of how they could assist GPs grow, but so do the concerns, as highlighted by the MPS."
The idea of AI software providing a summary of a GP consultation was "very appealing", she said. It could enable GPs to focus fully on the patient rather than taking notes, which could improve communication and patient satisfaction, and save time on recall and typing up notes.
"However, it's vital software is accurate and reliable, able to distinguish who is speaking, and understands accents and dialects,' Shire stressed. 'Otherwise it could lead to increased workload, as GPs spend additional time checking the AI has performed correctly."
AI Can't Replace 'Gut Feeling'
There is potential for inaccuracies to be introduced into patient records if clinicians become over-reliant on AI-generated records of consultations, Shire said.
In time-pressured clinics, GPs might not read through AI summaries.
"It's important clinicians are given time to review and confirm any AI-generated outcome to ensure accuracy. We also need to remember that AI can't replace that gut feeling you occasionally get as a GP when you see a patient you know, and feel that something just isn't right,' she added.
Liability Questions Remain
Shire said that a standard policy on AI use in general practice was needed. Clarity is also required over who would be responsible if AI software malfunctioned and led to patient harm.
"Would it be the practice partners, as the data owners, the GP who saw the patient, or the AI software provider?"
White said the MPS would not normally provide indemnity for issues relating to AI software failure. While the society is not currently aware of any binding case law, "we would expect the designers or producers of the AI software to be liable for any issues relating to the failure of the AI software".
Where a practice has purchased an AI system, the individual GP using it would remain potentially liable for any harm that results from its use.
"Doctors remain responsible for the decisions they make whilst using such technology,' he said. 'In order to mitigate the risk of a complaint or claim, doctors should assure themselves that any system they are using is fit for purpose and appropriate for the patients that they are seeing."


Related Articles

AI Talent Pipeline: How Nations Compete In The Global AI Race

Forbes · 36 minutes ago

The artificial intelligence revolution is often framed as a clash of algorithms and infrastructure, but its true battleground lies in the minds shaping it – the AI talent. While nations vie for technological supremacy, the real contest revolves around human capital – who educates, retains, and ethically deploys the brightest minds. This deeper struggle, unfolding across universities, immigration offices, and corporate labs, beneath the surface of flashy model releases and geopolitical posturing, will determine whether AI becomes a force for equitable progress or a catalyst for deeper global divides.

China's educational machinery is producing graduates at an unprecedented scale. In 2024, approximately 11.79 million students graduated from Chinese universities, an increase of 210,000 from the previous year, with around 1.6 million specializing in engineering and technology in 2022. By comparison, the United States produces far fewer graduates (around 4.1 million overall in 2022), with only 112,000 graduating with computer and information science degrees. This numerical advantage is reshaping the global talent landscape. According to MacroPolo's Global AI Talent Tracker, China has expanded its domestic AI talent pool significantly, with the percentage of the world's top AI researchers originating from China (based on undergraduate degrees) rising from 29% in 2019 to 47% in 2022. However, the United States remains "the top destination for top-tier AI talent to work," hosting approximately 60% of top AI institutions.

Despite this, America's approach to talent faces structural challenges. The H-1B visa cap, which limits skilled foreign workers, is "set at 65,000 per fiscal year, with an additional 20,000 visas for those with advanced U.S. degrees." This self-imposed constraint limits the tech industry's ability to recruit globally and forces companies like Google and Microsoft to establish research centers in Toronto and London – a technological offshoring driven not by cost but by talent access.

Europe's predicament completes this global triangle of talent misallocation. The continent invests heavily in AI education – ETH Zurich and Oxford produce world-class specialists – only to watch a significant number board one-way flights to California, lured by higher salaries and cutting-edge projects. This exodus potentially creates a vicious cycle: fewer AI experts means fewer competitive startups, which means fewer reasons for graduates to stay – a continental brain drain that undermines Europe's digital sovereignty.

As AI permeates daily life, a quieter conflict emerges: balancing innovation with ethical guardrails. Europe's GDPR has levied huge fines since 2018 for data misuse, such as the $1.3 billion imposed on Meta for transferring EU citizens' data to the US without adequate safeguards, while China's surveillance networks track citizens through 626 million CCTV cameras. U.S. giant Apple was recently fined $95 million amid claims it had been spying on users over a decade-long period. These divergent approaches reflect fundamentally different visions for AI's role in society. "AI is a tool, and its values are human values. That means we need to be responsible developers as well as governors of this technology – which requires a framework," argues Fei-Fei Li, Stanford AI pioneer. Her call for ethical ecosystems contrasts sharply with China's state-driven model, where AI development aligns with national objectives like social stability and economic planning.

The growing capabilities of AI raise significant questions about the future of work. Andrew Ng, co-founder of Google Brain, observes: "AI software will be in direct competition with a lot of people for a lot of jobs." McKinsey estimates that 30% of U.S. jobs could be automated by 2030, but impacts will vary wildly. According to the WEF Future of Jobs Report 2025, "on average, workers can expect that two-fifths (39%) of their existing skill sets will be transformed or become outdated over the 2025-2030 period." The report goes on to state that "the fastest declining roles include various clerical roles, such as cashiers and ticket clerks, alongside administrative assistants and executive secretaries, printing workers, and accountants and auditors." These changes are accelerated by AI and information processing technologies, as well as by growing digital access for businesses. The panic these figures could cause is palpable, but some argue that the real challenge isn't job loss – it's ensuring that displaced workers can transition to new roles. Countries investing in vocational AI training, like Germany's Industry 4.0 initiative, could achieve smoother workforce transitions.

By 2030, China could reverse its brain drain through initiatives like the Thousand Talents Plan, which has already repatriated over 8,000 scientists since 2008. If successful, this repatriation could supercharge domestic innovation while depriving Western labs of critical expertise. Europe's stringent AI Act, meanwhile, may inadvertently cede ground to less regulated regions. Businesses could start self-censoring AI projects to avoid EU compliance costs, with complexity, risk governance, data governance, and talent listed as the four major challenges facing EU organisations in light of the Act, according to McKinsey, which also found that only 4% of its survey respondents thought the requirements of the EU AI Act were even clear. This could create an innovation vacuum, pushing experimental AI development to jurisdictions with less stringent oversight.

Breakthroughs in quantum computing could also reshape talent flows. IBM's Condor and China's Jiuzhang 3.0 are vying to crack quantum supremacy, with the winner likely attracting a new wave of specialists. "We need to make the world quantum-ready today by focusing on education and workforce development," according to Alessandro Curioni, vice-president at IBM Research Europe. A recent WEF report on quantum technologies warns that "demand for experts is outpacing available talent and companies are struggling to recruit people in this increasingly competitive and strategic industry."

(Image: An IBM quantum data center – dpa/picture alliance via Getty Images)

The focus on human talent suggests a more nuanced understanding of AI development – one that values creative problem-solving and ethical considerations alongside technical progress. "I think the future of global competition is, unambiguously, about creative talent," explains Vivienne Ming, executive chair and co-founder of Socos Labs. "Everyone will have access to amazing AI. Your vendor on that will not be a huge differentiator. Your creative talent though – that will be who you are." Similarly, Silvio Savarese, executive vice president and chief scientist at Salesforce AI Research, believes: "AI is placing tools of unprecedented power, flexibility and even personalisation into everyone's hands, requiring little more than natural language to operate. They'll assist us in many parts of our lives, taking on the role of superpowered collaborators."

The employment landscape for AI talent faces complex challenges. In China, the record graduating class of 11.8 million students in 2024 confronts a difficult job market, with youth unemployment in urban areas at 18.8% in August, the highest for the year. These economic realities are forcing both countries to reconsider how they develop, attract, and retain AI talent. The competition isn't just about who can produce or attract the most talent – it's about who can create environments where that talent can thrive and innovate responsibly.

The AI race transcends technical capabilities and infrastructure. While computing power, algorithms, and data remain important, human creativity, ethics, and talent ultimately determine how AI will shape our future. As nations compete for AI dominance, they must recognize that sustainable success requires nurturing not just technical expertise but also creative problem-solving and ethical judgment – skills that remain distinctly human even as AI capabilities expand. In this regard, success isn't about which nation develops the smartest algorithms, but which creates environments where human AI talent can flourish alongside increasingly powerful AI systems. This human element may well be the deciding factor in who leads the next phase of the AI revolution.

Meta plans to automate many of its product risk assessments

Yahoo · 37 minutes ago

An AI-powered system could soon take responsibility for evaluating the potential harms and privacy risks of up to 90% of updates made to Meta apps like Instagram and WhatsApp, according to internal documents reportedly viewed by NPR. NPR says a 2012 agreement between Facebook (now Meta) and the Federal Trade Commission requires the company to conduct privacy reviews of its products, evaluating the risks of any potential updates. Until now, those reviews have been largely conducted by human evaluators.

Under the new system, Meta reportedly said product teams will be asked to fill out a questionnaire about their work, then will usually receive an "instant decision" with AI-identified risks, along with requirements that an update or feature must meet before it launches. This AI-centric approach would allow Meta to update its products more quickly, but one former executive told NPR it also creates "higher risks," as "negative externalities of product changes are less likely to be prevented before they start causing problems in the world." In a statement, Meta seemed to confirm that it is changing its review system, but insisted that only "low-risk decisions" will be automated, while "human expertise" will still be used to examine "novel and complex issues."

This article originally appeared on TechCrunch.

Canadian Medical Association welcomes Dr. Margot Burnell as new president

Yahoo · 41 minutes ago

OTTAWA, ON, May 31, 2025 /CNW/ - New Brunswick's Dr. Margot Burnell is now the president of the Canadian Medical Association (CMA), following an installation ceremony at its annual general meeting today. An oncologist for more than three decades at the Saint John Regional Hospital, Dr. Burnell has seen the impact of a health system under pressure. During her address to physician colleagues, she emphasized her commitment to improving access to care as a core issue of her year-long mandate. She is also eager to advance Indigenous reconciliation in health care, strengthen Canada's health workforce, reduce physicians' paperwork, and improve physician well-being.

"Canadians deserve a health system that is there for them when they need it," she says. "I strongly believe that ideas and lived experience from patients, physicians and others at the front lines are critical to building a future for health care that Canadians can take pride in."

The president of the CMA is elected by members from a different province or territory every year and speaks on behalf of Canada's physicians to policy-makers, partners, media and Canadians. With the start of Dr. Burnell's presidency, Dr. Joss Reimer of Winnipeg becomes the CMA's past president.

Dr. Bolu Ogunyemi named CMA president-elect

Dr. Bolu Ogunyemi became president-elect at today's AGM. Dr. Ogunyemi practises medical dermatology in St. John's, NL, and has maintained a visiting specialist clinic in Labrador City since 2018. In addition to numerous leadership roles at Memorial University of Newfoundland's Faculty of Medicine, including the inaugural Assistant Dean of Social Accountability (2020-23), he has served on the Newfoundland and Labrador Medical Association Board of Directors.

About the CMA

The Canadian Medical Association leads a national movement with physicians who believe in a better future of health. Our ambition is a sustainable, accessible health system where patients are partners, a culture of medicine that elevates equity, diversity and well-being, and supportive communities where everyone has the chance to be healthy. We drive change through advocacy, giving and knowledge sharing – guided by values of collaboration and inclusion.

SOURCE Canadian Medical Association
