
The Hidden Hazards of Generative AI in Medicine
Dr Sara Jamieson
Even those who have embraced it and are optimistic about its benefits admit they're unsure about the risks.
AI, though, is different from anything that has come before. It is something we should consider embracing and incorporating into our practice as it develops. If we don't, we risk being left behind. Imagine how life might look if we had ignored the introduction of the World Wide Web back in the mid-90s. Similarly, think back to the rapid introduction of telemedicine at the start of the COVID-19 pandemic. This method of consulting with patients, previously frowned upon as too inherently risky, continued even after infection levels subsided.
Any new way of practising will bring with it new medicolegal risks, and steps will need to be taken to mitigate these.
Whilst beyond the scope of this article, a true understanding of the medicolegal implications of using AI in healthcare requires an understanding of what AI is and how its outputs are generated. This is particularly true for generative AI tools such as ChatGPT.
Dr Lucy Hanington
According to a survey of over 1000 GPs in the UK, published in BMJ Health & Care Informatics, one in five reported using generative AI tools such as ChatGPT in their clinical practice, including for day-to-day tasks such as writing letters. Of these, nearly a third (28%) said they had used the tools to suggest a differential diagnosis, and a quarter said they had used them to suggest treatment options.
Consider this scenario:
Dr C, a GP, was running behind schedule and still had three more patients to see. During her next consultation, a 50-year-old patient, Mr B, came in with a set of vague symptoms. Dr C considered a range of possible conditions.
Feeling under pressure, she discreetly opened ChatGPT and entered an anonymised summary of the patient's symptoms, requesting a differential diagnosis and possible lab tests. The AI quickly returned a detailed summary of plausible possibilities, including some that Dr C hadn't considered herself. She was impressed and used the suggestions to help her decide on the next steps for Mr B, which included arranging further tests.
That night, however, the consultation weighed on her mind, and she couldn't sleep. She knew she hadn't disclosed her use of AI to the patient. She also worried whether she had inadvertently input details that could make Mr B identifiable. She also questioned whether the AI's suggested diagnoses might have influenced her clinical judgement.
By morning, Dr C was feeling anxious and uneasy, and decided to call the Medical Protection Society (MPS) for advice. A medicolegal consultant advised her to consider whether, objectively, she still agreed with the management plan and could justify it clinically, to rectify any omissions immediately, and to discuss the case with a colleague if helpful. The consultant also explained the consent and confidentiality principles around AI use.
Benefits
Generative AI tools offer many potential benefits for both doctors and patients. Patients may use these tools to understand medical terminology or a diagnosis they have been given. Doctors may find that, when used safely, generative AI can aid diagnosis or identify potential drug interactions.
However, generative AI is not always correct. As well as errors or omissions, it can sometimes produce 'hallucinations,' confidently presenting incorrect information as fact. It is incumbent on the clinicians using these tools to ensure that information shared with a patient is reliable and accurate.
Bias, whilst not unique to AI, also deserves consideration. The data used by AI tools may be biased due to the inclusion or exclusion of certain information. Outputs may also fail to account for the demographics of a particular patient population.
The use of generative AI does not permit doctors to work outside the limits of their competence. There should be no overreliance on the software, and doctors remain ultimately responsible for the decisions they make.
Data Protection and Confidentiality
Data protection and confidentiality, as highlighted in the earlier scenario, are key considerations. Compliance with the General Data Protection Regulation (GDPR) is essential when using generative AI. These tools, by their nature, store, share, and learn from the information entered into them, and that information may become accessible to others. Care must be taken not to input any personal patient data. Simply removing a patient's name may not be sufficient to anonymise their information, as other details could make them identifiable.
To ensure compliance with data protection legislation, we recommend seeking guidance from:
Hospital Data Protection Officers, who may be able to advise on this in a secondary care setting
Integrated Care Boards, who may have policies that would be applicable
The Information Commissioner's Office (ICO)
Consent
The earlier scenario also highlights the issue of consent. Consent remains a key cornerstone of all doctor-patient interactions.
The ICO advises that, for consent to be a lawful basis for processing data when using AI, it must be freely given, specific, and clear; the individual must actively agree, and they must be able to withdraw their consent at any time.
AI as an Aid
It is important to hold in mind that AI is a tool to assist, rather than replace, the clinician. When it comes to decision-making, AI software can't readily consider the individual wishes and circumstances of the patient. The 'black box' problem — a lack of transparency in how an AI system reaches conclusions — can make it difficult to challenge outputs or detect errors.
Ultimately, AI should not replace clinical reasoning, evidence-based medicine, or consultation with colleagues, peers, multidisciplinary teams, specialists, seniors, and supervisors.
Training and Continued Learning
Clinicians should aim to be literate in AI, understand its basis and governance, and know how to use it safely.
We would encourage all clinicians to incorporate learning on the topic as part of their annual development plans. A multitude of resources on AI are available across medical colleges and institutions. We would also recommend watching the recordings of the recent MPS Foundation AI symposia.
A white paper, published by the MPS Foundation, the Centre for Assuring Autonomy at the University of York, and the Improvement Academy hosted at the Bradford Institute for Health Research, offers further useful guidance for doctors to consider on AI use.
Conclusion
Like it or not, AI is here to stay.
Readers should consider its potential benefits while remaining alive to its limitations and risks. Doctors should also ensure they continue to work in a manner consistent with General Medical Council guidance and relevant legislation.
If in doubt about using AI tools and their medicolegal implications, doctors can contact MPS or their medical defence organisation to request advice.
This article is published as part of an editorial collaboration between Medscape UK and MPS that aims to deliver medicolegal content to help healthcare professionals navigate the many challenges they face in their clinical practice.
Dr Sara Jamieson, MBBS, trained in psychiatry before joining MPS in 2016 as a medicolegal consultant. She has disclosed no relevant financial relationships.
Dr Lucy Hanington, BMBCh, MRCPCH, has worked as a medicolegal consultant at MPS since 2016. She has disclosed no relevant financial relationships.