The ethics of using AI to predict patient choices

Observer · 3 days ago

I recently attended a bioethics conference in Switzerland, where professionals from different countries met to discuss current topics in medical ethics. Among the highlights of the meeting were several talks on the role of artificial intelligence in clinical decision-making and its ethical implications. What caught my attention was a presentation on the Personalised Patient Preference Predictor, or P4: a tool that aims to predict an individual patient's healthcare preferences using machine learning.

The idea is that in situations where a person is incapacitated, for example found unconscious with no advance directive, the AI would comb through their digital footprint, including tweets, Instagram and Facebook posts, and possibly even emails, to infer their likely wishes. The system would then construct a virtual copy of the individual's personality, known as a 'psychological twin', which would communicate decisions to the medical team on the person's behalf.

While this concept is technologically fascinating, it raises several pressing ethical concerns. First, it assumes that our social media presence accurately reflects our core values and long-term preferences. But people's views are dynamic, shaped by emotional state, life experience, and personal growth. A sarcastic tweet or a momentary opinion shared online may not represent someone's actual end-of-life wishes.

Second, the use of AI risks introducing or amplifying bias, especially against the elderly and individuals from ethnic or religious minorities. AI systems often generalise from large datasets, which can lead to one-size-fits-all assumptions that disregard cultural, spiritual, or personal nuances.

Another critical question: can AI truly understand or navigate the emotional and moral complexity of disagreements among family members and healthcare providers? Would it possess the empathy required to mediate a delicate conversation, or would it deliver cold logic such as: 'Grandpa is too old, his survival chances are low, so resources would be better allocated elsewhere'?

Furthermore, relying on AI for such deeply human decisions risks the deskilling of health professionals. Ethical decision-making is an essential skill developed through experience, reflection, and dialogue. If AI takes over these roles, clinicians may gradually lose the ability, or the confidence, to engage in these vital discussions.

The speaker, who advocated for the use of P4, admitted he did not fully understand how the AI makes its decisions. This lack of transparency is alarming. If we are to entrust a machine with life-or-death recommendations, we must first demand clarity and accountability in its design and operation.

In my view, while AI has a growing role in healthcare, ethical decision-making remains a human responsibility. These discussions are often fraught with disagreement, cultural sensitivity, and intense emotion, particularly when they involve questions of life and death. We are not yet ready to hand this task over to machines.
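A footnote for technically minded readers: the P4 described at the conference has no public specification, but the general approach, inferring a preference from a person's written posts, can be sketched with standard text-classification tools. The snippet below is a minimal illustration assuming a hypothetical labelled training set and a simple scikit-learn pipeline; it is not the actual system, and all posts, labels, and outputs are invented for the example.

```python
# Toy "patient preference predictor": a text classifier over social posts.
# Purely illustrative; the real P4 system is not publicly specified.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: posts from consented individuals whose
# treatment preferences are known (1 = would want aggressive treatment,
# 0 = would decline it). A real system would need far larger, messier data.
posts = [
    "Fight for every day you can get, never give up on life",
    "Quality of life matters more to me than how long I live",
    "Modern medicine is amazing, I'd try every treatment available",
    "I watched my father suffer on machines; I never want that",
]
labels = [1, 0, 1, 0]

# Bag-of-words features (TF-IDF) feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The incapacitated patient's digital footprint (again, hypothetical).
patient_posts = ["I'd rather live one good year than five bad ones"]
prob = model.predict_proba(patient_posts)[0, 1]
print(f"Estimated probability of preferring aggressive treatment: {prob:.2f}")
```

Even this toy version makes the worries above concrete: a single sarcastic post could flip the prediction, and nothing in the pipeline represents cultural or spiritual context, let alone explains its own output.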
