The ethics of using AI to predict patient choices


Observer · 3 days ago

I recently attended a bioethics conference in Switzerland, where professionals from several countries met to discuss current topics in medical ethics, the main theme of this year's programme. Among the highlights were several talks on the inclusion of Artificial Intelligence in decision-making and its ethical impact.
What caught my attention was a talk about the Personalised Patient Preference Predictor, or P4, a tool that uses machine learning to predict an individual patient's healthcare preferences.
The idea is that in situations where a person is incapacitated — for example, found unconscious with no advance directive — the AI would comb through their digital footprint, including tweets, Instagram and Facebook posts, and possibly even emails, to infer their likely wishes. The system would then create a virtual copy of the individual's personality, known as a 'psychological twin,' which would communicate decisions to the medical team on the person's behalf.
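To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of aggregation such a system might perform. It is purely illustrative: the `Post` structure, the stance scores, and the recency weighting are my own assumptions, not a description of how P4 actually works, and a real system would involve far more than a weighted average.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Post:
    """One social-media item, already scored for stance on aggressive treatment."""
    day: date
    stance: float  # hypothetical score in [-1, 1]: -1 = would refuse, +1 = would accept

def predict_preference(posts: list[Post], today: date, half_life_days: float = 365.0) -> tuple[float, float]:
    """Recency-weighted mean of stance scores; returns (estimate, evidence_weight)."""
    weighted_sum = 0.0
    total_weight = 0.0
    for post in posts:
        age_days = (today - post.day).days
        weight = 0.5 ** (age_days / half_life_days)  # older posts count for less
        weighted_sum += weight * post.stance
        total_weight += weight
    if total_weight == 0.0:
        return 0.0, 0.0
    return weighted_sum / total_weight, total_weight

# A flippant old tweet barely moves a recent, considered statement.
posts = [
    Post(date(2018, 6, 1), -0.9),   # sarcastic joke years ago
    Post(date(2025, 1, 10), +0.7),  # recent reflective post
]
print(predict_preference(posts, today=date(2025, 6, 1)))
```

Even this toy version makes the next objection visible: the output depends entirely on how each post is scored and how quickly older opinions are discounted.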
While this concept is technologically fascinating, it raises several pressing ethical concerns. First, it assumes that our social media presence accurately reflects our core values and long-term preferences. However, people's views are dynamic and influenced by their emotional state, life experiences, and personal growth. A sarcastic tweet or a momentary opinion shared online may not represent someone's actual end-of-life wishes.
Second, the use of AI risks introducing or amplifying bias — especially against the elderly and individuals from ethnic or religious minorities. AI systems often generalise from large datasets, which can lead to 'one-size-fits-all' assumptions that disregard cultural, spiritual, or personal nuances.
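A back-of-the-envelope example shows the mechanism behind this worry; the numbers below are invented purely for illustration.

```python
# Hypothetical rates, chosen only to illustrate the 'one-size-fits-all' failure mode.
majority_rate = 0.8   # share of the (large) majority group preferring aggressive treatment
minority_rate = 0.3   # share of an under-represented group preferring the same

def population_level_prediction() -> bool:
    """A model that simply predicts the most common preference in its training data."""
    return majority_rate > 0.5

predicts_aggressive = population_level_prediction()
# For members of the under-represented group, that 'most likely' answer is wrong
# most of the time, even though accuracy across the whole population looks good.
group_error_rate = (1 - minority_rate) if predicts_aggressive else minority_rate
print(f"Error rate within the under-represented group: {group_error_rate:.0%}")  # 70%
```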
Another critical question is: can AI truly understand or navigate the emotional and moral complexity of disagreements among family members and healthcare providers? Would it possess the empathy required to mediate a delicate conversation, or would it deliver cold logic such as: 'Grandpa is too old, his survival chances are low, so resources would be better allocated elsewhere'?
Furthermore, relying on AI for such deeply human decisions risks the deskilling of health professionals. Ethical decision-making is an essential skill developed through experience, reflection, and dialogue. If AI takes over these roles, clinicians may gradually lose the ability — or the confidence — to engage in these vital discussions.
The speaker, who advocated for the use of P4, admitted he did not fully understand how the AI makes its decisions. This lack of transparency is alarming. If we are to entrust a machine with life-or-death recommendations, we must first demand clarity and accountability in its design and operation.
In my opinion, while AI has a growing role in healthcare, ethical decision-making remains a human responsibility. These discussions are often fraught with disagreement, cultural sensitivity, and intense emotion — particularly when they involve questions of life and death. We are not yet ready to hand this task over to machines.


Related Articles

South Al Batinah Design Hackathon kicks off

Times of Oman · 21 hours ago

Al Rustaq: The South Al Batinah Design Hackathon, organised by the Governorate's Office in collaboration with the University of Technology and Applied Sciences in Al Musannah, launched at the Office of the Wali of Al Rustaq on Monday. Held under the theme 'An Urban Design with the Identity of Place', the two-day event aims to design and develop an integrated tourism project for Khor Al Qareem in the Wilayat of Al Musannah, combining the area's environmental and cultural identity with modern technologies, including artificial intelligence (AI) tools. The hackathon aligns with the national approach to sustainable tourism and smart urban development, supporting the goals of Oman Vision 2040.

Over 125 students from 25 educational institutions, both within the Sultanate of Oman and abroad, are participating in multidisciplinary teams specialising in urban planning, architecture, civil engineering, and interior design. The programme includes training workshops and consultative sessions supervised by experts to enhance participants' skills and refine their design ideas.

On the final day, a judging panel will select six projects for evaluation based on criteria such as innovation, sustainability, environmental and cultural integration, space efficiency, and AI utilisation. The top three teams will receive valuable prizes, and the first-place project will be implemented in Khor Al Qareem. The rights to the three shortlisted projects will belong to the South Al Batinah Governorate, which reserves the right to execute or modify them for public benefit.

This hackathon serves as a platform to empower Omani youth, showcase their creative talents in design and urban planning, and engage them in developing sustainable urban and tourism environments that reflect community aspirations and align with national principles of sustainable development.

Zoho Launches Zia Hubs to Empower Businesses in Oman and the region

Times of Oman · 2 days ago

Zoho Corp., a leading global technology company, today announced the launch of Zia Hubs in the Middle East and North Africa (MENA) region, a new solution within Zoho WorkDrive, the company's secure content collaboration platform, that enables businesses to extract intelligence from unstructured data and activate it across their operations. Built on Zia, Zoho's proprietary AI engine, Zia Hubs allows organisations to connect various types of content—documents, audio, video, and more—to Zoho's expansive suite of apps for deeper insights, agentic AI capabilities, and unified, context-aware search.

"According to IDC, 80 percent of business data is unstructured," said Hyther Nizam, President Middle East and Africa (MEA), Zoho. "Most unstructured data is text-based, meaning pertinent information lives within email conversations, social media posts, word processor documents, or audio and video transcripts. In fast-evolving economies like those in the Middle East, the ability to leverage AI to unlock hidden value from business content can dramatically improve decision-making, operations, and customer engagement. With Zia Hubs deeply integrated into our suite of apps, customers can put their data to work in ways unmatched by any other platform," Nizam added.

Zia Hubs is now part of Zoho WorkDrive, which gives businesses full control over what content is AI-accessible and allows them to organise team- or project-specific material into dedicated 'hubs'. Once the content—ranging from PDFs and spreadsheets to call logs and video files—is added to a hub, Zia automatically structures it, preserving context through section headings, supporting visuals, and linked references. For audio and video content, Zia generates searchable transcripts and highlights key moments tied to relevant topics, making it easy to locate the exact moment quickly.

With Zia Hubs, users can ask complex questions and receive cited answers that span myriad content formats. Whether a user is referencing a legal document, financial spreadsheet, or customer support recording, Zia's responses link directly to the original material. Businesses can also create automated workflows using Zoho Flow, ensuring that the latest relevant files are consistently routed into the appropriate hubs for ongoing analysis. Moreover, Zia Hubs supports content from third-party sources such as DocuSign, RingCentral, and Zoom, giving teams the ability to manage and analyse external data with the same intelligence layer.

Zia Hubs is a foundational element of Zoho's long-term AI strategy, laying the groundwork for a future where intelligent agents can act contextually on content across the company's entire product suite. With full ownership of its technology stack spanning more than 55 products, Zoho is uniquely positioned to help organisations unlock deeper value from their business content. Future updates to Zia Hubs will allow it to identify structured information within unstructured files and trigger specialised agents tailored to specific business needs, further establishing Zia Hubs as the central content intelligence layer that activates AI-native workflows across the full Zoho ecosystem. Zia Hubs will be available as part of Zoho WorkDrive by the end of Q3 2025 and will integrate fully with applications like Zoho CRM, Zoho Desk, Zoho Projects, and Zoho Flow.
Businesses in the UAE, Saudi Arabia, Egypt, and throughout the region can take advantage of this powerful new capability to transform content into intelligence and intelligence into action.
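As a rough illustration of the 'cited answers across content formats' pattern described above (and emphatically not Zoho's actual API), a toy version in Python might look like the sketch below; the HubItem structure and the keyword-matching logic are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class HubItem:
    """One piece of hub content: a document, transcript, call log, and so on."""
    source: str   # e.g. a file name or recording id
    kind: str     # "pdf", "spreadsheet", "video-transcript", ...
    text: str

def cited_answers(question: str, hub: list[HubItem]) -> list[tuple[str, str]]:
    """Toy retrieval: return (snippet, citation) pairs whose text shares words
    with the question, regardless of the item's original format."""
    query_words = set(question.lower().split())
    hits = []
    for item in hub:
        for sentence in item.text.split("."):
            if query_words & set(sentence.lower().split()):
                hits.append((sentence.strip(), f"{item.source} ({item.kind})"))
    return hits

hub = [
    HubItem("contract_2024.pdf", "pdf", "The renewal deadline is 30 September."),
    HubItem("support_call_118", "video-transcript", "Customer asked about the renewal deadline."),
]
print(cited_answers("renewal deadline", hub))
```

The point of the sketch is only that every returned snippet carries a reference back to its source item, whatever its format; a production system would rely on semantic indexing rather than word overlap.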

