
ChatGPT As Your Bedside Companion: Can It Deliver Compassion, Commitment, And Care?
Sam Altman has said that health is one of the top reasons consumers use ChatGPT, noting that it 'empowers you to be more in control of your healthcare journey.'
Around the world, patients are turning to AI chatbots like ChatGPT and Claude to better understand their diagnoses and take a more active role in managing their health. In hospitals, both patients and clinicians sometimes use these AI tools informally to verify information. At medical conferences, some healthcare professionals admit to carrying a 'second phone' dedicated solely to AI queries. Without accessing any private patient data, they use it to validate their assessments, much like patients seeking a digital 'second opinion' alongside their physician's advice.
Even during leisure activities like hiking or camping, parents often rely on AI chatbots like ChatGPT or Claude for quick guidance on everyday concerns such as treating insect bites or skin reactions in their children. This raises an important question:
Can AI Companions Like ChatGPT, Claude, and Others Offer the Same Promise, Comfort, Commitment, and Care as Some Humans?
As AI tools become more integrated into patient management, their potential to provide emotional support alongside clinical care is rapidly evolving. These chatbots can be especially helpful in alleviating anxiety caused by uncertainty, whether it's about a diagnosis, prognosis, or simply reassurance regarding potential next steps in medical or personal decisions.
Given the ongoing stress that disease management places on patients, advanced AI companions like ChatGPT and Claude can play an important role by providing timely, 24/7 reassurance, clear guidance, and emotional support. Notably, some studies suggest that AI responses can be perceived as even more compassionate and reassuring than those from humans.
Loneliness is another pervasive issue in healthcare. Emerging research suggests that social chatbots can reduce loneliness and social anxiety, underscoring their potential as complementary tools in mental health care. These advanced AI models help bridge gaps in information access, emotional reassurance, and patient engagement, offering clear answers, confidence, comfort, and a digital second opinion, particularly valuable when human resources are limited.
Mustafa Suleyman, CEO of Microsoft AI, has articulated a vision for AI companions that evolve over time and transform our lives by providing calm and comfort. He describes an AI 'companion that sees what you see online and hears what you hear, personalized to you. Imagine the overload you carry quietly, subtly diminishing. Imagine clarity. Imagine calm.'
While there are many reasons AI is increasingly used in healthcare, a key question remains:
Why Are Healthcare Stakeholders Increasingly Turning to AI?
Healthcare providers are increasingly adopting AI companions because they fill critical gaps in care delivery. Their constant availability and scalability enhance patient experience and outcomes by offering emotional support, cognitive clarity, and trusted advice whenever patients need it most.
While AI companions are not new, today's technology delivers measurable benefits in patient care. For example, Woebot, an AI mental health chatbot, demonstrated reductions in anxiety and depression symptoms within just two weeks.
OpenAI's ongoing investment in HealthBench, a benchmark for evaluating how well AI models handle realistic health conversations, further demonstrates the technology's promise and the company's commitment to helping even more patients. These advances illustrate how AI tools can effectively complement traditional healthcare by improving patient well-being through consistent reassurance and engagement. So, what's holding back wider reliance on chatbots?
The Hindrance: Why We Can't Fully Rely on AI Chatbot Companions
Despite rapid advancements, AI companions are far from flawless, especially in healthcare where the margin for error is razor thin. Large language models (LLMs) like ChatGPT and Claude are trained on vast datasets that may harbor hidden biases, potentially misleading vulnerable patient populations. Even with impressive capabilities, ChatGPT can still hallucinate or provide factually incorrect information—posing real risks if patients substitute AI guidance for professional medical advice. While future versions may improve reliability, current models are not suited for unsupervised clinical use. Sometimes, AI-generated recommendations may conflict with physicians' advice, which can undermine trust and disrupt the patient–clinician relationship.
There is also a risk of patients forming deep emotional bonds with AI, leading to over-dependence and blurred boundaries between digital and human interaction. As LinkedIn cofounder Reid Hoffman put it in Business Insider, 'I don't think any AI tool today is capable of being a friend. And I think if it's pretending to be a friend, you're actually harming the person in so doing.'
For now, AI companions should be regarded as valuable complements to human expertise, empathy, and accountability — not replacements.
A Balanced, Safe Framework: Maximizing Benefit, Minimizing Risk
To harness AI companions' full potential while minimizing risks, a robust framework is essential. This begins with data transparency and governance: models must be trained on inclusive, high-quality datasets designed to reduce demographic bias and errors.
Clinical alignment is critical; AI systems should be trained on evidence-based protocols and guidelines, with a clear distinction between educational information and personalized medical advice.
Reliability and ethical safeguards are vital, including break prompts during extended interactions, guidance directing users to seek human support when needed, and transparent communication about AI's limitations.
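To make these safeguards concrete, consider a minimal sketch of a session-level guardrail. Everything here is a hypothetical illustration: the thresholds, the keyword list, and the apply_safeguards function are assumptions chosen for demonstration, not any vendor's actual implementation, and a real system would rely on clinically validated triggers rather than simple string matching.

```python
from dataclasses import dataclass, field
import time

# Hypothetical values for illustration only; not clinical guidance.
MAX_SESSION_MINUTES = 30  # nudge users toward a break after long sessions
ESCALATION_KEYWORDS = {"chest pain", "suicidal", "overdose", "can't breathe"}

DISCLAIMER = (
    "I can share general health information, but I'm not a clinician. "
    "Please confirm anything important with your healthcare provider."
)

@dataclass
class Session:
    started_at: float = field(default_factory=time.monotonic)
    turns: int = 0

def apply_safeguards(session: Session, user_message: str, draft_reply: str) -> str:
    """Wrap a model's draft reply with the safeguards described above:
    human-escalation guidance, break prompts, and a limitation notice."""
    session.turns += 1
    text = user_message.lower()

    # 1. Direct users to human support when red-flag phrases appear.
    if any(keyword in text for keyword in ESCALATION_KEYWORDS):
        return ("This may need urgent human attention. Please contact your "
                "doctor or local emergency services right away.")

    # 2. Encourage a break during extended interactions.
    minutes_elapsed = (time.monotonic() - session.started_at) / 60
    if minutes_elapsed > MAX_SESSION_MINUTES:
        draft_reply += ("\n\nWe've been chatting for a while. Consider taking "
                        "a break or raising this with your care team.")

    # 3. Communicate the tool's limitations transparently on every turn.
    return f"{DISCLAIMER}\n\n{draft_reply}"
```

In a deployed system, a gate like this would sit between the model and the user, so that every reply passes through the escalation, break, and disclosure logic before it is shown.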
Above all, AI should complement human clinicians, acting as a navigator or translator to encourage and facilitate open dialogue between patients and their healthcare providers.
Executive Call to Action
In today's digital age, patients inevitably turn to the internet, and increasingly to AI chatbots like ChatGPT and Claude, for answers and reassurance. Attempts to restrict this behavior are neither practical nor beneficial. Executive physician advisors and healthcare leaders therefore have a responsibility to embrace this reality by providing structured, transparent, and integrated pathways that guide patients in using these powerful tools wisely. It is critical that healthcare systems are equipped with frameworks ensuring AI complements clinical care rather than confuses or replaces it.
Where AI capabilities fall short, these gaps must be bridged with human expertise and ethical oversight. Innovation should never come at the expense of patient safety, trust, or quality of care.
By proactively shaping AI deployment in healthcare, stakeholders can empower patients with reliable information, foster meaningful clinician-patient dialogue, and ultimately improve outcomes in this new era of AI-driven medicine.
It's not every day that you can save hundreds on a VPN service. When it's cheap and excellent, like Surfshark, the excitement is even greater. Surfshark has never been expensive, but these discounts are otherworldly. If you act today, you can save up to $450 on the chosen biennial plan. Unsure if it's the right choice? You can clarify that in our Surfshark VPN review. Now, let's discuss discounts, shall we? Save Up to $450 on Surfshark Now The VPN offers flexible, multi-tiered plans. You have Starter, One, and One+ plans, all divided into 24, 12, and a month-long length. However, the VPN's most significant discounts are on the longest, 24-month plans, which look like this: The initial monthly price is $15.45, $17.95, and $20.65, respectively, so you can see how much you're actually saving. For example, the Surfshark One+ plan saves you $450 and provides 27 months of VPN protection. If you'd rather spend less, the One plan is amazing, especially with its inclusion of antivirus and other innovative security features. While the Starter plan is the least expensive, it lacks the advanced features of these two. The good news is, all plans have a 30-day money-back guarantee. It doesn't matter which length you choose; even annual and monthly plans are suitable. This lets you try Surfshark risk-free and get a refund if you're unsatisfied. Quick warning: this scenario likely won't happen! A cheap piece of tech may be underwhelming, but a cheap VPN is rarely a bad bargain. Surfshark confirms our words and expands upon them by adding a few essential features, such as unlimited concurrent connections. This allows you to use the VPN on as many devices as you want, and even share the subscription with friends and family. The most popular, Surfshark One plan, also includes antivirus, which works on all devices and includes scheduled scanning for added convenience. The VPN's Search function is a bespoke incognito browser, for example. Meanwhile, you have an Alternative ID, which makes a new fake identity that you can use online. The One+ plan, which saves you $450, includes Incogni, which removes your data from data brokers and agencies. This all-encompassing plan delights privacy geeks who want to leave no traces of their digital footprint. Surfshark's offer is one of the most popular right now. We can see why. However, we can also see why it'll end soon. It's too good to be true, and while it is, Surfshark will file for bankruptcy if we all steal it at this price. Seize the opportunity while it's there. Explore Surfshark Discounts