
Doctors fear ICE agents in health facilities are deterring people from seeking care
Dr. Céline Gounder, CBS News medical contributor and editor-at-large for public health at KFF Health News, told "CBS Mornings Plus" on Tuesday that she has not seen any official ICE raids in hospitals, but that ICE agents have been seen in hospitals as well as other health care facilities.
That's because detention standards require that ICE detainees be provided medical services, including initial medical and dental screenings, as well as emergency care.
"They are often bringing in people that they've detained for medical clearance," said Gounder, who is also a practicing internist and infectious disease expert in New York City. "We see this often with law enforcement. But it is creating an atmosphere of fear. And my colleagues and I have had numerous patients tell us that they hesitated or waited too long to come in for health care."
And delays in care matter, Gounder added. Delayed care for a heart attack or stroke, for example, can lead to more loss of heart or brain tissue.
Gounder also heard from an emergency medicine physician in Los Angeles who has seen the impact of ICE agents appearing in hospital settings.
According to the doctor, agents arrive in ski masks and appear intimidating to patients, affecting the overall health of the community by creating an atmosphere of fear instead of wellness.
The doctor also alleged agents have committed ethics violations, including not showing their identification, not allowing patient privacy during interviews and examinations, preventing doctors from contacting family for necessary medical information and preventing family from visiting.
"These are really standard things," Gounder said. "Every patient should have the right to these kinds of provisions for good health care."
"If you're a law enforcement official coming into a hospital or health care facility, you need to be identifying yourself as such, you need to be showing your badge or your ID," Gounder said, adding that those who want to enter private patient areas "also need to be showing a judicial warrant."
Federal legal standards and privacy protections apply here: HIPAA restricts the disclosure of patients' medical information, and the Fourth Amendment to the Constitution bars unreasonable searches and seizures, including in non-public hospital areas.
CBS News has reached out to ICE and the Department of Homeland Security for comment.
Many health care providers don't know what their rights are, Gounder said, prompting at least some hospitals to offer employees guidance on potential ICE encounters.
At Bellevue Hospital, for example, where Gounder works, staff were recently given sample prompts for interacting with non-local law enforcement, including ICE agents.
The hospital told staff, in part: "We do not require a patient's immigration status to provide care, and we do not share medical or personal information about our patients unless required by law."
The presence of ICE agents is a concern not just for physical health, but also for mental health.
"Think about who has come here as an immigrant, many of them have faced real trauma in their home countries," Gounder said. "So this, what feels like militarization of an emergency room, can be very re-traumatizing and cause some very relevant health impacts."