Latest news with #physicians
Yahoo
3 days ago
- Health
- Yahoo
Opinion - The ugly truth about the student loan caps in Trump's 'big beautiful' law
New federal student loan caps pose an urgent and overlooked threat to the health of all Americans. These changes will severely undermine the graduate education pipeline for the clinician workforce — including both nurses and physicians — jeopardizing access to care, straining the workforce and, ultimately, harming patients. The bill, now signed into law, will cap graduate unsubsidized student loans at $20,500 a year, with a $100,000 total cap on top of undergraduate loans, and phase out Grad PLUS loans.

These changes are especially detrimental for those pursuing clinician roles, such as nurse practitioners. Nurse practitioners play a crucial role, filling gaps in primary care — especially in rural and underserved communities. Their presence expands access, relieves pressure on healthcare systems and allows physicians to focus on the most complex cases. Graduate education is not optional for becoming a nurse practitioner. Nor is it optional for becoming faculty to teach the next generation of physicians and nurses. Weakening the pipeline of advanced practice nurses doesn't just hurt nursing; it threatens the entire care delivery system.

For nursing, this comes at a moment when education is already strained. Nurses have left the profession en masse since the COVID-19 pandemic, and older nurses are retiring. We urgently need more nurses and nurse educators in the pipeline. Yet in 2023, enrollment in bachelor's-level nursing programs grew by just 0.3 percent. Meanwhile, enrollment in master's and Ph.D. nursing programs declined by 0.9 percent and 3.1 percent, respectively. That same year, U.S. nursing schools turned away more than 65,000 qualified applications due to a lack of faculty, clinical placements and funding — not because of a lack of interest.

Faculty shortages are especially dire. Nearly 2,000 full-time faculty vacancies remain unfilled nationwide, according to the American Association of Colleges of Nursing. These positions require a master's or doctoral degree — precisely the kind of education now placed at risk by this legislation. Without nurse educators, we cannot train the next generation of nurses at any level.

This law also directly contradicts the Make America Healthy Again initiative, which calls on healthcare systems to take on chronic disease through prevention. Nurses make up the largest segment of the healthcare workforce. Their education emphasizes prevention and whole-person care for people and communities. Nurses are central to the shift from reactive 'sick care' to proactive prevention, so restricting their ability to enter the profession is not just shortsighted, it's self-defeating.

A diminished nursing workforce will trigger a familiar cycle: reduced access, longer wait times, more chronic disease and an even more overwhelmed workforce. And these consequences won't be limited to nurses — they will affect physicians, hospitals, insurers and, most of all, everyday Americans. This is a national health issue.

While the bill has passed, it is not too late to mitigate its harm. Policymakers must find alternative solutions, from scholarship expansion to loan forgiveness, to ensure access to graduate nursing education remains within reach. We cannot solve a workforce shortage and a chronic disease crisis by cutting off the professionals trained to fix it.

Sarah Szanton is dean of the Johns Hopkins School of Nursing.
Yahoo
4 days ago
- Health
- Yahoo
Lewis: Regina hospital physician culture is both tragedy and farce
Life is short, and based on a lifetime of experience with the genre, I cannot recommend you spend much of it reading reports on health care. But should you find yourself awaiting a root canal, a phone scroll through the 2024-25 External Review of Regina Hospital Physician Culture might make you feel a bit better about your impending chair time. In a bracing 30 pages, including appendices, the report describes a litany of dysfunctions among physicians working in what is supposed to be the most professionally managed precinct of health care.

It's not all bleak. There are no allegations of American-style billing for non-existent surgeries or fistfights in the doctors' lounge. But it is plenty bad enough. The highlights:

- Physicians in Regina have largely held themselves apart from the mission, vision and values adopted by the SHA (Saskatchewan Health Authority) since its formation in 2017.
- We heard examples of divisions and departments where it appears pursuit of financial compensation has overtaken the priority for high quality accessible care for patients.
- There is no functioning electronic health record, and no database that allows either effective wait list management or workforce planning.
- The Ministry of Health allows interests to plead their cases directly, undermining the SHA mandated to run the system.
- Whether rooted in illness, aging or personality factors, disruptive patterns of behaviour have often been in place for many years and not addressed in a decisive fashion.
- Very few physicians were able to describe how they monitor and improve quality in their services.
- Leaders who have identified problematic behaviours and acted appropriately to protect patients and teams should not be vilified or suffer retribution.
- In some cases, efforts to recruit have been thwarted by physicians to preserve their service volumes despite wait times.
- There are legacy contracts, deals and arrangements that create inequity and inconsistency in negotiating with physicians and groups.

It is embarrassing to have to commission a review to make blindingly obvious recommendations. A report on a school system in similar disarray would recommend having principals who are actually in charge. The schools should teach the students to read and write. They should know what students' needs are and organize to meet them. There should be no side deals and special privileges for a few teachers. Records should be computerized and generate data to plan and assess performance. Evaluate your staff. Discipline teachers who throw tantrums and abuse their colleagues.

Is it any wonder that people misbehave when bad behaviour is not only tolerated, but rewarded? The Regina physicians have told the SHA to park its mission, vision and values where the sun don't shine for eight years, with zero consequence. So much for a unified provincial system. Medical groups have frozen out new recruits to protect their incomes while wait lists ballooned. Physician leaders who tried to impose some order and civility were abused and left hung out to dry.

Don't for a moment think these problems are unique to Regina. Do a quick search of conflict of interest in Alberta, or pediatric chaos in Kelowna. The only difference between Regina and dozens of other communities is that Regina's pathologies are now out in the open.
Like all reports written by physicians about physicians, this one assumes that professional self-governance is entirely in the public interest and fully compatible with the public and professional obligations it finds routinely unfulfilled, even while acknowledging that 'some physicians have lost the plot of why we are here.' And therein lies the problem. The report says as much: 'Physician autonomy is clashing with the broader social contract to ensure quality and safety.'

Workers at Starbucks or Toyota can tell you how their work is organized and monitored to produce quality. Most physicians in Regina are tongue-tied. This is what you get when a profession is accountable to itself, and self-evaluation in a data-free environment is standard operating procedure. A cultural problem? Sounds so much more anthropological than negligence, cowardice, greed, and abdication of responsibility. As a wise physician friend told me years ago, what you permit, you promote. The rot has been called out. What next?

Steven Lewis spent 45 years as a health policy analyst and health researcher in Saskatchewan. He can be reached at slewistoon1@


CTV News
4 days ago
- Health
- CTV News
Complex care doctor says hundreds are underserved on Saanich Peninsula
A tiny team of physicians says with a little extra space and a few more staff, it could be helping more than 1,000 high-needs patients on Vancouver Island.


Forbes
4 days ago
- Health
- Forbes
Rethinking Work With AI: What Stanford's Groundbreaking Workforce Study Means For Healthcare's Future
AI systems should reduce cognitive load, clarify ambiguity, and work alongside teams as intelligent collaborators, not as black-box disruptors.

What if the AI systems you're building solve the wrong problems and alienate the workforce you're trying to support? That's the uncomfortable reality laid bare by a new research study from Stanford University. While AI pilots race ahead across administrative and clinical functions, most are still built on a flawed assumption: that automating tasks equals progress. But for the people doing the work—clinicians, care coordinators, billing specialists—that's not what they asked for.

The report, Future of Work with AI Agents, offers the most granular audit yet of how worker sentiment, task complexity, and technical feasibility collide in the age of artificial intelligence. Over 1,500 U.S. workers were surveyed across 104 occupations, producing the most detailed dataset yet on where AI could, and should, fit. Their preferences were paired with ratings from 52 AI experts to create a map of the true automation and augmentation landscape.

For healthcare, the findings could not come at a more urgent moment. The healthcare sector faces a burned-out workforce, escalating administrative waste, and widespread dissatisfaction with digital tools that were meant to help. A majority of physicians report that documentation burden is a leading cause of burnout, with recent studies showing U.S. physicians spend excessive time on documentation tasks. Nurse attrition has also remained a concern since the pandemic. Meanwhile, AI adoption is surging, with the vast majority of health systems piloting or planning AI integration. However, there remains a lack of consistent frameworks to align these technologies with real-world clinical and operational dynamics. The result? Misplaced investment, fractured trust, and resistance from the very people AI is meant to assist.

The Stanford study confirms it: the majority of tasks that healthcare workers want automated—like documentation, claims rework, or prior auth form generation—are not where AI tools are being focused. In fact, less than 2% of those high-desire tasks are showing up in actual LLM usage today. Instead, attention and venture funding are often diverted toward automating interpersonal communication, appeals, or triage. These are areas where trust, nuance, and empathy matter most. This is more than a technical oversight. It's a strategic miscalculation.

This study clarifies that the future of AI in healthcare isn't about replacing human judgment—it's about protecting it. Leaders must pivot from automation-at-any-cost to augmentation-by-design. That means building AI systems that reduce cognitive load, clarify ambiguity, and work alongside teams as intelligent collaborators, not as black-box disruptors. And, most critically, it means listening to the workforce before you deploy.

A New Lens on Work: Automation Desire vs. Technical Feasibility

Stanford's framework introduces two powerful filters for every task: what workers want automated and what AI can do. This produces a four-quadrant map, an approach that is especially revealing in healthcare. Critically, 69% of workers said their top reason for wanting AI was to free up time for higher-value work. Only 12% wanted AI to fully take over a task. The takeaway? Augmentation, not replacement.
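As a rough sketch of how such a two-axis map can be expressed, the snippet below classifies tasks by worker desire and expert-rated capability. The task names, scores, thresholds, and the "Low Priority" label for the fourth quadrant are hypothetical placeholders, not data or terminology taken from the Stanford study.

```python
# Hypothetical sketch of a desire-vs-capability map in the spirit of the
# framework described above. Scores and thresholds are invented.

def zone(worker_desire: float, ai_capability: float, threshold: float = 0.5) -> str:
    """Place a task in one of four quadrants based on two 0-1 scores."""
    if worker_desire >= threshold and ai_capability >= threshold:
        return "Green Light (Automate Now)"
    if worker_desire >= threshold and ai_capability < threshold:
        return "R&D Opportunity (Invest in Next-Gen AI)"
    if worker_desire < threshold and ai_capability >= threshold:
        return "Red Light (Approach with Caution)"
    return "Low Priority"  # placeholder label for the remaining quadrant

# Example (hypothetical) healthcare tasks: (worker desire, expert-rated capability)
tasks = {
    "draft visit documentation": (0.9, 0.8),
    "generate prior auth forms": (0.85, 0.4),
    "handle patient appeals": (0.2, 0.7),
}

for name, (desire, capability) in tasks.items():
    print(f"{name}: {zone(desire, capability)}")
```

The only point of the sketch is that both axes have to be measured before deciding what to build; the study itself derives these ratings from worker surveys and expert assessments rather than invented scores.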
The map sorts tasks into zones such as Green Light (Automate Now), R&D Opportunity (Invest in Next-Gen AI), and Red Light (Approach with Caution).

Y Combinator is one of the world's most influential startup accelerators, known for launching and funding early-stage technology companies, including many that shape the future of artificial intelligence. Its relevance in this context comes from its outsized role in setting trends and priorities for the tech industry: the types of problems YC-backed startups pursue often signal where talent, investment, and innovation are headed. The Stanford study highlights a striking disconnect between these startup priorities and actual workforce needs. Specifically, it found that 41% of Y Combinator-backed AI startups are developing solutions for tasks that workers have little interest in automating—referred to as 'Red Light Zones' or low-priority areas. This reveals a substantial missed opportunity: if leading accelerators like Y Combinator better aligned their focus with the real needs and preferences of the workforce, AI innovation could deliver far greater value and acceptance in the workplace.

The Human Agency Scale: AI as a Teammate

To move beyond binary thinking (automate vs. don't), the Stanford research team introduces a more nuanced framework: the Human Agency Scale (HAS). This five-tier model offers a conceptual scaffold for evaluating how AI agents should integrate into human workflows. Rather than asking whether a task should be automated, the HAS asks to what extent the human remains in control, how decision-making is shared, and what level of oversight is required. The scale ranges from H1 to H5.

The Stanford study reveals a clear pattern across occupations: the majority of workers—particularly in healthcare—prefer H2 or H3. Specifically, 45.2% of tasks analyzed across all industries favor an H3 arrangement, in which AI acts as a collaborative peer. In healthcare contexts—where judgment, empathy, and contextual nuance are foundational—H3 is even more critical. In roles such as care coordination, utilization review, and social work, tasks often require a mix of real-time decision-making, human empathy, and risk stratification. A system built for full automation (H5) in these contexts would not only be resisted—it would likely produce unsafe or ethically problematic outcomes. Instead, what's required are AI agents that can surface relevant information, adapt to the evolving contours of a task, and remain responsive to human steering.

John Halamka, President of Mayo Clinic Platform, reinforced this collaborative mindset in February 2025: 'We have to use AI,' he said, noting that ambient listening tools represent 'the thing that will solve many business problems' with relatively low risk. He cited Mayo's inpatient ambient nursing solutions, which handle '100% of the nursing charting without the nurse having to touch a keyboard,' but was clear that these tools are 'all augmenting human behavior and not replacing the human.'

These insights echo a broader workforce trend: automation without agency is unlikely to succeed. Clinical leaders don't want AI to dictate care pathways or handle nuanced appeals independently. They want AI that reduces friction, illuminates blind spots, and extends their cognitive reach, without erasing professional identity or judgment. As such, designing for HAS Level 3 (equal partnership) is emerging as the gold standard for intelligent systems in healthcare. This model balances speed and efficiency with explainability and oversight.
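To make the H3 "collaborative peer" idea concrete, here is a minimal, hypothetical sketch of a draft-and-review loop in which the AI proposes and the clinician retains final control. The function names, data fields, and approval flow are assumptions for illustration; they are not part of the Stanford framework or any vendor's API.

```python
# Minimal, hypothetical sketch of an H3-style workflow: the AI drafts,
# surfaces its uncertainty, and a human reviewer stays in control.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float          # the model's own estimate, shown to the reviewer
    open_questions: list[str]  # ambiguities flagged for the human

def ai_draft_note(transcript: str) -> Draft:
    """Stand-in for an ambient documentation model (not a real API)."""
    return Draft(
        text=f"Summary of encounter: {transcript[:80]}...",
        confidence=0.72,
        open_questions=["Confirm medication dose", "Verify follow-up interval"],
    )

def clinician_review(draft: Draft, approve: bool, edits: str | None = None) -> str:
    """The human decides: accept, edit, or reject. Nothing is filed without this step."""
    if not approve:
        return "rejected: clinician will document manually"
    return edits if edits else draft.text

draft = ai_draft_note("Patient reports improved mobility after physical therapy ...")
final_note = clinician_review(draft, approve=True)
print(final_note)
```

The design choice worth noticing is that the model's confidence and open questions are surfaced rather than hidden, which is the kind of explainability and oversight the article argues H3 systems must preserve.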
It also offers a governance and performance evaluation framework that prioritizes human trust. Building AI for HAS Level 3 requires system features that go beyond prediction accuracy. Healthcare doesn't need one-size-fits-all automation; it requires collaboration at scale, grounded in transparency and guided by human expertise.

These perspectives align with the Stanford findings: workers don't fear AI—they fear being sidelined by it. The solution isn't to slow down AI development. It's to direct it with clarity, co-design it with the people who rely on it, and evaluate it not just by outputs but also by the experience and empowerment it delivers to human professionals.

The true ROI of AI is trust, relief, and time reclaimed. Outcomes like 'claims processed' or 'notes generated' aren't enough. Metrics should track cognitive load reduced, time returned to patient care, and worker trust in AI recommendations. While throughput remains a necessary benchmark, these human-centered outcomes provide the clearest signal of whether AI improves the healthcare experience. Measurement frameworks must be longitudinal, capturing not just initial productivity but long-term operational resilience, clinician satisfaction, and sustainable value. Only then can we ensure that AI fulfills its promise to elevate both performance and purpose in healthcare.

Dr. Rohit Chandra, Chief Digital Officer at Cleveland Clinic, gave voice to this idea in June 2025: 'It's made their jobs a ton easier. Patient interactions are a lot better because now patients actually engage with the doctor,' he said, referring to 4,000 physicians now using AI scribes. 'I'm hoping that we can keep building on the success that we've had so far to literally drive the documentation burden to zero.'

Build With, Not For

This moment is too important for misalignment. The Stanford study offers a blueprint. For healthcare leaders, the message is clear: if you want AI to scale, build with the workforce in mind. Prioritize the Green Light Zones. Invest in agentic systems that enhance, not override. Govern AI like a trusted partner, not a productivity engine. The future of AI in healthcare won't be determined by the size of your model. It will be defined by the quality of your teaming.
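As a hypothetical illustration of the longitudinal, human-centered measurement the article calls for, the record below sketches what such metrics might look like alongside throughput. The field names and numbers are invented for illustration, not taken from the study or from any health system.

```python
# Hypothetical sketch of human-centered AI evaluation metrics tracked over time.
# Field names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class DeploymentSnapshot:
    month: str
    notes_generated: int                   # throughput: necessary but not sufficient
    minutes_returned_per_clinician: float  # time given back to patient care
    self_reported_cognitive_load: float    # e.g. survey scale, 1 (low) to 10 (high)
    trust_in_recommendations: float        # e.g. survey score, 0 to 1

def improving(series: list[DeploymentSnapshot]) -> bool:
    """Crude check that human-centered metrics trend the right way over time."""
    first, last = series[0], series[-1]
    return (last.minutes_returned_per_clinician > first.minutes_returned_per_clinician
            and last.self_reported_cognitive_load < first.self_reported_cognitive_load
            and last.trust_in_recommendations > first.trust_in_recommendations)

history = [
    DeploymentSnapshot("2025-01", 4200, 11.0, 7.2, 0.55),
    DeploymentSnapshot("2025-06", 5100, 18.5, 6.1, 0.68),
]
print("human-centered metrics improving:", improving(history))
```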
Yahoo
5 days ago
- Health
- Yahoo
Dementia Linked With Treatment For Chronic Lower Back Pain
A drug widely used to treat nerve pain and epilepsy has been linked with an increase in cases of dementia and mild cognitive impairment. A team from Case Western Reserve University School of Medicine, Arizona State University, and the MetroHealth Medical Center in the US crunched the numbers on 26,416 records of patients with chronic lower back pain, looking at the relationship between prescriptions for the anticonvulsant gabapentin and dementia diagnoses.

Having six or more gabapentin prescriptions was linked to a significant increase in dementia risk and mild cognitive impairment (MCI), the data showed: those in that group were 29 percent and 85 percent more likely to be diagnosed with dementia and MCI, respectively, within 10 years. The increase was higher among patients aged between 35 and 49, and also rose with the number of prescriptions given, the researchers found. Though the study can't establish a cause for the increase, physicians are encouraged to keep a close eye on patients taking the drug.

"Gabapentin prescription in adults with chronic low back pain is associated with increased risk of dementia and cognitive impairment, particularly in non-elderly adults," write the researchers in their published paper. "Physicians should monitor cognitive outcomes in patients prescribed gabapentin."

Sold under brand names including Neurontin, gabapentin has proved to be less addictive than opioids, making it more likely to be prescribed in recent years. The drug does have some known side effects, though, including extreme moods and allergic reactions.

This isn't the first time researchers have examined associations between gabapentin and dementia, but previous studies haven't agreed on whether or not concerns are warranted. One of the study's strengths is its relatively large sample size, though the sample largely consisted of just one group of people: those with chronic lower back pain. A study published in 1997 found no link between gabapentin and cognitive decline in people with epilepsy, so it's important to continue to widen the data set. These conflicting results could suggest unique mechanisms among patients with the type of back pain that leads to a gabapentin prescription that also increases their risk of dementia, such as a certain type or location of inflammation.

But gabapentin works by dampening some of the brain's key communication channels in order to provide relief from pain or make seizures less likely. So the worry is that it could also be damaging links between neurons in ways that might lead to dementia, a concern backed up by this latest study. Dementia is a challenging condition to study, with so many potential factors to account for, but each study gets us closer to the full picture of how the brain breaks down over time.

"We hope the current study promotes further research to delineate whether gabapentin plays a causal role in the development of dementia and the underlying mechanisms of this relationship," write the researchers. The research has been published in Regional Anesthesia & Pain Medicine.
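To make the reported relative increases concrete, here is a small, purely illustrative calculation. The 29 percent and 85 percent figures come from the article above, but the baseline 10-year rates are hypothetical placeholders, not numbers from the study.

```python
# Illustrative only: what a 29% / 85% relative increase means against a
# hypothetical baseline 10-year diagnosis rate (the baselines are invented).
baseline_dementia_rate = 0.05   # hypothetical: 5% diagnosed within 10 years
baseline_mci_rate = 0.04        # hypothetical: 4% diagnosed within 10 years

dementia_rate_with_gabapentin = baseline_dementia_rate * 1.29
mci_rate_with_gabapentin = baseline_mci_rate * 1.85

print(f"dementia: {baseline_dementia_rate:.1%} -> {dementia_rate_with_gabapentin:.2%}")
print(f"MCI:      {baseline_mci_rate:.1%} -> {mci_rate_with_gabapentin:.2%}")
```

The point is only that a relative increase scales whatever the underlying baseline risk happens to be; it says nothing about causation, which the study itself does not claim.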
A drug widely used to treat nerve pain and epilepsy has been linked with an increase in cases of dementia and mild cognitive impairment. A team from Case Western Reserve University School of Medicine, Arizona State University, and the MetroHealth Medical Center in the US crunched the numbers on 26,416 records of patients with chronic lower back pain, looking at the relationship between prescriptions for the anticonvulsant gabapentin and dementia diagnoses. Having six or more gabapentin prescriptions was linked to a significant increase in dementia risk and mild cognitive impairment (MCI), the data showed: those in that group were 29 percent and 85 percent more likely to be diagnosed with dementia and MCI respectively, within 10 years. Related: Massive Study Links 15 Factors to Early Dementia Risk The increase was higher among patients aged between 35 and 49, and also rose with the number of prescriptions given, the researchers found. Though the study can't establish a cause for the increase, physicians are encouraged to keep a close eye on patients taking the drug. "Gabapentin prescription in adults with chronic low back pain is associated with increased risk of dementia and cognitive impairment, particularly in non-elderly adults," write the researchers in their published paper. "Physicians should monitor cognitive outcomes in patients prescribed gabapentin." Sold under brand names including Neurontin, gabapentin has proved to be less addictive than opioids, making it more likely to be prescribed in recent years. The drug does have some known side effects though, including extreme moods and allergic reactions. This isn't the first time researchers have examined associations between gabapentin and dementia, but previous studies haven't agreed on whether or not concerns are warranted. One of the study's strengths is the relatively large sample size of its participants, though the sample largely consisted of just one group of people – those with chronic lower back pain. A study published in 1997 found no link between gabapentin and cognitive decline in people with epilepsy, so it's important to continue to widen the data set. These conflicting results could suggest unique mechanisms among patients with the type of backpain that leads to a gabapentin prescription that also increases their risk of dementia, like a certain type of location of inflammation. But gabapentin works by dampening some of the brain's key communication channels, in order to provide relief from pain or make seizures less likely. So the worry is that it could also be damaging links between neurons in ways that might lead to dementia – a concern backed up by this latest study. Dementia is a challenging condition to study with so many potential factors to account for, but each study gets us closer to the full picture of how the brain breaks down over time. "We hope the current study promotes further research to delineate whether gabapentin plays a causal role in the development of dementia and the underlying mechanisms of this relationship," write the researchers. The research has been published in Regional Anesthesia & Pain Medicine. Related News 8 Babies Born in UK Using Radical 'Three Parent' IVF Technique These 4 Simple Exercises Could Help Break Your Insomnia Energy Drinks Seen Fuelling Cancer, But There's a Strange Catch Solve the daily Crossword