
UC Health, Blue Shield extend contract deadline, stave off disruption of care
The extension gives the thousands of Californians who get medical care at UC Health through Blue Shield of California — including many in the Bay Area who go to UCSF and One Medical, a UCSF affiliate — an additional 30 days of breathing room before they might have to find a different health insurer or pay out-of-network rates for services if UC Health and Blue Shield cannot reach a new contract.
UC Health and Blue Shield have been renegotiating the contracts that set how much Blue Shield reimburses UC Health hospitals, clinics and other facilities for their services. One Medical is an affiliate of UCSF Health, one of the six UC Health academic medical centers statewide.
Contract negotiations between health care providers and insurers are routine and often involve disagreements over reimbursement rates. In recent years, the tenor of such negotiations has grown more public and combative, often with each side accusing the other of taking positions that would ultimately harm consumers through higher prices or less accessible medical care.
Late last week, San Francisco City Attorney David Chiu and Supervisor Matt Dorsey waded into the matter, urging Blue Shield to finalize an agreement with UC Health so that the roughly 5,000 city employees and retirees who go to UCSF for medical care will not lose access to critical medical services.
Last year, UC Health had a similar monthslong dispute over contract terms with Anthem Blue Cross, another major insurer in California. The two sides eventually reached a new contract.
In the Bay Area, the outcome of the negotiations between UC and Blue Shield could affect residents insured by Blue Shield who get care at UCSF Medical Center and UCSF Benioff Children's Hospitals in San Francisco and Oakland. Those affected include people in CalPERS plans, employer plans, Covered California plans and Medicare plans (including Medicare Advantage) offered or administered by Blue Shield.
The negotiations do not affect UCSF Health Community Hospitals at Saint Francis and St. Mary's, which will remain in-network.
Both UC Health and Blue Shield said Monday that they hope to reach a new agreement and avoid interruptions for their patients and members.
Related Articles


San Francisco Chronicle · 3 hours ago
Valley fever is rising across California. Numbers are skyrocketing in one coastal county
Valley fever, an infection caused by breathing in fungal spores in dirt or dust, is on the rise across California, according to a news release from the California Department of Public Health. More than 6,700 provisional cases were reported statewide through the first seven months of 2025. Rates are highest in the San Joaquin Valley, the disease's namesake, but cases have been increasing in the northern Central Valley and on the Central Coast.

"Valley fever is a serious illness that's here to stay in California," said Erica Pan, California Department of Public Health director and state public health officer, in a statement. "We want to remind Californians, travelers to California and their healthcare providers to watch for signs and symptoms of valley fever to help detect it early."

The biggest rise in valley fever in recent years has been in coastal Monterey County, which logged 348 cases in 2025 through the end of July. That's an increase of over 260% compared with the 2023-24 average. Ventura County had the next-highest increase, at 92%.

California recorded nearly 12,500 cases of valley fever in 2024, the most in a single year. By comparison, the state logged 7,000 to 9,000 cases per year from 2017 through 2023.

Valley fever can occur any time of year, but infections typically happen in late summer and fall. The disease is caused by the fungus Coccidioides, which grows in the soil in parts of California. Wet winters help the fungus grow, while dry, windy weather during warmer months spreads spores through the air.

People living in areas with high rates of valley fever face a higher risk of infection, especially if they live or work near where dirt is stirred up, like farms and construction sites. Where valley fever is common, experts recommend people stay inside and keep windows and doors closed when it's windy and dusty outside. They also suggest drivers keep car windows closed and use recirculating air, if possible. Experts recommend that those who have to be outside in dusty air consider using a well-fitted N95 respirator.

Not everyone who gets valley fever experiences symptoms. But some people can deal with long-term coughing, fatigue and other flu-like symptoms. In rare cases, valley fever can cause severe lung infections and even become fatal.

In 2024, there were at least 19 confirmed cases of valley fever after a music festival in Kern County. Of all California counties, Kern County has logged the highest number of cases so far in 2025, with 1,945 provisional cases reported. More than 100 cases have been provisionally reported in Contra Costa County in 2025 through the end of July, the most of any Bay Area county.

Research indicates that the rise and spread of valley fever may be linked to changes in climate. A 2024 study reported that swings between wet and dry conditions, which are expected to intensify in a warmer world, were associated with more cases of valley fever.
Yahoo · 5 hours ago
Warren Buffett Sparks Massive Rally in Troubled UnitedHealth Stock
Aug 15 - UnitedHealth Group (NYSE:UNH) shares soared Friday morning after Warren Buffett's Berkshire Hathaway revealed a stake of 5 million shares, valued at roughly $1.6 billion. The news sent the stock up 12% in morning trading, marking its best day in five years and adding about 209 points to the Dow Jones Industrial Average at the open.

Buffett's vote of confidence comes as UnitedHealth faces a challenging year. The stock had fallen nearly 50% through Thursday, weighed down by rising healthcare costs, a Justice Department investigation into Medicare billing practices, and CEO Andrew Witty's recent departure. The company had also pulled its annual earnings outlook in May, and its updated 2025 guidance fell short of Wall Street estimates.

Legendary investors Michael Burry and David Tepper also disclosed sizable stakes in the insurer, signaling renewed interest in the sector. George Hill, healthcare analyst at Deutsche Bank, commented that Berkshire's investment could act as a near-term trading floor for managed care stocks, reassuring other investors that the space is safe to consider again.

This article first appeared on GuruFocus.

Business Insider · 6 hours ago
I'm a psychiatrist who has treated 12 patients with 'AI psychosis' this year. Watch out for these red flags.
Dr. Keith Sakata said he has seen 12 patients hospitalized in 2025 after experiencing "AI psychosis." He works in San Francisco and said the patients were mostly younger men in fields such as engineering. Sakata said AI isn't "bad" — he uses it to journal — but it can "supercharge" people's vulnerabilities.

This as-told-to essay is based on a conversation with Dr. Keith Sakata, a psychiatrist working at UCSF in San Francisco. It has been edited for length and clarity.

I use the phrase "AI psychosis," but it's not a clinical term — we really just don't have the words for what we're seeing. I work in San Francisco, where there are a lot of younger adults, engineers, and other people inclined to use AI. Patients are referred to my hospital when they're in crisis.

It's hard to extrapolate from 12 people what might be going on in the world, but the patients I saw with "AI psychosis" were typically males between the ages of 18 and 45. A lot of them had used AI before experiencing psychosis, but they turned to it in the wrong place at the wrong time, and it supercharged some of their vulnerabilities.

I don't think AI is bad, and it could have a net benefit for humanity. The patients I'm talking about are a small sliver of people, but when millions and millions of us use AI, that small number can become big.

AI was not the only thing at play with these patients. Maybe they had lost a job, used substances like alcohol or stimulants in recent days, or had underlying mental health vulnerabilities like a mood disorder.

On its own, "psychosis" is a clinical term describing the presence of delusions (fixed false beliefs), hallucinations, or disorganized thinking. It's not a diagnosis, it's a symptom, just like a fever can be a sign of infection. You might find it confusing when people talk to you, or have visual or auditory hallucinations. It has many different causes, some reversible, like stress or drug use, while others are longer acting, like an infection or cancer, and then there are long-term conditions like schizophrenia. My patients had either short-term or medium- to long-term psychosis, and the treatment depended on the issue.

Drug use is more common in my patients in San Francisco than, say, those in the suburbs. Cocaine, meth, and even different types of prescription drugs like Adderall, when taken at a high dose, can lead to psychosis. So can medications, like some antibiotics, as well as alcohol withdrawal.

Another key component in these patients was isolation. They were stuck alone in a room for hours using AI, without a human being to say: "Hey, you're acting kind of different. Do you want to go for a walk and talk this out?" Over time, they became detached from social connections and were just talking to the chatbot. ChatGPT is right there. It's available 24/7, cheaper than a therapist, and it validates you. It tells you what you want to hear.

If you're worried about someone using AI chatbots, there are ways to help

In one case, the person had a conversation with a chatbot about quantum mechanics, which started out normally but resulted in delusions of grandeur. The longer they talked, the more the science and the philosophy of that field morphed into something else, something almost religious. Technologically speaking, the longer you engage with a chatbot, the higher the risk that its responses stop making sense.

I've gotten a lot of messages from people worried about family members using AI chatbots, asking what they should do.
First, if the person is unsafe, call 911 or your local emergency services. If suicide is an issue, the hotline in the United States is 988. If they are at risk of harming themselves or others, or engage in risky behavior — like spending all of their money — put yourself in between them and the chatbot. The thing about delusions is that if you come in too harshly, the person might back off from you, so show them support and that you care. In less severe cases, let their primary care doctor or, if they have one, their therapist know your concerns.

I'm happy for patients to use ChatGPT alongside therapy — if they understand the pros and cons

I use AI a lot to code and to write things, and I have used ChatGPT to help with journaling or processing situations. When patients tell me they want to use AI, I don't automatically say no. A lot of my patients are really lonely and isolated, especially if they have mood or anxiety challenges. I understand that ChatGPT might be fulfilling a need that they're not getting in their social circle. If they have a good sense of the benefits and risks of AI, I am OK with them trying it. Otherwise, I'll check in with them about it more frequently.

But, for example, if a person is socially anxious, a good therapist would challenge them, tell them some hard truths, and kindly and empathetically guide them to face their fears, knowing that's the treatment for anxiety. ChatGPT isn't set up to do that, and might instead give misguided reassurance.

When you do therapy for psychosis, it is similar to cognitive behavioral therapy, and at the heart of that is reality testing. In a very empathetic way, you try to understand where the person is coming from before gently challenging them. Psychosis thrives when reality stops pushing back, and AI really just lowers that barrier for people. It doesn't challenge you when you need it to. But if you prompt it to solve a specific problem, it can help you address your biases. Just make sure that you know the risks and benefits, and that you let someone know you are using a chatbot to work through things.

If you or someone you know withdraws from family members or connections, is paranoid, or feels more frustration or distress when they can't use ChatGPT, those are red flags.

I get frustrated because my field can be slow to react, doing damage control years later rather than acting up front. Until we think clearly about how to use these tools for mental health, what I saw in these patients is still going to happen — that's my worry.

OpenAI told Business Insider: "We know people are increasingly turning to AI chatbots for guidance on sensitive or personal topics. With this responsibility in mind, we're working with experts to develop tools to more effectively detect when someone is experiencing mental or emotional distress so ChatGPT can respond in ways that are safe, helpful, and supportive.

"We're working to constantly improve our models and train ChatGPT to respond with care and to recommend professional help and resources where appropriate."

If you or someone you know is experiencing depression or has had thoughts of harming themself or taking their own life, get help. In the US, call or text 988 to reach the Suicide & Crisis Lifeline, which provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line — just text "HOME" to 741741.
The International Association for Suicide Prevention offers resources for those outside the US.