'Grotesque.' How Fresno leaders are reacting to alleged hospital kickback scheme
Fresno area elected officials are demanding more transparency and accountability after an alleged kickback scheme orchestrated by a group of executives from the region's largest healthcare system came to light.
Earlier this month, the U.S. Attorney's Office announced that Community Health System (CHS) and healthcare technology affiliate Physician Network Advantage Inc. (PNA) agreed to pay $31 million to settle allegations that they violated the False Claims Act.
Some of the region's public leaders told The Bee they are disappointed in the healthcare system's previous leadership and would push for increased accountability. They expressed bewilderment that the executives were able to carry out the fraud scheme, undetected, for more than a decade.
An unsealed whistleblower lawsuit filed by former PNA Controller Michael Terpening detailed the alleged scheme and listed more than a dozen gifts and donations provided to a select group of medical leaders. The gifts included access to a state-of-the-art wine and cigar lounge, expensive trips to Europe, Napa Valley and Las Vegas, trips to strip clubs, and nepotism hires.
Representative Jim Costa, D-Fresno, said the complaints in this case are serious and need to be addressed.
'Clearly, there is a need for strong oversight and transparency in our healthcare system,' Costa said. 'We must ensure that public trust and taxpayer dollars are never compromised by improper financial incentives. The people of the San Joaquin Valley deserve the best possible healthcare system that operates with integrity.'
Assemblymember Dr. Joaquin Arambula, a Democrat from Fresno and physician, said in a statement that 'these revelations are grotesque, to say the least.'
'These episodes, brought to light only because a whistleblower did so, clearly stemmed from a warped sense of entitlement. My hope is that the new leadership of CHS will right this ship and reflect the true heroes of CHS — the dedicated and compassionate workers who every day in CHS hospitals and clinics care for and heal the people of our Valley,' Arambula said.
CHS, Fresno's largest healthcare group, owns downtown Fresno's Community Regional Medical Center and Clovis Community Medical Center, as well as a health plan and physician network. Community Medical Centers (CMC) is the name of the group of hospitals and clinics under the CHS umbrella.
CHS is the leading healthcare provider in the central San Joaquin Valley and operates the only Level I trauma center in the region.
Assemblymember Esmeralda Soria, whose district includes west Fresno County, Madera and Merced, called the lawsuit and settlement 'unfortunate' and stressed the impact on patients who receive federally subsidized care and whose access to healthcare is already jeopardized by significant federal funding cuts.
'The other loser in this settlement is Medi-Cal and Medicaid patients, who are already suffering from the federal government's proposed cuts to these programs. Moving forward, I hope leadership and boards for both CHS and PNA make serious changes in their approach to regain the community's trust and putting patients first above all else,' Soria said.
In a statement on the settlement, CHS Board Chair Roger Sturdevant said the current board and leadership team recognize they are accountable for ensuring compliance consistent with regulatory requirements and the organization's own standards. CHS leadership has cooperated with the U.S. Attorney's Office in its review, Sturdevant said.
'Based on both our own internal assessment and external findings, we have identified and addressed areas for improvement. We are confident that our policies and procedures today address all areas of concern. It is important to note that at no time were patients inappropriately billed and this issue had no impact on our clinicians' ability to provide outstanding care,' he said.
As part of the settlement, CHS has entered into a five-year Corporate Integrity Agreement with the U.S. Department of Health and Human Services Office of Inspector General that requires a risk assessment and internal review process designed to identify and address evolving compliance risks. The agreement also requires an independent review organization to annually assess the policies and systems used to track arrangements with certain referral sources.
Fresno County Supervisor Nathan Magsig, whose district includes part of Fresno, Clovis and the rural foothill communities, said he's looked into the case, which focuses on Community's implementation of an electronic health record system and the associated vendor relationship.
'This is a difficult time for hospitals across the nation, which is why the settlement is particularly disheartening. But I'm confident in Community's current leadership,' Magsig said. 'They have identified and addressed the government's concerns and I believe Community's leadership and board will do everything they can to assure its approach to vendor relationships, and reporting practices are solid to avoid this ever happening again.'
Fresno County Supervisor and Vice Chairman Garry Bredefeld called the conduct by CHS and affiliate PNA 'disgraceful and shameful.'
'They violated the trust of their patients by providing extravagant benefits and kickbacks to executives and some physicians in order to entice them to refer their patients to Community,' he said.
Bredefeld, a retired clinical psychologist, said the medical community had already lost the confidence and trust of the public with its 'mishandling of the COVID-19 pandemic.'
'Now this corruption has been exposed which will further diminish the trust. I applaud the (Department of Justice) for their work and ensuring that there now will be greater oversight of these institutions, so this corruption never occurs again,' he said.
Fresno County District 3 Supervisor Luis Chavez said he was 'deeply disappointed' by the details that have come to light in the settlement and wanted to see concrete change to restore trust.
'I've met directly with the new CEO and had frank, honest conversations about the urgent need to rebuild community trust,' Chavez said. 'I made it clear that restoring confidence in our healthcare system requires more than words, it requires action. The new leadership team has assured me of their commitment to downtown Fresno and to delivering care that puts patients first. But assurances alone won't move the needle.'
'We need to see real, measurable improvements, starting with transparency, reducing emergency room wait times and ensuring that life-saving services are readily available to every corner of our city. I will continue to hold this hospital system accountable and remain steadfast in advocating for the health, safety, and dignity of every Fresno resident,' he said.