AI For Mental Health Gets Attentively Analyzed Via Exciting New Initiative At Stanford University

Forbes | April 17, 2025

Stanford University has launched an important new initiative on AI for mental health, AI4MH, doing so via the School of Medicine's Department of Psychiatry and Behavioral Sciences.
In today's column, I continue my ongoing coverage of the latest trends in AI for mental health by highlighting a new initiative at Stanford University, known aptly as AI4MH, undertaken by Stanford's esteemed Department of Psychiatry and Behavioral Sciences in the School of Medicine.
The inaugural launch of AI4MH took place on April 15, 2025, and luminary Dr. Tom Insel, M.D., famed psychiatrist and neuroscientist, served as the kick-off speaker. Dr. Insel is renowned for his outstanding work in mental health research and technology and served as the Director of the National Institute of Mental Health (NIMH). He is also known for having founded several companies that innovatively integrate high tech into mental health care.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Readers familiar with my coverage on AI for mental health might recall that I've closely examined and reviewed a myriad of important aspects underlying this rapidly evolving topic, doing so in over one hundred of my column postings.
This includes analyzing the latest notable research papers and avidly assessing the practical utility of apps and chatbots employing generative AI and large language models (LLMs) for performing mental health therapy. I have spoken about those advances, such as during an appearance on a CBS 60 Minutes episode last year, and compiled the analyses into two popular books depicting the disruption and transformation that AI is having on mental health care.
It is with great optimism that I share here the new initiative at the Stanford School of Medicine on AI4MH, and I fully anticipate that this program will provide yet another crucial step in identifying where AI for mental health is heading and its impacts on society all told.
Thanks go to the organizers of AI4MH whom I met at the inaugural event, including Dr. Kilian Pohl, Professor of Psychiatry and Behavioral Sciences (Major Labs and Incubator), Ehsan Adeli, Assistant Professor of Psychiatry and Behavioral Sciences (Public Mental Health and Population Sciences), and Carolyn Rodriguez, Professor of Psychiatry and Behavioral Sciences (Public Mental Health and Population Sciences), among others, for their astute vision and resolute passion for getting this vital initiative underway.
During his talk, Dr. Insel carefully set the stage, depicting the current state of AI for mental health care and insightfully exploring where the dynamic field is heading. His remarks established a significant point that I've been repeatedly urging, namely that our existing approach to mental health care is woefully inadequate and that we need to rethink and reformulate what is currently being done.
The need, or shall we say, the growing demand for mental health care is astronomical, yet the available and accessible supply of quality therapists and mental health advisors falls far short in numerous respects.
I relished that this intuitive sense of the mounting issue was turned into a codified and well-structured set of five major factors by Dr. Insel: diagnosis, engagement, capacity, quality, and accountability.

I'll recap each of those essential factors.
Starting with diagnosis as a key factor: it is perhaps surprising to some to discover that diagnosing mental health conditions is a lot more loosey-goosey than might otherwise be assumed. The layperson tends to assume that a precise and fully calculable means exists to produce an ironclad mental health diagnosis. This is not the case. If you peruse the DSM-5 standard guidebook, you'll quickly realize that there is a lot of latitude and imprecision underpinning the act of diagnosis. The upshot is that there is a lack of clarity when it comes to undertaking a diagnosis, and we need to recognize that this is a serious problem requiring much more rigor and reliability.
For my detailed look at the DSM-5 and how generative AI leans into the guidebook contents while performing AI-based mental health diagnoses, see the link here.
The second key factor entails engagement.
The deal is this. People needing or desiring mental health care are often unable to readily access it. This can be due to cost, logistics, and a litany of economic and supply/demand considerations. Dr. Insel noted a statistic that perhaps 60% of those who could potentially benefit from therapy aren't receiving mental health care; thus, a sizable proportion of people aren't getting needed help. That's a problem that deserves close scrutiny and outside-the-box thinking to resolve.
A related factor is capacity, the third of the five listed.
We don't have enough therapists and mental health professionals, along with related facilities, to meet the existing and growing needs for mental health care. In the United States, for example, various published counts suggest there are approximately 200,000 therapists and perhaps 100,000 psychologists serving a population of nearly 350 million people. That ratio won't cut it, and indeed, studies indicate that practicing mental health professionals are overworked, highly stressed, and straining under workloads that at times can compromise the quality of care.
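To make that ratio concrete, here's a quick back-of-the-envelope calculation using the approximate counts cited above (a minimal sketch; the figures are rough published estimates, not authoritative tallies):

# Back-of-the-envelope check on the capacity gap described above.
# All counts are the approximate, rough figures cited in this column.
therapists = 200_000       # approximate published count of therapists
psychologists = 100_000    # approximate published count of psychologists
population = 350_000_000   # rough U.S. population figure

clinicians = therapists + psychologists
people_per_clinician = population / clinicians
print(f"Roughly {people_per_clinician:,.0f} people per clinician")
# Output: Roughly 1,167 people per clinician

Even under generous assumptions about caseloads, a ratio on the order of a thousand people per clinician illustrates why supply cannot meet demand.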
For my coverage of how therapists are using AI as a means of augmenting their practice, allowing them to focus more on their clients and sensibly cope with the heightened workloads, see the link here.
The fourth factor is quality.
You can plainly see from the other factors how quality can be insidiously undercut. If a therapist is tight on time and trying to see as many patients as possible, seeking to serve as many people as they can, the odds of quality taking a hit are relatively obvious. Overall, even with the best of intentions, quality is frequently fragmented and episodic. There is also a kind of reactive quality phenomenon: after realizing that quality is suffering, a short-term boost in quality occurs, but this soon fizzles out, and the constraining infrastructure magnetically pulls quality back down to its somewhat haphazard prior levels.
For my analysis of how AI can be used to improve quality when it comes to mental health care, see the link here.
Accountability is the fifth factor.
There's a famous quote attributed to the legendary management guru Peter Drucker that what gets measured gets managed. The corollary to that wisdom is that what doesn't get measured is bound to be poorly managed. The same holds true for mental health care. By and large, there is sparse data on the outcomes associated with mental health therapy. Worse still, perhaps, the adoption of evidence-based mental health care is thin and leaves us in the dark about the big picture associated with the efficacy of therapy.
For my discussion about AI as a means of collecting mental health data and spurring evidence-based care, see the link here and the link here.
The talk helped to clarify that we pretty much have a broken system when it comes to mental health care today and that, if we don't do something at scale about it, the prognosis is that things will get even worse.
A tsunami of mental health needs is heading towards us. The mental health therapy flotilla currently afloat is not prepared to handle it and is barely keeping above water as is.
What can be done?
One of a slew of intertwined opportunities is the use of modern-day AI.
The advent of advanced generative AI and LLMs has already markedly impacted mental health advisement across the board. People are consulting daily with generative AI on mental health questions. Recent studies, such as one published in the Harvard Business Review, indicate that the #1 use of generative AI is now for therapy-related advice (I'll be covering that in an upcoming post, so please stay tuned).
We don't yet have tight figures on how widespread the use of generative AI for mental health purposes is, but at the population level, we know that there are, for example, 400 million weekly active users of ChatGPT, and likely several hundred million other users across Anthropic Claude, Google Gemini, Meta Llama, and the like. Estimates of the proportion that might be using AI for mental health insights are worth considering, and I identify various means of estimation at the link here.
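To get a feel for the scale involved, here's a minimal sketch of such a population-level estimate; the user counts are the rough figures noted above, and the assumed shares are purely illustrative placeholders rather than measured proportions:

# A minimal sketch of the population-level estimate discussed above.
# The weekly user counts are the rough figures cited in this column;
# the shares are illustrative assumptions, since tight figures on
# mental health usage of generative AI do not yet exist.
weekly_active_users = 400_000_000 + 300_000_000  # ChatGPT plus other LLMs (rough)

for assumed_share in (0.05, 0.10, 0.25):  # hypothetical proportions
    users = weekly_active_users * assumed_share
    print(f"If {assumed_share:.0%} seek mental health insights: ~{users / 1e6:.0f}M people weekly")

Even at the most conservative assumed share, the implied usage dwarfs the caseload capacity of the entire human therapist workforce.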
It makes abundant sense that people would turn to generative AI for mental health matters. Most generative AI apps are free to use, tend to be available 24/7, and can be utilized just about anywhere on Earth. You can create an account in minutes and immediately start conversing about a wide range of mental health aspects.
Contrast those ease-of-use characteristics with finding and using a human therapist. First, you need to find a therapist and determine whether they seem suited to your preferences. Next, you need to set up an agreement for services, schedule sessions, deal with constraints on the therapist's availability, handle the costs, and so on. There is a sizable amount of friction associated with using human therapists.
Contemporary AI is nearly friction-free in comparison.
There's more to the matter.
People tend to like the sense of anonymity associated with using AI for this purpose. If you sought a human therapist, your identity would be known, and a fellow human would have your deepest secrets. Users of AI assume that they are essentially anonymous to AI and that AI won't reveal to anyone else their private mental health considerations.
Another angle is that conversing with AI is generally a lot easier than doing so with a human therapist. The AI has been tuned by the AI makers to be overly accommodating. This is partially done to keep users loyal; if the AI were overbearing, users would probably switch to some other vendor's AI.
Judgment is a hidden consideration that makes a big difference, too. It goes like this. You see a human therapist. During a session, you get a visceral sense that the therapist is judging you, perhaps by a raising of their eyebrows or a hardening tone of voice. The therapist might explicitly express judgments about you to your face, which certainly makes sense in providing mental health guidance, though preferably done with a suitable bedside manner.
None of that is normally likely to arise when using AI.
The default mode of most generative AI apps is that they avidly avoid judging you. Again, this tuning is undertaken at the direction of the AI makers (in case you are interested in what an unfiltered, unfettered generative AI might say to users, see my analysis at the link here).
A user of AI can feel utterly unjudged. Of course, you can argue whether that is a proper way to perform mental health advisement, but nonetheless, the point is that people are more likely to cherish the non-judgmental zone of AI.
As a notable aside, I've demonstrated that you can readily prompt AI to be more 'judgmental' and incisive about your mental health, which overrides the usual default and provides a less guarded assessment (see the link here). In that sense, the AI isn't mired or stuck in an all-pleasing mode that would seem inconsistent with proper mental health assessment and guidance.
Users can readily direct the AI as they prefer, or use customized GPTs that provide the same change in functionality (see the link here).
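As a hypothetical illustration of directing the AI in that fashion, here's a minimal sketch using the OpenAI Python SDK; the model name and the wording of the instruction are illustrative assumptions, not a vetted therapeutic prompt:

# A minimal sketch of overriding the default all-pleasing tuning via a
# system-style instruction. Assumes the OpenAI Python SDK is installed
# and an OPENAI_API_KEY is set in the environment; the model name and
# prompt wording below are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice of model
    messages=[
        {
            "role": "system",
            "content": (
                "Act as a candid counselor. Do not default to flattery or "
                "reassurance; tactfully but directly point out inconsistencies "
                "and unhealthy patterns in what I describe."
            ),
        },
        {
            "role": "user",
            "content": "Lately I keep canceling plans with friends and then feel guilty about it.",
        },
    ],
)

print(response.choices[0].message.content)

The same instruction can be baked into a customized GPT so that every conversation starts in the less guarded mode.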
Use of AI in this context is not a savior per se, but it does provide a huge upside in many crucial ways. A recurring question or qualm posed to me is whether the downsides or gotchas of AI are going to impede, and possibly harm, users seeking suitable mental health advisement.
For example, the reality is that AI makers, via their licensing agreements, usually reserve the right to manually inspect a user's entered data and to reuse that data to further train their AI (see my discussion at the link here). The gist is that people aren't necessarily going to have their entered data treated with any kind of healthcare-grade privacy or confidentiality.
Another issue is the nature of so-called AI hallucinations. At times, generative AI produces confabulations, seemingly made up out of thin air, that appear to be truthful but are not grounded in fact. Imagine that someone is using generative AI for mental health advice, and suddenly the AI tells the person to do something untoward. Not good. The person might have become dependent on the AI, building a sense of trust, and not realize when an AI hallucination has occurred.
For more on AI hallucinations, see my explanation at the link here.
What are we to make of these downsides?
First, we ought to be careful not to toss out the baby with the bathwater (an old expression).
Categorically rejecting AI for this type of usage would seem myopic and probably not even practical (for my assessment of the calls for banning certain uses of generative AI, see the link here). As best we can tell so far, ready access to generative AI for mental health purposes seems to outweigh the downsides (please note that more research and polling are welcomed and indeed required on these matters).
Furthermore, there are advances in AI that are mitigating or eliminating many of the gotchas. AI makers are astute enough to realize that they need to keep their wares progressing if they wish to meet user needs and remain a viable money-making product or service.
An additional twist is that AI can be used by mental health therapists as an integral tool in their mental health care toolkit. We don't need to fall into the mental trap that a patient uses either AI or a human therapist; they can use both in a smartly devised joint arrangement. The conventional non-AI approach is the classic client-therapist relationship. I have coined a phrase for what we are entering into: a new triad, labeled the client-AI-therapist relationship. The therapist uses AI seamlessly in the mental health care process and embraces rather than rejects the capabilities of AI.
For more on the client-AI-therapist triad, see my discussion at the link here and the link here.
I lean into the celebrated words of American psychologist Carl Rogers: 'In my early professional years, I was asking the question, how can I treat, or cure, or change this person? Now I would phrase the question in this way: how can I provide a relationship that this person may use for their personal growth?'
That relationship is going to include AI, one way or another.
One quite probable view of the future is that we will inevitably have fully autonomous AI that can provide mental health therapy that is completely on par with human therapists, potentially even exceeding what a human therapist can achieve. The AI will be autonomously proficient without the need for a human therapist at the ready.
This might be likened to the Waymo or Zoox of mental health therapy, referring to the emerging advent of today's autonomous self-driving cars. As a subtle clarification, existing self-driving cars are currently only at Level 4 of the standard autonomy scale, not yet reaching the topmost Level 5. Similarly, I have predicted that AI for mental health will likely first attain Level 4, akin to the autonomy level of today's self-driving cars, and then progress further to Level 5.
For my detailed explanation and framework for the levels of autonomy associated with AI for mental health, see the link here.
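To give a rough flavor of how such an autonomy scale might be encoded, here's a simplified, hypothetical sketch by analogy to the driving-automation levels; the one-line descriptions are illustrative placeholders, not the detailed framework behind the link:

# A simplified, hypothetical encoding of autonomy levels for AI-based
# mental health therapy, by analogy to the driving-automation scale.
# The one-line descriptions are rough placeholders, not a full framework.
from enum import IntEnum

class TherapyAutonomyLevel(IntEnum):
    LEVEL_0 = 0  # no AI: the classic client-therapist relationship
    LEVEL_1 = 1  # AI assists with admin tasks such as notes and scheduling
    LEVEL_2 = 2  # AI offers suggestions; the therapist decides everything
    LEVEL_3 = 3  # AI converses with clients under therapist supervision
    LEVEL_4 = 4  # AI advises autonomously within bounded scenarios
    LEVEL_5 = 5  # fully autonomous AI on par with human therapists

# Per the self-driving analogy, today's frontier sits at Level 4.
current_frontier = TherapyAutonomyLevel.LEVEL_4
print(current_frontier < TherapyAutonomyLevel.LEVEL_5)  # True: Level 5 remains ahead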
I wholly concur with Dr. Insel's point that we need to consider the use of AI on an ROI basis, comparing apples to apples. Given his outlined set of pressing issues in the existing quagmire of mental health care, we must take a thoughtful stance by gauging AI against what we have now.
You see, we need to realize that AI, if suitably devised and adopted, can demonstrably aid in overcoming the prevailing mental health care system problems. Plus, AI will likely open the door to new possibilities. Perhaps we will discover that AI not only aids evidence-based mental health care but takes us several steps further.
AI, when used cleverly, might help us to decipher how human minds work. We could shift from our existing black box approach to understanding mental health and reveal the inner workings that cause mental health issues. As eloquently stated by Dr. Insel, AI could be for mental health what DNA has been for cancer.
We are clearly amid a widespread disruption and transformation of mental health care, and AI is an amazing and exciting catalyst driving us toward a mental health care future that we get to define. Let's all use our initiative and our energies to define and guide the coming AI adoption to fruition as a benefit to us all.


Related Articles

Current and Future AI Uses in Dermatology: An Expert's View

Medscape | a day ago

Roxana Daneshjou, MD, PhD, is one of the leading experts on artificial intelligence (AI) in American dermatology. Daneshjou, assistant professor of biomedical data science and dermatology at Stanford University, Stanford, California, leads landmark AI studies, is an associate editor of the journal NEJM AI, and gives presentations about the topic, including one at the recent Society for Investigative Dermatology (SID) 2025 annual meeting where she starkly warned colleagues that 'dermatologists who use AI will replace dermatologists who don't.'

So one could assume that Daneshjou embraces AI in her clinical practice. But she doesn't — not quite yet. While AI is helpful with office tasks that involve writing, she said, it's not currently good enough at handling tasks such as evaluating skin lesions or helping solve diagnostic riddles. 'You should only use it for tasks where you can easily catch errors and rectify them. You shouldn't use it when you're not sure of the answer or the next step because you could be badly misled,' she said in an interview with Medscape Medical News.

But just wait. 'Eventually, once we have valid, well-validated AI tools that can help with diagnosis and triage, they're going to become essentially standard of care,' Daneshjou said.

The following are excerpts from the interview with Daneshjou about the present and future of AI in dermatology.

What do you mean when you say, 'Dermatologists who use AI will replace dermatologists who don't'?

Daneshjou: That's actually a rehashed phrase originally coined by Curt Langlotz, a radiologist who made the same claim about radiologists. The point is that dermatologists aren't going anywhere. AI is not replacing dermatologists. It's that dermatologists who use AI will replace dermatologists who don't.

Will some dermatologists be left behind?

Daneshjou: Medicine always evolves. There was a time when we didn't have advanced imaging technologies like CT scans and MRIs. And think about how many dermatologists now use electronic health records (EHRs) vs writing everything down by hand. There are still some people writing things by hand, but physicians who can use EHRs have largely replaced those who can't. This isn't a new phenomenon. Whenever new technology comes along, it becomes incorporated into medical practice, and those who learn to adapt and adopt it eventually replace those who don't.

Is there fear and denial in the dermatology community about AI?

Daneshjou: There's fear, but there's also enthusiasm — sometimes enthusiasm to the point of using things that aren't ready for prime time. In my SID talk, I discussed how it's not safe to use large language models [AI] — LLMs — for any clinical task where you don't know the answer or can't validate it quickly. These models can have errors that are difficult to catch because the outputs look so convincing.

Can you give an example of how using LLMs clinically might get a dermatologist in trouble?

Daneshjou: In my presentation, I showed AI being asked to calculate a RegiSCAR score for a patient. It gives an output that looks really convincing but has some of the scores wrong. If you didn't know the RegiSCAR score yourself, you might not catch that mistake. Similarly, if you ask about medication dosing, sometimes AI gets it right. But research papers show it can get dosing wrong. If you're not certain of the answer, you shouldn't use an LLM for that task. That's different from giving it bullet points and saying, 'Follow these bullet points to draft a prior authorization letter' or 'Write an after-visit summary for my patient' about a disease you're well-versed in, where you can verify [the text] for accuracy.

Are there reliable clinical uses for AI now?

Daneshjou: First, I should note that publicly facing models aren't Health Insurance Portability and Accountability Act (HIPAA)–compliant, so you have to be careful about putting patient information in them. Some institutions like Stanford have HIPAA-compliant versions internally. I'd be very wary of using these models for diagnosis and treatment because they can say things that are wrong. I've heard dermatologists say they've put patient images into these models to get a differential diagnosis, which I would strongly advise against — both for HIPAA concerns and because the outputs aren't reliable.

What about 'vision language' models (VLMs) in dermatology that are trained on skin images and could potentially be used for tasks such as identifying lesions?

Daneshjou: The VLMs we've tested perform worse than the LLMs. They're even more in the research realm.

Are current AI systems actually good at categorizing skin lesions?

Daneshjou: There are many papers claiming they're good, but there's not much prospective trial data validating that performance. We need more trial data proving that a particular model will continue to perform well in a clinical setting.

So AI isn't ready for prime time in diagnosis and treatment?

Daneshjou: That's correct. It's more useful in a supportive role — helping with writing or editing text.

You worked on a 'red teaming' event that brought together attendees — engineers, computer scientists, and health professionals such as dermatologists — to assign medical tasks to AI and ask questions. The results were published in Nature in March 2025. What did you discover?

Daneshjou: We found that across all models tested, there was an error rate of around 20%. As our chief data scientist at Stanford likes to joke, 'You can use large language models for any task where a 20% error rate is acceptable.'

Where do you think AI and dermatology are headed next?

Daneshjou: Image-based models will eventually get good enough to earn US Food and Drug Administration clearance. But my concern is this will happen without the creators having to prove the models work across diverse skin tones — an incredibly important part of validation. Our research has shown that most image-based AI models exclude diverse skin tones in their training and testing. We're also going to see more multimodal models — ones that incorporate diverse information like images, text, and molecular data — to provide outputs or risk assessments. That's where AI is heading generally, not just focusing on text or images alone but also taking information from multiple modalities the way humans do.

How often do you use AI in your clinical practice?

Daneshjou: Not very much. I run a research lab, so I use it extensively in research. I've used it to help with grant writing and to analyze recommendation letters I've written, asking it to identify weaknesses so I can improve them. Clinically, I've shown my nurse how to use our secure AI to draft prior authorization letters or rebuttals to insurance [rejections]. But otherwise, I don't really use it in clinic.

You've discussed how AI handles clinical vignettes vs real patients. What should dermatologists understand about this?

Daneshjou: Headlines often misrepresent reality. They'll say, 'AI models can diagnose patients.' But in reality, these models were given very nicely packaged vignettes and were able to provide a diagnosis. Patients don't come as nicely packaged vignettes. In real clinical practice, I have to ask, 'What's going on?' I have to do the skin check, identify lesions, gather history, and ask about duration, symptoms, occupation, and sun exposure. I have to collect all this information and make a judgment. Sometimes, the history doesn't match what you see, so you have to use clinical reasoning. This kind of clinical reasoning isn't what they're testing in research papers that claim AI can diagnose patients.

Would you recommend using AI at all for generating differential diagnoses?

Daneshjou: I'm not using AI just to use it. I need a specific reason why I think it will help me. For example, if I'm writing a grant and want a summary of one of my own research papers, I might ask it to write a first draft that I can edit because I know my own research well enough to verify what's correct. But I'm not using it to develop differentials for my patients.

What would you advise dermatologists who want to adapt to AI but don't know where to start?

Daneshjou: The American Academy of Dermatology (AAD) has AI boot camp videos. At the annual AAD meetings, the AAD offers educational sessions on AI. If you look in the Journal of the American Academy of Dermatology, there are Continuing Medical Education reviews that the AAD's Augmented Intelligence Committee has written to educate dermatologists about AI technologies and what to watch for. A few years ago, this content was sparse. But there's been a concerted effort to create educational materials for dermatologists.

What would you tell dermatologists who are agonizing about AI?

Daneshjou: I see people posting on LinkedIn what I would call outrageous claims based on research papers. They'll say, 'This research paper shows we have autonomous AI agents that can treat patients,' but when you read the actual paper, it doesn't show that at all. Often, the hype doesn't match the reality on the ground.

And what about those who think AI is overblown and not worth worrying about?

Daneshjou: Claims about AI replacing physicians or dermatologists are indeed overblown. But this is definitely something dermatologists will have to adapt to. It's eventually going to become part of practice in some ways.

Halo Biosciences Announces Thorax Publication of Phase 2a SATURN Study Results in PH-ILD

Business Wire | a day ago

PALO ALTO, Calif.--(BUSINESS WIRE)--Halo Biosciences ('Halo'), a clinical-stage biotechnology company developing extracellular matrix-targeted therapies, today announced publication of results from the Phase 2a SATURN study in Thorax. The study, conducted at Stanford University, evaluated 4-methylumbelliferone (4-MU) in patients with pulmonary hypertension, a highly progressive disease with significant unmet needs.

The SATURN study, a Phase 2a randomized, double-blind, placebo-controlled trial, enrolled 16 patients with pulmonary hypertension. 4-MU was safe and well-tolerated throughout the 24-week treatment period. The primary hemodynamic measurement of change in pulmonary vascular resistance was not statistically significant. Among patients with PH-ILD, prespecified exploratory efficacy signals showed a mean improvement of 66 meters in six-minute walk distance and enhanced quality-of-life scores. These findings support further clinical evaluation of 4-MU as a potential disease-modifying therapy for inflammatory and fibrotic lung diseases.

'These clinical data reinforce the scientific rationale for targeting hyaluronan in fibrotic and inflammatory lung disease and highlight 4-MU's potential as a first-in-class, disease-modifying ECM-modulator for patients with serious conditions like PH-ILD,' said Paul Bollyky, M.D., professor of medicine at Stanford University and scientific co-founder of Halo Biosciences. 'This represents a meaningful step forward for individuals living with PH-ILD, a condition with limited treatment options and a high burden of disease.'

HB-1614, Halo's lead investigational therapy, is a proprietary, oral formulation of 4-MU optimized for improved bioavailability and long-term use in patients with chronic lung conditions such as PH-ILD. By inhibiting hyaluronan synthesis, HB-1614 targets a key driver of extracellular matrix (ECM) remodeling involved in inflammation and fibrosis—processes central to disease progression in several debilitating diseases, including PH-ILD.

'We are proud to see the SATURN study featured in Thorax, validating our translational approach and marking a key milestone in our development of HB-1614,' said Anissa Kalinowski, chief executive officer of Halo Biosciences. 'We are thankful to Stanford University and sponsor investigators Roham Zamanian, M.D., and Vinicio de Jesus Perez, M.D., for their leadership of the SATURN trial, unlocking the potential of this new mechanism of action.'

Halo Biosciences is progressing clinical development of HB-1614 and exploring partnership opportunities to accelerate its work in PH-ILD and other fibrotic conditions. The company holds exclusive intellectual property for its formulation and is positioned to optimize drug delivery, bioavailability and regulatory strategy. The full manuscript is now available online. To access the paper, visit:

ABOUT HB-1614

HB-1614 is Halo Biosciences' lead investigational therapy, a proprietary formulation of 4-methylumbelliferone (4-MU) designed to inhibit hyaluronan synthesis, a key driver of inflammation and fibrosis in the ECM. By targeting this dysregulated pathway, HB-1614 offers a novel, disease-modifying approach for conditions like pulmonary hypertension associated with interstitial lung disease (PH-ILD).

ABOUT PULMONARY HYPERTENSION

Pulmonary hypertension (PH) is a progressive condition caused by elevated blood pressure in the arteries of the lungs, leading to reduced oxygen exchange, right heart strain, and eventual heart failure.[i] Symptoms include breathlessness, fatigue, and dizziness.[ii] PH diagnosis is often delayed and accompanied by comorbidities, with most patients diagnosed between the ages of 60 and 70.[iii] When PH is associated with interstitial lung disease (PH-ILD), the course of disease is often more accelerated, with these patients facing a median survival of just 2 to 5 years.[iv] Currently, there is only one FDA-approved therapy for PH-ILD,[iv] leaving a significant unmet need for therapies that target the underlying mechanisms of disease progression. New approaches are urgently needed to improve outcomes and quality of life for this vulnerable patient population.

ABOUT HALO BIOSCIENCES

Halo Biosciences is a clinical-stage biopharmaceutical company targeting the extracellular matrix (ECM) to transform the treatment of diseases characterized by inflammation and fibrosis. It is headquartered in Palo Alto, CA. For more information, visit

[i] Pulmonary Fibrosis Foundation. Pulmonary Hypertension Related to Interstitial Lung Disease (for Patients). Retrieved from
[ii] Pulmonary Hypertension Association. Diagnosing Pulmonary Hypertension. Accessed on June 3, 2025, from
[iii] Mount Sinai Health System. (n.d.). Idiopathic pulmonary fibrosis. Mount Sinai Health Library. Retrieved June 3, 2025, from
[iv] Nathan, S. D., Stinchon, M. R., Atcheson, S., Simone, L., & Nelson, M. (2025). Shining a spotlight on pulmonary hypertension associated with interstitial lung disease care: The latest advances in diagnosis and treatment. Journal of Managed Care & Specialty Pharmacy, 31(1-a Suppl), S2–S29.

Stanford professor turns his terminal cancer diagnosis into a class

Yahoo | 3 days ago

Dr. Bryant Lin thought his lingering cough was just allergies. Six weeks later, the Stanford University professor received devastating news: stage 4 lung cancer.

The irony wasn't lost on Lin, who had spent years researching and teaching about non-smoker lung cancer. "I become the poster child for the disease," he said.

Lin, who never smoked and wasn't exposed to secondhand smoke, represents a growing demographic, and for Asians, the odds are higher. Asian women have twice the rate of non-smoker lung cancer as non-Asian women, according to Lin and recent studies.

Rather than retreat from his diagnosis, the 50-year-old Lin made an unprecedented decision: He created a Stanford course centered around his cancer journey, giving medical students an unfiltered view of terminal illness from a patient's perspective. "I have stage four lung cancer, which is not curable," Lin told his class. "I will likely die of this cancer or something related to this cancer. It may be one year, it may be two years, it may be five years, I really don't know."

The course aimed to rebalance medical education by showing students what patients truly experience. "Even though I knew what a patient goes through as a doctor, I didn't really know," Lin explained. By week three, Lin was documenting his chemotherapy treatments for students, sharing both physical symptoms and emotional struggles. "Feeling nauseous. Avoided the Chipotle today because of that," he told his class.

Despite his terminal prognosis, Lin remains focused on living rather than preparing for death. His priorities center on family time with his wife, Christine, and their two sons, 17-year-old Dominic and 13-year-old Atticus. The family has been candid about Lin's diagnosis and prognosis. Lin has written letters to his sons for when he's no longer there, telling them: "Whether I'm here or not, I want you to know I love you. Of the many things I've done that have given my life meaning, being your daddy is the greatest of all."

Lin's teaching philosophy extends beyond medical knowledge and also focuses on kindness and the power of hope. "It's easy to forget to be kind when you're sick," he said. "It's easy to forget to be kind when you're not feeling well, when you're busy, when life has got you down."

The course opened with a letter from a former patient who wrote: "You treated me like you would treat your own father." The patient died two weeks after writing the letter. "He spent time writing a letter for me during his last hours, days of life," Lin said emotionally. "And in a way, this class is part of my letter, my way of giving back to my community."

At the course's conclusion, Lin channeled Lou Gehrig's famous farewell speech, telling his students: "I consider myself the luckiest man on the face of this earth. I know I had a tough break, but I have an awful lot to live for."
