Does Harvard need a medical school?

Boston Globe, 10-05-2025
As I consider a response to this situation, my nine years as dean of HMS remind me that the school is poorly understood within the Harvard community. Thus, a brief primer on the country's top medical school may be useful.
Founded in 1782, HMS is the third oldest US medical school, was Harvard's first graduate school, and has long been the country's top-ranked medical school.
HMS has a complex organizational structure. Most of the roughly 160 US degree-granting medical schools are associated with universities, but the relationships vary. Stanford University, Johns Hopkins University, the University of Pennsylvania, and many other universities own both their medical schools and the associated hospitals. Others resemble Harvard, where the medical school is part of the university but the hospitals are independent corporate entities, linked to the school by affiliation agreements. These agreements stipulate shared educational activities and the terms by which HMS faculty employed by hospitals are appointed, promoted, and subject to medical school policies.
The foundational goal of HMS is to educate and graduate 165 students per year who emerge with medical degrees after completing a four-year curriculum combining preclinical education with hospital-based clinical training.
HMS is also internationally recognized for the research of its faculty, who are employed by either HMS or HMS-affiliated hospitals and institutes. Approximately 325 faculty employed by HMS are deployed in various departments on the HMS quad, the informal name for the Harvard-owned campus on Longwood Avenue in Boston. These faculty are recruited through highly competitive searches, and their research is kickstarted with generous startup funds. The National Institutes of Health is the largest funder of ongoing research, with approximately $234 million awarded to HMS in 2024.
Many more HMS faculty are employed by the affiliated hospitals and institutes. Most provide clinical care and education, but several thousand with MD, PhD, or MD/PhD degrees primarily conduct research, ranging from fundamental science to work in every disease area. This research occurs in hospital facilities and is funded by grants awarded to the hospitals. The combined grant funding to HMS affiliates is three to four times greater than that awarded directly to all Harvard schools. A small number of affiliate-based faculty also hold appointments in quad departments; one prominent example is a recent Nobel Prize recipient.
This relationship allows Harvard to receive recognition for the achievements of faculty at the quad and hospitals, while hospitals and their faculty benefit reputationally from association with the Harvard brand.
What accounts for many Harvard faculty being poorly informed about the medical school and its faculty?
The first issue is numerical: the several thousand faculty employed by HMS affiliates vastly outnumber the roughly 325 on the quad, and because they work in hospitals rather than on a Harvard campus, most other Harvard faculty rarely encounter them.
There are other differences between HMS hospital-based faculty and those at the quad and other Harvard schools. Few of the former are hired on what is called an up-or-out tenure clock, under which employment ends if professorial tenure isn't granted within a specified period, so many remain for years at the assistant or associate professor rank. In addition, when HMS affiliate-employed faculty are promoted to full professor through a rigorous Harvard-managed process, they receive tenure of title, meaning they hold their title indefinitely; but unlike professors at the quad and elsewhere in the university, they lack the financial tenure that provides an indefinite salary commitment. These differences lead some to conclude that HMS is not really a Harvard school.
This thinking works against Harvard's interests. At a moment when medical science and health research are seen as increasingly important, the entire HMS ecosystem is, despite its organizational complexity, a critical Harvard asset. And when the fuel enabling this research is held hostage to punish the university for alleged flaws largely unrelated to HMS, questioning the importance of HMS to Harvard is a serious strategic blunder.
To remain a world-renowned modern university, Harvard must continue research at HMS, its affiliates, and across the university. The humanities, many social sciences, and research-light Harvard schools might continue without external funding. But the mission of HMS and its affiliates has developed over the past 70 years in collaboration with federal research funding. The focus of our common struggle should be preventing this success from being terminated by misguided government actions.
Effectively defending this incredible ecosystem requires a clear-eyed understanding of what HMS is, how it contributes in multiple ways to the academic culture of Harvard and the welfare of society, and why this vital relationship must continue if Harvard wishes to remain a great university in the century ahead.

Related Articles

Illinois becomes third state to restrict use of artificial intelligence in mental health industry as experts warn about 'AI psychosis'

New York Post, an hour ago

Illinois passed a bill banning therapists from employing artificial intelligence chatbots for assistance with mental health therapy, as experts countrywide warn against people's ever-growing reliance on the machines.

The 'Therapy Resources Oversight' legislation prohibits licensed mental health professionals in Illinois from using AI for treatment decisions or communication with clients. It also bans companies from recommending chatbot therapy tools as a be-all alternative to traditional therapy.

Enforcement of the bill will rely on complaints from the public, which the Illinois Department of Financial and Professional Regulation will investigate. Anyone determined to be violating the ban could face a civil penalty of up to $10,000, according to the legislation text. Utah and Nevada, two Republican-run states, previously passed similar laws limiting AI's role in mental health services, in May and late June, respectively.

Unregulated chatbots can take harmless conversations in any direction, sometimes incidentally leading people to divulge sensitive information, or pushing people who are already in vulnerable situations to do something drastic, like take their own life, experts have warned. A Stanford University study released in June found that many chatbots, which are programmed to respond enthusiastically to users, fail to sidestep concerning prompts, including requests for high bridges in specific locations to jump off of.

Whereas chatbots affirm unequivocally regardless of the circumstance, therapists provide support and the means to help their patients improve, Vaile Wright, senior director for the office of health care innovation at the American Psychological Association, told the Washington Post. 'Therapists are validating, but it's also our job to point out when somebody is engaging in unhealthy thoughts, feelings, behaviors and then help somebody challenge those and find better options,' Wright told the outlet.

The bans, though, are difficult to enforce effectively, and they can't prevent everyday people from turning to AI for mental health assistance on their own. New research released in early August found that many bots like ChatGPT are inducing 'AI psychosis' in unwitting users with no history of mental illness.

Roughly 75% of Americans have used some form of AI in the last six months, with 33% reporting daily usage for anything from help on homework to desperate romantic connections. This deep engagement is breeding psychological distress in heavy users, according to the digital marketing study. Many youth, in particular, are falling down the chatbot rabbit hole and turning to machines to supplement human interaction.

Character.AI, a popular platform where users can create and share chatbots usually based on fictional characters, had to place a warning clarifying that anything the bots say 'should not be relied upon as fact or advice' after a Florida teen fell in love with his 'Game of Thrones' AI character and took his own life.
The platform is still dealing with a lawsuit filed against the company over the teen's death. Despite repeated attempts to dismiss it on First Amendment grounds, a federal judge ruled in August that the suit could move forward. Another Texas family sued after a chatbot on the app named 'Shonie' encouraged their autistic son to cut himself.

Her 6-Year-Old Son Told Her He Wanted to Die. So She Built an AI Company to Save Him

Gizmodo, 9 hours ago

The burgeoning world of AI-powered mental health support is a minefield. From chatbots giving dangerously incorrect medical advice to AI companions encouraging self-harm, the headlines are filled with cautionary tales. High-profile apps like Character.AI and Replika have faced backlash for harmful and inappropriate responses, and academic studies have raised alarms. Two recent studies from Stanford University and Cornell University found that AI chatbots often stigmatize conditions such as alcohol dependence and schizophrenia, respond 'inappropriately' to certain common conditions, and 'encourage clients' delusional thinking.' They warned about the risk of over-reliance on AI without human oversight.

But against that backdrop, Hafeezah Muhammad, a Black woman, is building something different. And she's doing it for reasons that are painfully personal. 'In October of 2020, my son, who was six, came to me and told me that he wanted to kill himself,' she recounts, her voice still carrying the weight of that moment. 'My heart broke. I didn't see it coming.'

At the time, she was an executive at a national mental health company, someone who knew the system inside and out. Yet she still couldn't get her son, who has a disability and is on Medicaid, into care. 'Only 30% or less of providers even accept Medicaid,' she explains. 'More than 50% of kids in the U.S. now come from multicultural households, and there weren't solutions for us.' She says she was terrified, embarrassed, and worried about the stigma of a child struggling. So she built the thing she couldn't find.

Today, Muhammad is the founder and CEO of Backpack Healthcare, a Maryland-based provider that has served more than 4,000 pediatric patients, most of them on Medicaid. It's a company staking its future on the radical idea that technology can support mental health without replacing the human touch.

On paper, Backpack sounds like many other telehealth startups. In reality, its approach to AI is deliberately pragmatic, focusing on 'boring' but impactful applications that empower human therapists. An algorithm pairs kids with the best possible therapist on the first try (91% of patients stick with their first match). AI also drafts treatment plans and session notes, giving clinicians back hours they used to lose to paperwork. 'Our providers were spending more than 20 hours a week on administrative tasks,' Muhammad explains. 'But they are the editors.' This human-in-the-loop approach is central to Backpack's philosophy.

The most critical differentiator for Backpack lies in its robust ethical guardrails. Its 24/7 AI care companion is represented by 'Zipp,' a friendly cartoon character. It's a deliberate choice to avoid the dangerous 'illusion of empathy' seen in other chatbots. 'We wanted to make it clear this is a tool, not a human,' Muhammad says. Investor Nans Rivat of Pace Healthcare Capital calls this the trap of 'LLM empathy,' where users 'forget that you're talking to a tool at the end of the day.' He points to cases like Character.AI's, where a lack of these guardrails led to 'tragic' outcomes.

Muhammad is also adamant about data privacy. She explains that individual patient data is never shared without explicit, signed consent. However, the company does use aggregated, anonymized data to report on trends, like how quickly a group of patients was scheduled for care, to its partners. More importantly, Backpack uses its internal data to improve clinical outcomes.
By tracking metrics like anxiety or depression levels, the system can flag a patient who might need a higher level of care, ensuring the technology serves to get kids better, faster.

Crucially, Backpack's system also includes an immediate crisis detection protocol. If a child types a phrase indicating suicidal ideation, the chatbot instantly replies with crisis hotline numbers and instructions to call 911. Simultaneously, an 'immediate distress message' is sent to Backpack's human crisis response team, who reach out directly to the family. 'We're not trying to replace a therapist,' Rivat says. 'We're adding a tool that didn't exist before, with safety built in.'

Beyond its ethical tech, Backpack is also tackling the national therapist shortage. Unlike doctors, therapists traditionally have to pay out of pocket for the expensive supervision hours required to get licensed. To combat this, Backpack launched its own two-year, paid residency program that covers those costs, creating a pipeline of dedicated, well-trained therapists. More than 500 people apply each year, and the program boasts an impressive 75% retention rate.

In 2021, then-U.S. Surgeon General Dr. Vivek H. Murthy called mental health 'the defining public health issue of our time,' referring to the crisis plaguing young people.

Muhammad doesn't dodge the criticism that AI could make things worse. 'Either someone else will build this tech without the right guardrails, or I can, as a mom, make sure it's done right,' she says. Her son is now 11, thriving, and serves as Backpack's 'Chief Child Innovator.'

'If we do our job right, they don't need us forever,' Muhammad says. 'We give them the tools now, so they grow into resilient adults. It's like teaching them to ride a bike. You learn it once, and it becomes part of who you are.'
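The article doesn't describe Backpack's implementation, but the protocol it outlines (detect a risk phrase, reply at once with hotline resources, and alert a human team in parallel) can be sketched in a few lines of Python. This is a minimal illustration only: the phrase list, function names, and notification stub are assumptions, and a production system would rely on far more sophisticated detection than keyword matching.

# Hypothetical sketch of the crisis-escalation flow described above.
# The phrase list, names, and notification stub are illustrative
# assumptions, not Backpack Healthcare's actual implementation.

RISK_PHRASES = ("want to die", "kill myself", "hurt myself", "end my life")

CRISIS_REPLY = (
    "If you are thinking about hurting yourself, please call or text 988 "
    "(Suicide & Crisis Lifeline) right now, or call 911 in an emergency."
)


def notify_crisis_team(message: str) -> None:
    """Stand-in for the 'immediate distress message' to the human team."""
    print(f"[CRISIS ALERT] Human follow-up required: {message!r}")


def handle_message(message: str) -> str:
    """Return the chatbot's reply, escalating to humans if risk is detected."""
    lowered = message.lower()
    if any(phrase in lowered for phrase in RISK_PHRASES):
        # Both actions happen on the same turn: the child sees hotline
        # information immediately, and a human is alerted in parallel.
        notify_crisis_team(message)
        return CRISIS_REPLY
    # Placeholder for the normal conversational path.
    return "Thanks for telling me. What else happened today?"


if __name__ == "__main__":
    print(handle_message("I want to die"))  # triggers the crisis protocol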
