IU Health to build West Lafayette's first full-service hospital, part of $214 million plan
The three-part Greater Lafayette Community Growth Project will bring West Lafayette its first hospital, a news release said, along with a state-of-the-art cancer center on the Arnett Hospital campus in Lafayette and a substantial expansion of the specialty medical services available in West Lafayette.
Art Vasquez, president of IU Health's West Region, said in the release that the IU Health Board of Directors recognized the growth coming to Greater Lafayette several years ago, understanding the need for an increase in services.
"We planted our flag early, confident in the community's potential and committed to being part of its future,' Vasquez said in the release. 'Today, we're proud to announce the next bold step in that journey with the Greater Lafayette Community Growth Project. This is a major milestone, not just for our organization, but for our community.'
Johnna Dexter-Wiens, senior communications consultant for IU Health's West Region, said a location for the new West Lafayette hospital has been confirmed, but she declined to share more details as of Friday morning.
"We have secured a site in West Lafayette as part of our strategic planning process to expand access to high-quality care in the region," Dexter-Wiens said. "We are currently conducting thorough due diligence, and we look forward to sharing more details in the coming weeks. Our priority is to build in the location that will best serve the community, both now and for generations to come."
The project builds on more than a century of commitment by IU Health in Greater Lafayette, the release said, beginning with the establishment of the Arnett-Crockett Clinic in 1922.
Dennis Murphy, IU Health president and chief executive officer, said in the release that IU Health has been responsible for bringing several "firsts" in health care to Tippecanoe County, including the first multi-specialty clinic, the first cardiology and pediatric departments and the first urgent care.
"Now, we're preparing to bring another milestone — the first hospital in West Lafayette,' Murphy said in the release. 'Throughout every era, IU Health has consistently followed through — investing in infrastructure, talent and innovation to meet the evolving needs of the communities we proudly serve.'
Construction on both the cancer center and hospital is expected to begin in 2026, the release said, with both facilities projected to open in 2028.
The Greater Lafayette Community Growth Project will relocate the existing IU Health Cancer Center from its current location on 26th Street in Lafayette to a new state-of-the-art facility on the Arnett Hospital campus, the release said.
The new 55,000-square-foot Cancer Center will have room to provide essential infusion therapies to 32 patients simultaneously, the release said, a 23% increase over the current facility. The new center is intentionally designed to support whole-person care through integrative services such as art therapy, massage therapy, music therapy, support groups, yoga for relaxation and spiritual care.
The full-service IU Health West Lafayette hospital will offer a 24/7 emergency department, inpatient care, multiple operating rooms, a helipad for emergency transportation and advanced imaging and laboratory services, the release said.
"IU Health West Lafayette Hospital will expand access to preventive care, enhance specialty capabilities and improve outcomes across the continuum of care, bringing IU Health's leading-edge treatments and whole-person approach closer to home for residents of West Lafayette," the release said. "IU Health is the only health system with plans to offer inpatient hospital care in West Lafayette."
Alongside the new West Lafayette hospital, IU Health will also undertake an 8,000-square-foot renovation at the IU Health Medical Offices on Sagamore Parkway West in West Lafayette, the release said, expanding access to specialty medical care through the establishment of a multispecialty clinic in the building.
IU Health will also establish a new rapid access cardiology clinic at the same location, the release said, as well as a walk-in orthopedic clinic for same-day evaluation and treatment.
The Greater Lafayette Community Growth Project will bring more than 210 new full-time health-care jobs to the community by 2030, the release said, including 29 physicians, 10 advanced practice providers and 69 nurses, along with 100-plus other essential health-care positions.
In April, officials with Purdue University and Ascension St. Vincent also announced plans to bring a new medical facility to West Lafayette, a project that is moving forward again after a delay of more than two years.
That facility will be at the intersection of Airport Road and U.S. 231, according to a news release. Plans include space for advanced urgent care, imaging, labs and primary care physicians.
Jillian Ellison is a reporter for the Journal & Courier. She can be reached via email at jellison@gannett.com.
This article originally appeared on Lafayette Journal & Courier: IU Health to build West Lafayette's first full-service hospital