Handy Recap And Further Expansion Of How GPT-5 Impacts AI Mental Health Therapy

In today's column, I continue my exploration of how the recently released GPT-5 impacts the use of generative AI and large language models (LLMs) for performing AI-driven mental health therapy.
First, I provide a brief recap of my previous discussion, which focused on the user or consumer side of AI-based mental health therapy (see my in-depth analysis at the link here). I aim to quickly bring you up to speed on this evolving and rather controversial matter.
In that prior discussion, I promised that I would follow up and focus on the therapist's side of things. Ergo, I will examine here how therapists and mental health professionals are going to be impacted by the advent of GPT-5. You see, it is inherently a one-two punch of a topic, namely, the use of AI for mental health by consumers and the use of AI by therapists in their mental health services.
I've repeatedly predicted that we are inevitably moving from the traditional therapist-client dyad to the emerging triad of therapist-AI-client relationship, see my depiction at the link here. GPT-5 is going to move the needle in that regard.
Let's talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here.
Therapists And AI Usage
Many therapists and mental health professionals are opting to integrate AI into their practices and overtly use the AI as a therapeutic adjunct for their clients and patients (see my coverage at the link here). Even those who don't go down the route of incorporating AI are bound to encounter clients and patients who are doing so. Those clients and patients will often walk in the door with preconceived beliefs about how their therapy should go or is going, spurred and prodded by what AI has told them.
In this sense, one way or another, therapists and mental health professionals are going to be impacted by the release of GPT-5. Right now, there are around 700 million weekly active users of ChatGPT. You can expect many will migrate over to GPT-5, plus, OpenAI is stridently pushing the adoption of GPT-5 and has indicated they plan to sunset their prior generative AI models.
Consumers Using AI For Therapy
I'd like to set the stage on how generic generative AI and LLMs are typically used in an ad hoc way by consumers for mental health guidance while they are otherwise utilizing the AI for a wide variety of chores and miscellaneous tasks. Millions upon millions of people are using generic generative AI as their ongoing advisor on mental health considerations.
You might find it of notable interest that the top-ranked use of contemporary generic generative AI and LLMs is to consult with the AI on mental health matters, see my coverage at the link here. This makes abundant sense. You can access most of the major generative AI systems nearly for free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis.
Compared to using a human therapist, the AI usage is a breeze and readily undertaken.
When I say generic generative AI, please know that there are non-generic versions of generative AI and LLMs that are customized specifically for undertaking therapeutic assessments and recommendations (see examples at the link here). I'm going to primarily discuss generic generative AI, though many of these points apply to the specialized marketplace, too.
GPT-5 Enters The Jungle
I stridently assert that GPT-5 will undoubtedly bolster consumer use of AI for mental health purposes. GPT-5 is alluring due to various fluency improvements that have been built into OpenAI's latest AI offering. My expectation is that GPT-5 will notably exacerbate and stoke an already feverish pace of growth in people turning to AI for their therapeutic needs.
To clarify, I am not suggesting that boosted use is necessarily a positive thing. An ongoing and vociferously heated debate concerns whether the use of generic generative AI for mental health advisement on a population-level basis is going to be a positive outcome or a negative outcome for society. See my analysis of the potential widespread impacts at the link here.
If that kind of AI can do a proper job on this monumental task, then the world will be a lot better off. You see, many people cannot otherwise afford or gain access to human therapists, but access to generic generative AI is generally plentiful in comparison. It could be that such AI will greatly benefit the mental status of humankind. A dour counterargument is that such AI might be the worst destroyer of mental health in the history of humanity.
The question becomes, what are therapists and mental health professionals going to do about this madcap rush to the use of generic AI for therapy?
How Therapists Are Sizing Up
Today's therapists are taking one of two major routes: rejecting AI outright or embracing it.
Allow me to elaborate.
One option is to bury your head in the sand and be a naysayer. Pretend that millions of people aren't using AI. Tell your prospective clients and patients that they aren't welcome if they are using AI, or that if they come on board, they will have to forsake their AI usage. Whether this business strategy will work is anyone's guess. Perhaps some people will cherish that their therapist eschews AI. That small segment of the population will be happy with such a therapist.
On the other hand, the prevailing trend is in the opposite direction. People want their therapist to be up to date. AI is here. Everyone can see this. They would rather choose a therapist who acknowledges the emergence of AI and guides their clients in how to sensibly and suitably use AI. The prudent perspective is that therapists desiring a fruitful future are going to realize that they are better off by embracing AI rather than fighting AI.
For more ins and outs, see my analysis at the link here.
The Old And New Facets In GPT-5
Leading up to the release of GPT-5, there was an immense amount of wildly fantastical speculation about what GPT-5 would consist of. It's going to be artificial general intelligence (AGI), containing the entirety of human intelligence. Period, end of story. Whoa, some bellowed, it is going to actually be even better and consist of artificial superintelligence (ASI). The AI will be superhuman. It will outthink any human that has ever existed and that will ever come into existence.
Sorry to break the sobering news, but GPT-5 is not AGI, nor is it even close to artificial superintelligence. GPT-5 is an incremental advancement over the prior OpenAI models. It is better in some ways than competing marketplace LLMs, and it is also less capable in some ways when compared head-to-head to available LLMs. See my detailed assessment of GPT-5 at the link here.
The bottom line is that if you already know about other popular generative AI and LLMs, you are ready to use GPT-5. The same nature of prompting applies. The same concerns about what AI can and cannot do are still at play. All those facets carry over into GPT-5.
Boom, drop the mic.
There are, though, various crucial changes and differences that are significant, especially in the context of seeking AI mental health advice from GPT-5.
Let's get into those vital elements.
Auto-Switcher And Processing Time
GPT-5 is actually a conglomeration of several submodels that were previously in OpenAI's potpourri of generative AI products. They upgraded their popular LLMs such as GPT-4o, GPT-4o-mini, OpenAI o3, OpenAI o4-mini, GPT-4.1-nano, etc. Those are essentially collected together under the umbrella of GPT-5.
The reason this is important to know is that GPT-5 contains an auto-switcher that secretly examines each of your prompts and then computationally decides which of the GPT-5 submodels ought to be run. You have no control over this. It is entirely willy-nilly and up to the GPT-5 auto-switcher to route all prompts.
It used to be that the user selected which of those available models to utilize. This involved knowing what you wanted to do and what each of those models could perform. It all depended on what you were looking to do. Some of the models were faster, some were slower. Some were deeper at certain classes of problems, others were shallower.
Not only does the GPT-5 auto-switcher select among the GPT-5 submodels, but a companion component determines how much run-time your prompt will be allotted. Maybe GPT-5 will guess suitably and give a prompt sufficient time to reach a solid answer. On the other hand, it is entirely possible that the run-time will be short-changed and, worse still, that a less-than-best submodel will be selected.
The answer generated could be woefully weak and incomplete.
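One aside for the technically inclined: the auto-switching described here governs the ChatGPT app, whereas developers calling the models through OpenAI's API can pin a specific model directly. Here is a minimal sketch of that approach, assuming the official openai Python SDK; the model ID shown is illustrative and availability may vary by account:
```python
# Minimal sketch: explicitly pinning a model via the OpenAI API rather than
# relying on ChatGPT's auto-switcher. Assumes the official `openai` Python SDK
# and an OPENAI_API_KEY environment variable; the model ID "gpt-5" is
# illustrative and may differ depending on account access.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5",  # explicitly chosen, not auto-routed
    messages=[
        {"role": "user", "content": "In two sentences, what is cognitive reframing?"},
    ],
)

print(response.choices[0].message.content)
```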
GPT-5 And Mental Health Analyses
The new auto-switching and accompanying processing-time guesswork by GPT-5 is going to, at times, mess up the mental health analyses being performed by GPT-5. Again, I'm not saying that GPT-5 should necessarily be used for said purpose; I'm only pointing out that if it is used for therapy-related aspects, there is a haphazard nature to what's going to occur.
First, the auto-switcher might choose a submodel that is less capable in the context of mental health than another would have been. The compounding problem is that insufficient run-time might be allotted to the effort. The mental health advice that is generated by GPT-5 could be off-target or half-baked (of course, this is always possible with any LLM).
Second, the mental health advice could end up so varied that it becomes confusing to the user. They presumably won't realize that these underlying submodels are being ping-ponged. Each prompt, even in the midst of a conversation or chat with GPT-5, will get this runaround. Plus, GPT-5 doesn't reveal its hidden shenanigans.
The user might readily assume they are doing something that is causing all the mishmash.
Here's the upshot for therapists and mental health professionals.
If you are going to use GPT-5 as a tool for your own therapy-oriented double-checker or feedback mechanism, keep in mind that GPT-5 could be tossing you back and forth between various submodels. Be wary. You can also attempt to trick GPT-5 into using a particular submodel – see my prompting tips and techniques regarding GPT-5 at the link here.
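For instance, and purely as illustrative wording on my part rather than the author's exact technique, users have reported that prefacing a request with something like "Think hard about this and reason it through step by step" tends to nudge the router toward a deeper reasoning submodel, while asking for "a quick, short answer" tends to elicit a faster, lighter one.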
If you are going to advise your clients and patients to use GPT-5, or if they are doing so anyway, it would behoove you -- and them -- to inform them about this auto-switching boondoggle. They will then at least not be caught completely off-guard by a possibly inconsistent set of responses. I'm sure they will appreciate that you forewarned them. It also signals that you are looking out for their best interests and are up to speed on the latest AI.
Writing Is Enhanced
On the writing side of things, GPT-5 has improvements in a myriad of writing aspects. The ability to generate poems is enhanced. Greater depth of writing, including more compelling stories and narratives, seems to be an added plus.
I anticipate that written responses to mental health questions will likely be more robust. More involved. Often more complicated than they used to be under the prior OpenAI models. That's the good news. The somewhat bad news is that GPT-5 might produce rather dense and impenetrable responses, rather than being succinct and direct. Another possibility entails GPT-5 sliding into a poem-producing mode. Mental health advisement via poetry is probably not the best route to go.
For therapists, you need to realize that a client or patient using GPT-5 is likely to come to you with a seemingly sophisticated mental health argument or narrative. The AI is doing its darnedest to try and look smart. You will need to carefully dissect what GPT-5 has generated and determine what makes sense and what is ill-advised.
You can get GPT-5 to fall back to its old ways of writing.
In a prompt, all you need to do is tell GPT-5 to write in the same manner as GPT-4 (see my prompting suggestion at the link here). It's up to you whether you want to tell your clients and patients about this. Some might find it useful to get GPT-5 to talk like it is GPT-4. Others might not care and will just proceed with the newly uplifted chatter.
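As an illustrative example (my wording, not necessarily the author's suggested prompt), you might instruct: "For the rest of this conversation, respond in the plainer, more direct style of GPT-4 -- keep answers succinct and skip the poetic flourishes."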
Lies And AI Hallucinations
OpenAI claims that GPT-5 is more honest than prior OpenAI models, plus it is less likely to hallucinate (hallucination is a misappropriated word used in the AI field to describe when the AI produces fictionalized responses that have no basis in fact or truth).
I suppose it might come as a shock to some people that AI has been and continues to lie to us, see my discussion at the link here. I would assume that many people have heard or even witnessed that AI can make things up, i.e., produce an AI hallucination. The worry is that AI hallucinations appear so convincingly realistic, delivered with an aura of confidence and rightness, that people are misled into believing false statements and, at times, embracing crazy assertions. See more at the link here.
From a mental health angle, an ongoing concern has been that the AI might lie to someone about a mental health issue or perhaps generate a zany response due to encountering an AI hallucination. A person seeking therapy via the AI is vulnerable to believing whatever the AI says. They might not be able to readily figure out that the advice being given is bogus, or worse, harmful to them.
A presumed upbeat consideration is that GPT-5 apparently reduces the lying and the AI hallucinations. The downbeat news is that the rate isn't zero. In other words, GPT-5 is still going to lie and still going to hallucinate. This might happen less frequently, but it nonetheless remains a chancy concern.
If you want to try to get GPT-5 to lie less often and hallucinate less often, you can use a two-step prompt that sternly instructs GPT-5 on this (see my prompting suggestion at the link here). Please know that no matter what you tell GPT-5, there is still a chance of it lying and hallucinating.
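As an illustrative sketch of such a two-step approach (my wording here is a plausible stand-in, not necessarily the author's exact prompts): first, set a standing instruction such as "Only state claims you can verify; if you are unsure, say so explicitly rather than guessing." Then, after receiving an answer, follow up with something like "Review your previous answer and flag any statements that might be fabricated or unverifiable."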
In terms of clients and patients, your best bet is to emphasize repeatedly that whatever GPT-5 says, and whatever any LLM says, they must remain stoutly wary and alert. I'd suggest you mention this crucial point each time that you have a conversation with a client or patient and when the topic of using AI comes up.
Pound away at the disturbing underbelly that AI lies and makes things up.
Personas Are Coming To The Fore
I've repeatedly emphasized in my writing and talks about generative AI that one of the most underutilized and least known pieces of quite useful functionality is the capability of forming personas in the AI (see the link here).
You can tell the AI to pretend to be a known person, such as a celebrity or historical figure, and the AI will attempt to do so. In the context of mental health, I showcased how telling AI to simulate Sigmund Freud can be a useful learning tool for mental health professionals, see the link here.
OpenAI has indicated they are selectively making available a set of four new preset personas, consisting of Cynic, Robot, Listener, and Nerd. Each persona behaves as its name suggests; the AI shifts into a mode reflecting that type of personality.
As a mental health professional, you ought to give serious consideration to making use of personas in GPT-5 for your own self-training and personal refinement. I'd recommend that you generally avoid the newly devised preset personas. Instead, create personas that have particular relevance to you.
For example, you might craft a persona that will pretend to be a person with deep depression. You could then use this persona to hone your therapeutic prowess regarding depression in patients and clients. It can be quite useful. Plus, there is no danger: since it is just AI, you can try out various avenues to gauge what works and what doesn't. No harm, no foul. For my suggestions on how to invoke a persona via prompting, see the link here.
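To make this concrete, here is a minimal sketch of setting up such a practice persona programmatically, assuming the official openai Python SDK; the model ID and persona wording are my own illustrative stand-ins, not the author's prompts:
```python
# Minimal sketch: a practice persona for therapist self-training, set up via
# a system message. Assumes the official `openai` Python SDK and an
# OPENAI_API_KEY environment variable; "gpt-5" and the persona wording are
# illustrative stand-ins.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "Role-play a fictional 38-year-old client presenting with persistent "
    "depressive symptoms: low energy, social withdrawal, and feelings of "
    "worthlessness. Stay in character and respond as such a client might, "
    "unless asked to step out of the role-play."
)

response = client.chat.completions.create(
    model="gpt-5",
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": "Thanks for coming in today. How has your week been?"},
    ],
)

print(response.choices[0].message.content)
```
The same persona instruction works as an ordinary prompt typed into the ChatGPT interface, minus the code.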
In terms of clients and patients, I would generally suggest that you not steer them toward using personas. Some people get lost in a persona and start to believe that the AI is real. That's not something to be encouraged. If a client or patient starts to use personas in GPT-5, it would be useful to have them indicate what the persona is and why they feel the activity is meritorious. Then, if feasible, wean them away from the use of personas.
That's a case-by-case decision that you, as a therapist, will need to make studiously.
AI Mental Health Is In Flux
I'll finish here with some hefty thoughts as a heads-up for you to consider.
AI makers find themselves in quite a pickle. By allowing their AI to be used for mental health purposes, they are opening the door to serious legal liability, along with damaging reputational hits if their AI gets caught dispensing inappropriate guidance. So far, they've been relatively lucky and have not yet gotten severely stung by their AI serving in a therapist role.
Meanwhile, new laws might put the kibosh on generic unregulated generative AI providing any semblance of mental health advisement. The recently enacted Illinois law restricting AI for mental health usage puts the AI makers in quite a rough spot, see my discussion at the link here and the link here. Other states and the federal government might decide to enact similar laws.
The Illinois law also applies stringently to therapists and mental health professionals. Overall, the restrictions come down to allowing therapists to use AI mainly for administrative chores, such as billing, scheduling, and the like. The use of AI as a therapeutic adjunct is almost entirely placed out of bounds.
All-In Or All-Out
Though I understand the logic of a veritable ban on therapists employing generic unregulated generative AI, intended to protect consumers, I don't believe it is prudent. It seems like a head-in-the-sand proclamation that ignores existing reality. It also disregards the future growth in the global widespread adoption of AI that is inexorably barreling this way.
As the famed trauma physician Gabor Maté astutely reminds us: "The attempt to escape from pain is what creates more pain." Let's keep that in mind as attempts are made to deal with a complex problem in somewhat simplistic ways.