OpenAI Takes Calculated Move To Navigate Treacherous Waters Of ChatGPT AI Giving Out Mental Health Advice


Forbes
In today's column, I examine the newly announced changes to ChatGPT by OpenAI that are being undertaken to navigate the treacherous waters of AI producing mental health advice.
I say that this is a treacherous predicament because generative AI and large language models (LLMs) can potentially produce unsound psychological advice and even generate therapeutically harmful guidance. The difficulty for AI makers is that, on the one hand, they relish that users are flocking to AI for mental health insights, yet at the same time, they desperately want to avoid the legal liability and reputational ruin that would ensue if their AI spurs people toward mental harm rather than mental well-being.
Let's talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here.
OpenAI Latest Announcement
An announcement on August 4, 2025, by OpenAI entitled 'What we're optimizing ChatGPT for' provides intriguing clues as to the business and societal struggles of allowing generative AI and LLMs to provide mental health advice. I will, in a moment, walk you through the key points and explain the backstory on what is really going on.
Here is some important context.
To begin with, it's important to recognize that one of the most popular ways people are using generative AI today is for addressing mental health concerns, see my coverage at the link here. This trend is understandable. Generative AI is widely accessible, typically low-cost or free, and available around the clock. Anyone feeling mentally unsettled or concerned can simply log in and start a conversation, any time of day or night. This is in sharp contrast to the effort and expense involved in seeing a human therapist.
Next, generative AI is willing to engage on mental health topics endlessly.
There are essentially no time limits, no hesitation, no steep hourly rates ticking away. In fact, these systems are often designed to be highly affirming and agreeable, sometimes to the point of excessive flattery. I've pointed out before that this kind of overly supportive interaction can sidestep the harder truths and constructive confrontation that are sometimes essential in genuine mental health counseling (see my analysis at the link here).
Finally, the companies behind these AI systems are in a tricky spot. By allowing their models to respond to mental health issues, they're potentially exposing themselves to serious legal risks and reputational damage. This is especially the case if the AI dispenses unhelpful or harmful advice. Thus far, they've managed to avoid any major public fallout, but the danger is constantly looming as these LLMs continue to be used in quasi-therapeutic roles.
AI Makers Making Do
You might wonder why the AI makers don't just shut off the capability of their AI to produce mental health insights. That would solve the problem of the business exposures involved. Well, as noted above, this is the top attractor for people to use generative AI. Shutting it off would mean sacrificing the cash cow, akin to capping an oil well that is gushing out liquid gold.
An imprudent strategy.
The next best thing to do is to attempt to minimize the risks and hope that the gusher can keep flowing.
One aspect that the AI makers have already undertaken is to emphasize in their online licensing agreements that users aren't supposed to use the AI for mental health advice, see my coverage at the link here. The aim is that by telling users not to use the AI in this manner, perhaps the AI maker can shield itself from adverse exposure. The thing is, despite the warnings, the AI makers often do whatever they can to essentially encourage or support the use of their AI in the very capacity they tell users to avoid.
Some would insist this is a wink-wink ploy of playing both sides of the fence at the same time, see my discussion at the link here.
In any case, AI makers are cognizant that since they are allowing their AI to be used for therapy, they ought to try to keep the AI somewhat in check. Doing so might minimize their risks, or at least serve as later evidence that they made a yeoman's effort to do the right thing. Meanwhile, they can hold their heads high by taking overt steps to seemingly reduce the potential for harm and improve the chances of being beneficial.
ChatGPT Latest Changes
In the official OpenAI blog post 'What we're optimizing ChatGPT for,' published on August 4, 2025, these notable points were made (excerpts):
Let's go ahead and unpack some of these points.
Detecting Mental Health Signs
A frequent criticism of using AI to perform mental health therapy is that the AI might computationally overlook serious signs exhibited by a user, perhaps involving delusions, dependencies, and other crucial cognitive conditions.
For example, a user might say that they are interested in learning about depression, and at the same time be asking about ways that people have tried to hurt themselves. A human therapist would almost certainly pick up on the dark undertones and want to explore the potential for self-harm. AI might not make that kind of human behavioral connection and blab endlessly on the topics without catching the drift of the intentions involved.
Even if the AI does manage to discern a potential issue, the question arises as to what the AI should do about it.
Some would argue that the AI should immediately report the person to mental health authorities. The problem there is that a ton of false reports are bound to be made. It could be a tsunami of misreports. In addition, people are going to be steamed that the AI snitched on them, especially if they were chatting innocently and nothing in their remarks had any dire bearing.
Another angle is that perhaps the AI should engage the user in a dialogue about their possible mental health condition. The problem there is that the AI is essentially rendering a therapeutic diagnosis. Again, this could be erroneous. In addition, people who aren't in that plight would undoubtedly be upset that the AI has turned in that direction. A strident case could be made that the AI might be causing greater mental disturbance than providing genuine help.
Here's where things are.
Research is underway to improve how AI makes these kinds of tough calls and does so in a sensible, evidence-based fashion, see my discussion at the link here. Step by step, it is anticipated that advances in AI for mental health will gradually diminish the false positives and false negatives, making the AI a more reliable and steady diagnostic tool.
Asking Questions Vs. Giving Answers
The way that generative AI and LLMs have been shaped by AI makers is to generally give a user an answer, handing it to them on a silver platter. The belief is that users do not want to slog through a series of back-and-forth exchanges. They want the bottom line.
If a user asks a question, quickly and without hesitation, plop out an answer.
An issue with this approach is that the AI might not have sufficient info from the user to produce an adequate answer. There is a tradeoff involved. An AI that asks a lot of follow-up questions to a given question is likely to be irritating to users. The users will opt to use some other AI. They expect a fast-food delivery of answers. Period, end of story.
In the case of mental health circumstances, it is rare that a one-and-done approach will be suitable. Think about how therapists work. They use what is commonly known as talk therapy. They talk with a client or patient to get the needed details. The aim is not to just rush to judgment. But AI is programmed by AI makers to purposely rush to judgment in order to spit out an answer to any given question.
For more on the similarities and differences between AI-based therapy and the work of human therapists, see my analysis at the link here.
We are gradually witnessing a bit of a shift in the philosophy and business practice of having AI provide instantaneous knee-jerk responses. OpenAI recently released its ChatGPT Study Mode, a feature that allows learners and students to be led incrementally by AI toward arriving at answers. A complaint about AI used by students is that the students don't do any of the heavy lifting. The idea is that users can activate AI into a teaching mode and, ergo, no longer be trapped in the instant oatmeal conundrum. For the details of how the learner-oriented ChatGPT Study Mode works, see my explanation at the link here.
The same overarching principles of collaborative interactivity can readily be applied to mental health guidance. When a user says they have this or that mental concern, the AI doesn't necessarily have to leap to an abrupt conclusion. Instead, the AI can shift into a more interactive mode of trying to adroitly ferret out more details and remain engaging and informative.
This is referred to in the AI field as enabling new behaviors for AI that are particularly fruitful in high-stakes situations, such as when someone is faced with a mental health concern or other paramount personal decision. You can expect to see the major AI makers increasingly adopting these kinds of add-on or supplemental innovative approaches into their LLMs.
Mental Health Booster Or Buster
An ongoing debate entails whether the use of AI for mental health advisement on a population-level basis is going to be a positive outcome or a negative outcome.
We are immersed in a grand experiment right now. Millions and possibly billions of people across the globe are turning to everyday AI to air their mental health concerns and possibly seek remedies accordingly. Everyone is a guinea pig in an unguided, uncontrolled experiment of sweeping impact.
What does this bode for the near-term mental health of the population? And what about the longer-term impacts on the human mind and societal interactions? See my analysis of the potential widespread impacts at the link here.
If AI can do a proper job on this heady task, then the world will be a lot better off. You see, many people cannot otherwise afford or gain access to human therapists, but access to AI is generally plentiful in comparison. It could be that AI for mental health will greatly benefit the mental status of humankind. A dour counterargument is that AI might be the worst destroyer of mental health in the history of humanity. Boom, drop the mic.
Nobody knows for sure, and only time will tell.
As per the famous words of Marcus Aurelius: 'The happiness of your life depends upon the quality of your thoughts.' Whether AI and our shaping of AI will take us in the proper direction is the zillion-dollar question of the hour.