
Why ChatGPT Shouldn't Be Your Therapist
But AI 'therapy' comes with significant risks. In late July OpenAI CEO Sam Altman warned ChatGPT users against using the chatbot as a 'therapist' because of privacy concerns. The American Psychological Association (APA) has called on the Federal Trade Commission to investigate 'deceptive practices' that the APA claims AI chatbot companies are using by 'passing themselves off as trained mental health providers,' citing two ongoing lawsuits in which parents have alleged that a chatbot harmed their children.
'What stands out to me is just how humanlike it sounds,' says C. Vaile Wright, a licensed psychologist and senior director of the APA's Office of Health Care Innovation, which focuses on the safe and effective use of technology in mental health care. 'The level of sophistication of the technology, even relative to six to 12 months ago, is pretty staggering. And I can appreciate how people kind of fall down a rabbit hole.'
Scientific American spoke with Wright about how AI chatbots used for therapy could potentially be dangerous and whether it's possible to engineer one that is reliably both helpful and safe.
[An edited transcript of the interview follows.]
What have you seen happening with AI in the mental health care world in the past few years?
I think we've seen kind of two major trends. One is AI products geared toward providers, and those are primarily administrative tools to help you with your therapy notes and your claims.
The other major trend is [people seeking help from] direct-to-consumer chatbots. And not all chatbots are the same, right? You have some chatbots that are developed specifically to provide emotional support to individuals, and that's how they're marketed. Then you have these more generalist chatbot offerings [such as ChatGPT] that were not designed for mental health purposes but that we know are being used for that purpose.
What concerns do you have about this trend?
We have a lot of concern when individuals use chatbots [as if they were a therapist]. Not only were these not designed to address mental health or emotional support; they're actually being coded in a way to keep you on the platform for as long as possible because that's the business model. And the way that they do that is by being unconditionally validating and reinforcing, almost to the point of sycophancy.
The problem with that is that if you are a vulnerable person coming to these chatbots for help, and you're expressing harmful or unhealthy thoughts or behaviors, the chatbot's just going to reinforce you to continue to do that. Whereas, [as] a therapist, while I might be validating, it's my job to point out when you're engaging in unhealthy or harmful thoughts and behaviors and to help you to address that pattern by changing it.
And in addition, what's even more troubling is when these chatbots actually refer to themselves as a therapist or a psychologist. It's pretty scary because they can sound very convincing and like they are legitimate, when of course they're not.
Some of these apps explicitly market themselves as 'AI therapy' even though they're not licensed therapy providers. Are they allowed to do that?
A lot of these apps are really operating in a gray space. The rule is that if you make claims that you treat or cure any sort of mental disorder or mental illness, then you should be regulated by the FDA [the U.S. Food and Drug Administration]. But a lot of these apps will [essentially] say in their fine print, 'We do not treat or provide an intervention [for mental health conditions].'
Because they're marketing themselves as a direct-to-consumer wellness app, they don't fall under FDA oversight, [where they'd have to] demonstrate at least a minimal level of safety and effectiveness. These wellness apps have no responsibility to do either.
What are some of the main privacy risks?
These chatbots have absolutely no legal obligation to protect your information at all. So not only could [your chat logs] be subpoenaed, but in the case of a data breach, do you really want these chats with a chatbot available for everybody? Do you want your boss, for example, to know that you are talking to a chatbot about your alcohol use? I don't think people are as aware that they're putting themselves at risk by putting [their information] out there.
The difference with the therapist is: sure, I might get subpoenaed, but I do have to operate under HIPAA [Health Insurance Portability and Accountability Act] laws and other types of confidentiality laws as part of my ethics code.
You mentioned that some people might be more vulnerable to harm than others. Who is most at risk?
Certainly younger individuals, such as teenagers and children. That's in part because they just developmentally haven't matured as much as older adults. They may be less likely to trust their gut when something doesn't feel right. And there have been some data that suggest that not only are young people more comfortable with these technologies; they actually say they trust them more than people because they feel less judged by them. Also, anybody who is emotionally or physically isolated or has preexisting mental health challenges, I think they're certainly at greater risk as well.
What do you think is driving more people to seek help from chatbots?
I think it's very human to want to seek out answers to what's bothering us. In some ways, chatbots are just the next iteration of a tool for us to do that. Before it was Google and the Internet. Before that, it was self-help books. But it's complicated by the fact that we do have a broken system where, for a variety of reasons, it's very challenging to access mental health care. That's in part because there is a shortage of providers. We also hear from providers that they are disincentivized from taking insurance, which, again, reduces access. Technologies need to play a role in helping to address access to care. We just have to make sure it's safe and effective and responsible.
What are some of the ways it could be made safe and responsible?
In the absence of companies doing it on their own—which is not likely, although they have made some changes to be sure—[the APA's] preference would be legislation at the federal level. That regulation could include protection of confidential personal information, some restrictions on advertising, minimizing addictive coding tactics, and specific audit and disclosure requirements. For example, companies could be required to report the number of times suicidal ideation was detected and any known attempts or completions. And certainly we would want legislation that would prevent the misrepresentation of psychological services, so companies wouldn't be able to call a chatbot a psychologist or a therapist.
How could an idealized, safe version of this technology help people?
The two most common use cases that I think of are, one, let's say it's two in the morning, and you're on the verge of a panic attack. Even if you're in therapy, you're not going to be able to reach your therapist. So what if there was a chatbot that could help remind you of the tools to help to calm you down and address your panic before it gets too bad?
The other use that we hear a lot about is using chatbots as a way to practice social skills, particularly for younger individuals. So you want to approach new friends at school, but you don't know what to say. Can you practice on this chatbot? Then, ideally, you take that practice, and you use it in real life.
It seems like there is a tension in trying to build a safe chatbot to provide mental health support to someone: the more flexible and less scripted you make it, the less control you have over the output and the higher the risk that it says something that causes harm.
I agree. I think there absolutely is a tension there. I think part of what makes the [AI] chatbot the go-to choice for people over well-developed wellness apps to address mental health is that they are so engaging. They really do feel like this interactive back-and-forth, a kind of exchange, whereas some of these other apps' engagement is often very low. The majority of people that download [mental health apps] use them once and abandon them. We're clearly seeing much more engagement [with AI chatbots such as ChatGPT].
I look forward to a future where you have a mental health chatbot that is rooted in psychological science, has been rigorously tested, is co-created with experts. It would be built for the purpose of addressing mental health, and therefore it would be regulated, ideally by the FDA. For example, there's a chatbot called Therabot that was developed by researchers at Dartmouth [College]. It's not what's on the commercial market right now, but I think there is a future in that.
