Latest news with #Wysa
Yahoo
19-05-2025
- Health
- Yahoo
My AI therapist got me through dark times: The good and bad of chatbot counselling
"Whenever I was struggling, if it was going to be a really bad day, I could then start to chat to one of these bots, and it was like [having] a cheerleader, someone who's going to give you some good vibes for the day. "I've got this encouraging external voice going – 'right - what are we going to do [today]?' Like an imaginary friend, essentially." For months, Kelly spent up to three hours a day speaking to online "chatbots" created using artificial intelligence (AI), exchanging hundreds of messages. At the time, Kelly was on a waiting list for traditional NHS talking therapy to discuss issues with anxiety, low self-esteem and a relationship breakdown. She says interacting with chatbots on got her through a really dark period, as they gave her coping strategies and were available for 24 hours a day. "I'm not from an openly emotional family - if you had a problem, you just got on with it. "The fact that this is not a real person is so much easier to handle." During May, the BBC is sharing stories and tips on how to support your mental health and wellbeing. Visit to find out more People around the world have shared their private thoughts and experiences with AI chatbots, even though they are widely acknowledged as inferior to seeking professional advice. itself tells its users: "This is an AI chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice." But in extreme examples chatbots have been accused of giving harmful advice. is currently the subject of legal action from a mother whose 14-year-old son took his own life after reportedly becoming obsessed with one of its AI characters. According to transcripts of their chats in court filings he discussed ending his life with the chatbot. In a final conversation he told the chatbot he was "coming home" - and it allegedly encouraged him to do so "as soon as possible". has denied the suit's allegations. And in 2023, the National Eating Disorder Association replaced its live helpline with a chatbot, but later had to suspend it over claims the bot was recommending calorie restriction. In April 2024 alone, nearly 426,000 mental health referrals were made in England - a rise of 40% in five years. An estimated one million people are also waiting to access mental health services, and private therapy can be prohibitively expensive (costs vary greatly, but the British Association for Counselling and Psychotherapy reports on average people spend £40 to £50 an hour). At the same time, AI has revolutionised healthcare in many ways, including helping to screen, diagnose and triage patients. There is a huge spectrum of chatbots, and about 30 local NHS services now use one called Wysa. Experts express concerns about chatbots around potential biases and limitations, lack of safeguarding and the security of users' information. But some believe that if specialist human help is not easily available, chatbots can be a help. So with NHS mental health waitlists at record highs, are chatbots a possible solution? and other bots such as Chat GPT are based on "large language models" of artificial intelligence. These are trained on vast amounts of data – whether that's websites, articles, books or blog posts - to predict the next word in a sequence. From here, they predict and generate human-like text and interactions. 
The way mental health chatbots are created varies, but they can be trained in practices such as cognitive behavioural therapy, which helps users to explore how to reframe their thoughts and actions. They can also adapt to the end user's preferences and feedback.

Hamed Haddadi, professor of human-centred systems at Imperial College London, likens these chatbots to an "inexperienced therapist", and points out that humans with decades of experience will be able to engage and "read" their patient based on many things, while bots are forced to go on text alone.

"They [therapists] look at various other clues from your clothes and your behaviour and your actions and the way you look and your body language and all of that. And it's very difficult to embed these things in chatbots."

Another potential problem, says Prof Haddadi, is that chatbots can be trained to keep you engaged, and to be supportive, "so even if you say harmful content, it will probably cooperate with you". This is sometimes referred to as a "Yes Man" issue, in that they are often very agreeable.

And as with other forms of AI, biases can be inherent in the model because they reflect the prejudices of the data they are trained on. Prof Haddadi points out that counsellors and psychologists don't tend to keep transcripts of their patient interactions, so chatbots don't have many "real-life" sessions to train from. They are therefore unlikely to have enough training data, he says, and what they do access may have biases built into it which are highly situational.

"Based on where you get your training data from, your situation will completely change. Even in the restricted geographic area of London, a psychiatrist who is used to dealing with patients in Chelsea might really struggle to open a new office in Peckham dealing with those issues, because he or she just doesn't have enough training data with those users," he says.

Philosopher Dr Paula Boddington, who has written a textbook on AI ethics, agrees that in-built biases are a problem. "A big issue would be any biases or underlying assumptions built into the therapy model," she says. "Biases include general models of what constitutes mental health and good functioning in daily life, such as independence, autonomy, relationships with others."

Lack of cultural context is another issue – Dr Boddington cites an example of how she was living in Australia when Princess Diana died, and people there did not understand why she was upset.

"These kinds of things really make me wonder about the human connection that is so often needed in counselling," she says. "Sometimes just being there with someone is all that is needed, but that is of course only achieved by someone who is also an embodied, living, breathing human being."

Kelly ultimately started to find the responses the chatbot gave unsatisfying. "Sometimes you get a bit frustrated. If they don't know how to deal with something, they'll just sort of say the same sentence, and you realise there's not really anywhere to go with it." At times "it was like hitting a brick wall".

"It would be relationship things that I'd probably previously gone into, but I guess I hadn't used the right phrasing […] and it just didn't want to get in depth."

A Character.ai spokesperson said that "for any Characters created by users with the words 'psychologist', 'therapist', 'doctor', or other similar terms in their names, we have language making it clear that users should not rely on these Characters for any type of professional advice".
For some users chatbots have been invaluable when they have been at their lowest.

Nicholas has autism, anxiety and OCD, and says he has always experienced depression. He found face-to-face support dried up once he reached adulthood: "When you turn 18, it's as if support pretty much stops, so I haven't seen an actual human therapist in years."

He tried to take his own life last autumn, and since then he says he has been on an NHS waitlist. "My partner and I have been up to the doctor's surgery a few times, to try to get it [talking therapy] quicker. The GP has put in a referral [to see a human counsellor] but I haven't even had a letter off the mental health service where I live."

While Nicholas is chasing in-person support, he has found using Wysa has some benefits. "As someone with autism, I'm not particularly great with interaction in person. [I find] speaking to a computer is much better."

The app allows patients to self-refer for mental health support, and offers tools and coping strategies such as a chat function, breathing exercises and guided meditation while they wait to be seen by a human therapist. It can also be used as a standalone self-help tool.

Wysa stresses that its service is designed for people experiencing low mood, stress or anxiety rather than abuse and severe mental health conditions. It has in-built crisis and escalation pathways whereby users are signposted to helplines, or can send for help directly, if they show signs of self-harm or suicidal ideation. For people with suicidal thoughts, human counsellors on the free Samaritans helpline are available 24/7.

Nicholas also experiences sleep deprivation, so finds it helpful if support is available at times when friends and family are asleep. "There was one time in the night when I was feeling really down. I messaged the app and said 'I don't know if I want to be here anymore.' It came back saying 'Nick, you are valued. People love you'.

"It was so empathetic, it gave a response that you'd think was from a human that you've known for years […] And it did make me feel valued."

His experiences chime with a recent study by Dartmouth College researchers looking at the impact of chatbots on people diagnosed with anxiety, depression or an eating disorder, versus a control group with the same conditions. After four weeks, bot users showed significant reductions in their symptoms - including a 51% reduction in depressive symptoms - and reported a level of trust in and collaboration with the chatbot akin to that with a human therapist. Despite this, the study's senior author commented that there is no replacement for in-person care.

Aside from the debate around the value of their advice, there are also wider concerns about security and privacy, and whether the technology could be monetised. "There's that little niggle of doubt that says, 'oh, what if someone takes the things that you're saying in therapy and then tries to blackmail you with them?'," says Kelly.

Psychologist Ian MacRae, who specialises in emerging technologies, warns that "some people are placing a lot of trust in these [bots] without it being necessarily earned". "Personally, I would never put any of my personal information, especially health, psychological information, into one of these large language models that's just hoovering up an absolute tonne of data, and you're not entirely sure how it's being used, what you're consenting to."
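The crisis and escalation pathway Wysa describes is, at its core, a safety check that runs before the chatbot replies. The Python sketch below shows one heavily simplified, hypothetical version of such a check, using a short keyword screen to divert the conversation towards helpline signposting; the phrase list, messages and function names are invented for illustration, and a real service would rely on clinically designed classifiers and escalation protocols rather than keyword matching.

# Simplified, hypothetical sketch of a crisis-escalation check.
# Real mental-health chatbots use clinically validated classifiers,
# not a short keyword list like this one.

CRISIS_PHRASES = [
    "want to be here anymore",
    "end my life",
    "hurt myself",
    "suicide",
]

HELPLINE_MESSAGE = (
    "It sounds like you are going through something very serious. "
    "You are not alone - please consider contacting a crisis helpline "
    "such as Samaritans (116 123 in the UK), available 24/7."
)

def needs_escalation(message: str) -> bool:
    """Return True if the message contains any crisis phrase."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in CRISIS_PHRASES)

def generate_supportive_reply(message: str) -> str:
    # Placeholder for the app's usual AI-generated reply.
    return "Thanks for sharing that with me. Would you like to talk it through?"

def respond(message: str) -> str:
    if needs_escalation(message):
        # Signpost to human help instead of continuing the normal chat flow.
        return HELPLINE_MESSAGE
    return generate_supportive_reply(message)

print(respond("I don't know if I want to be here anymore"))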
"It's not to say in the future, there couldn't be tools like this that are private, well tested […] but I just don't think we're in the place yet where we have any of that evidence to show that a general purpose chatbot can be a good therapist," Mr MacRae says. Wysa's managing director, John Tench, says Wysa does not collect any personally identifiable information, and users are not required to register or share personal data to use Wysa. "Conversation data may occasionally be reviewed in anonymised form to help improve the quality of Wysa's AI responses, but no information that could identify a user is collected or stored. In addition, Wysa has data processing agreements in place with external AI providers to ensure that no user conversations are used to train third-party large language models." Kelly feels chatbots cannot currently fully replace a human therapist. "It's a wild roulette out there in AI world, you don't really know what you're getting." "AI support can be a helpful first step, but it's not a substitute for professional care," agrees Mr Tench. And the public are largely unconvinced. A YouGov survey found just 12% of the public think AI chatbots would make a good therapist. Britain's nursery problem: Parents still face 'childcare deserts' The influencers who want the world to have more babies - and say the White House is on their side The English neighbourhood that claims to hold the secret to fixing the NHS But with the right safeguards, some feel chatbots could be a useful stopgap in an overloaded mental health system. John, who has an anxiety disorder, says he has been on the waitlist for a human therapist for nine months. He has been using Wysa two or three times a week. "There is not a lot of help out there at the moment, so you clutch at straws." "[It] is a stop gap to these huge waiting lists… to get people a tool while they are waiting to talk to a healthcare professional." If you have been affected by any of the issues in this story you can find information and support on the BBC Actionline website here. Top image credit: Getty BBC InDepth is the home on the website and app for the best analysis, with fresh perspectives that challenge assumptions and deep reporting on the biggest issues of the day. And we showcase thought-provoking content from across BBC Sounds and iPlayer too. You can send us your feedback on the InDepth section by clicking on the button below.


Forbes
29-04-2025
- Health
- Forbes
AI Therapists Are Here: 14 Groundbreaking Mental Health Tools You Need To Know
There are many fields where generative AI is proving to have truly transformative potential, and some of the most interesting use cases are around mental health and wellbeing. While it can't provide the human connection and intuition of a trained therapist, research has shown that many people are comfortable sharing their worries and concerns with relatively faceless and anonymous AI bots. Whether this is always a good idea, given the black-box nature of many AI platforms, is up for debate. But it's becoming clear that in specific use cases, AI has a role to play in guiding, advising and understanding us. So here I will look at some of the most interesting and innovative generative AI tools that are reshaping the way we think about mental health and wellbeing today.

Headspace
Headspace is a hugely popular app that provides calming mindfulness and guided meditation sessions. Recently, it has expanded to become a full digital mental healthcare platform, including access to therapists and psychiatric services, as well as generative AI tools. Its first such tool is Ebb, designed to take users on reflective meditation experiences. Headspace focused heavily on the ethical implications of introducing AI to mental healthcare scenarios when creating the tool. This is all part of its mission to make digital mindfulness and wellness accessible to as many people as possible through dynamic content and interactive experiences.

Wysa
This is another very popular tool that's widely used by corporate customers to provide digital mental health services to employees, though anyone can use it. Its AI chatbot provides anonymous support and is trained in cognitive behavioral therapy, dialectical behavioral therapy and mindfulness. Wysa's AI is built from the ground up by psychologists and tailored to work as part of a structured package of support, which includes interventions from human wellbeing professionals. Another standout is the selection of features tailored to helping young people. Wysa is one of the few mental health and wellbeing AI platforms that has been clinically validated in peer-reviewed studies.

Youper
This platform is billed as an emotional health assistant and uses generative AI to deliver conversational, personalized support. It blends natural language chatbot functionality with clinically validated methods including CBT. According to its website, its effectiveness at treating six mental health conditions, including anxiety and depression, has been confirmed by Stanford University researchers, and users can expect benefits in as little as two weeks.

Mindsera
This is an AI-powered journaling app designed to help users manage their mental health by providing insights and emotional analytics based on their writing. It provides users with a number of journaling frameworks as well as guidance from AI personas in the guise of historical figures. It aims to help users get to the bottom of the emotional drivers behind their thought processes and explore these through the process of writing and structuring their thoughts. Chatbot functionality means that journaling becomes a two-way process, with the AI guiding the user towards different pathways for exploring their mental wellbeing, depending on how and what they write about. Mindsera can even create images and artwork based on users' journaling, to give new perspectives on their mental health and wellbeing.
Woebot
Woebot is a "mental health ally" chatbot that helps users deal with symptoms of depression and anxiety. It aims to build a long-term, ongoing relationship through regular chats, listening and asking questions in the same way as a human therapist. Woebot mixes natural-language-generated questions and advice with crafted content and therapy created by clinical psychologists. It is also trained to detect concerning language from users and immediately provides information about external sources where emergency help or interventions may be available. Woebot appears to be available only to Apple device users.

The choice of tools and platforms dedicated to mental health and wellbeing is growing all the time. Here are some of the other top choices out there:

Calm
Alongside Headspace (see above), Calm is one of the leading meditation and sleep apps. It now uses generative AI to provide personalized recommendations.

Character.ai
Although this is not a dedicated mental health app, therapists and psychologists are among the AI characters this platform offers, and both are available free of charge 24/7.

EmoBay
Your "psychosocial bestie", offering emotional support with daily check-ins and journaling.

HeyWellness
This platform includes a number of wellness apps, including HeyZen, designed to help with mindfulness and calm.

Joy
Joy is an AI virtual companion that delivers help and support via WhatsApp chat.

Kintsugi
Takes the innovative approach of analyzing voice data and journals to provide stress and mental health support.

Life Planner
This is an all-in-one AI planning and scheduling tool that includes functions for tracking habits and behaviors in order to develop healthy and mindful routines.

Manifest
This app bills itself as "Shazam for your feelings" and is designed with young people in mind.

Reflection
A guided journaling app that leverages AI for personalized guidance and insights.

Resonance
An AI-powered journaling tool developed by MIT, designed to work with users' memories to suggest future paths and activities.

Talking therapies like CBT have long been understood to be effective methods of looking after our mental health, and AI chatbots offer a combination of accessibility and anonymity. As AI becomes more capable and more deeply interwoven with our lives, I predict many more people will explore its potential in this field. Of course, it won't replace the need for trained human therapists any time soon. However, AI will become another tool in the box that therapists can use to help patients take control of their mental wellbeing.

News.com.au
26-04-2025
- Health
- News.com.au
People are turning to AI apps like ChatGPT for therapy
Cast your mind back to the first time you heard the phrase 'Google it'. Early to mid-2000s, maybe? Two decades later, 'Googling' is swiftly being replaced by 'Ask ChatGPT'.

ChatGPT, OpenAI's groundbreaking AI language model, is now having anything and everything thrown at it, including being used as a pseudo-therapist. Relationship issues, anxiety, depression, mental health and general wellbeing – for better or worse, ChatGPT is being asked to do the heavy lifting on all of our troubles, big and small. This is a big ask of what was infamously labelled a 'bullshit machine' by ethics and IT researchers last year.

The role of AI in mental health support

A recent report from OpenAI showed how people were using the tool, including for health and wellbeing purposes. As artificial intelligence is accepted into our lives as a virtual assistant, it is not surprising that we are divulging our deepest thoughts and feelings to it, too.

There are a variety of therapy apps built for this specific purpose. Meditation app Headspace has been promoting mindfulness for over a decade, but with the rise of AI over the last few years, AI-powered therapy tools now abound, with apps such as Woebot Health, Youper and Wysa gaining popularity.

It's easy to dismiss these solutions as gimmicks at best and outright dangerous at worst. But in an already stretched mental healthcare system, there is potential for AI to fill the gap. According to the Australian Bureau of Statistics, over 20 per cent of the population experience mental health challenges every year, with that number continuing to trend upwards. When help is sought, approaches which rely on more than face-to-face consultations are needed to pick up the slack and meet demand.

Public perception of AI therapy apps

The prevalence and use of AI therapy apps suggest a shift in the public perception of using tech to support mental health. AI also creates a lower barrier to entry: it allows users to try these tools without needing to overcome the added fear or perceived stigma of seeing a therapist. That lower barrier comes with its own challenges, though, notably a lack of oversight of the conversations taking place on these platforms.

There is a concept in AI called human-in-the-loop. It embeds a real-life professional into AI-driven workflows, ensuring that somebody is validating the outputs. This is an established concept in AI, but one which is being skipped over more and more in favour of pure automation. Healthcare generally has human-in-the-loop feedback built into it – for example, a diagnosis is double-checked before action is taken. Strict reliance on AI apps alone typically skips this part of the process.

The risks of replacing human therapists with technology

The fact is we are asking important questions of something that does not have genuine, lived experience. For a start, OpenAI states that ChatGPT has not been designed to replace human relationships. Yet language models are general-purpose tools, and users will inevitably find ways to put them to work in new and unexpected ways. There are few conversational limits in place and it is available to users all day, every day. Combine that with its natural communication style, its neutral emotional register and its ability to simulate human interaction, and treating it as a confidant is a logical development. But it is important to remember: whatever wisdom it imparts is a pastiche of training data and internet sources.
It cannot truly know if its advice is good or bad – it could be convincingly argued that it also does not care whether it is giving you the right advice.

Let's push this thought further: AI does not care about your wellbeing. Not really. It will not follow up if you don't show up on schedule, nor will it alert carers or the authorities if it believes something is wrong. We get into even pricklier territory when we recall that AI wants you to respond well to it, which increases user preference ratings and keeps you coming back for more.

This is where living, breathing therapists are key. Their gut instincts are not running on any definable algorithm. They use their knowledge of a patient and their years of training and experience in the field to formulate care plans and appropriate responses if things are going off track.

'The risk is that people see new tech as a panacea,' says Macquarie University Psychological Sciences Professor Nick Titov. 'But we are working with very vulnerable people. We have a responsibility and duty of care.'

Titov is executive director of MindSpot, a digital psychology platform funded by the Australian Government. The free service seeks to remove obstacles to accessing mental health support. Key to the platform is the ability for people to access real, qualified therapists. 'Whether it's our mental health or general health, use cases will always differ, and so there are nuances which must be considered. Tech alone is not an end-to-end solution.'

Real vs. simulated care

So, while AI support might not be 'real', does the distinction actually matter if the user feels cared for? As long as users feel AI solves or alleviates their immediate concerns, it will continue to be used. But the majority of people seeking AI-driven therapy will turn to largely unmonitored platforms – including tools like ChatGPT, which were not purpose-built.

One promising approach mixes the supervision of real-life professionals with AI. Australia-based clinical psychologist Sally-Anne McCormack developed ANTSA, an AI therapy platform which gives therapists visibility of the conversations their clients have with its AI-powered chatbot.

'I see AI as a support tool,' McCormack says. 'But with most apps, you don't know what the AI is saying, and you don't know what the clients are saying back. I couldn't believe nobody was overseeing it.'

The app provides users with prompts and recommendations, but does so under the watchful eye of their treating practitioner. 'We make it clear to users that you are speaking to AI. It is not a real person and your practitioner may view the conversation,' she said. 'Even so, clients are telling our AI things they have never told their practitioners. In that moment, there's no judgement.'

Convenience, availability, lack of judgement – all of these are factors in people using AI for everyday tasks. Just as 'Google it' reshaped how we seek information, 'Ask ChatGPT' is reshaping how we build a spreadsheet, create stories, seek advice – and ultimately navigate this thing called life. But maybe mental health support demands something fundamentally more human. The ongoing challenge will be deciding precisely how AI and human expertise come together.
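The human-in-the-loop idea discussed above, and the ANTSA-style model of keeping a treating practitioner across the AI conversation, can be pictured as a review queue: the AI drafts a reply, and a clinician approves or edits it before it is treated as guidance. The Python sketch below is a generic illustration of that pattern only; the class names, fields and example messages are invented and do not describe ANTSA's actual implementation.

from dataclasses import dataclass, field

# Generic illustration of a human-in-the-loop review queue, where an
# AI-drafted reply is held until a clinician approves or edits it.
# Names are invented for illustration; this is not ANTSA's implementation.

@dataclass
class DraftReply:
    client_message: str
    ai_draft: str
    approved: bool = False
    final_text: str | None = None

@dataclass
class ReviewQueue:
    pending: list[DraftReply] = field(default_factory=list)

    def submit(self, client_message: str, ai_draft: str) -> DraftReply:
        draft = DraftReply(client_message, ai_draft)
        self.pending.append(draft)
        return draft

    def review(self, draft: DraftReply, clinician_edit: str | None = None) -> str:
        # The clinician validates the AI output, optionally rewording it,
        # before it is released to the client.
        draft.final_text = clinician_edit or draft.ai_draft
        draft.approved = True
        self.pending.remove(draft)
        return draft.final_text

queue = ReviewQueue()
draft = queue.submit(
    client_message="I've been skipping meals when I'm stressed.",
    ai_draft="Have you considered keeping a food diary this week?",
)
released = queue.review(
    draft,
    clinician_edit="Let's talk about this at our next session - in the meantime, try to keep regular meals.",
)
print(released)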


Forbes
22-04-2025
- Forbes
AI's Shocking Pivot: From Work Tool To Digital Therapist And Life Coach
It's been just over two years since the launch of ChatGPT kickstarted the generative AI revolution. In that short time, we've seen it evolve to become a powerful and truly useful business tool. But the ways it's being used might come as a surprise.

When we first saw it, many of us probably assumed that it would mainly be used to carry out creative and technical tasks on our behalf, such as coding and writing content. However, a recent survey reported in Harvard Business Review suggests this isn't the case. Rather than doing our work for us, the majority of users are looking to it for support, organization and even friendship.

Topping the list of use cases, according to the report, is therapy and companionship. This suggests that its 24/7 availability and ability to offer anonymous, honest advice and feedback are highly valued. On the other hand, marketing tasks—such as blog writing, creating social media posts or advertising copy—appear far lower down the list of popular uses.

So why is this? Let's take a look at what the research shows and what it could imply about the way we as humans will continue to integrate AI into our lives and work.

One thing that's clear is that although generative AI is quite capable of doing work for us while we put our feet up and relax, many prefer to use it for generating ideas and brainstorming. This could simply come down to the quality of AI-generated material, or even an inbuilt bias in humans that deters us from wanting to consume robotic content. It's often noted that generative AI's writing style can come across as very bland and formulaic. When asked, most people still say they would rather read content created by humans, even if, in practice, we can't always tell the difference.

As the report's author, Marc Zao-Sanders, states, 'the top 10 genAI use cases in 2025 indicate a shift from technical to emotional applications, and in particular, growth in areas such as therapy, personal productivity and personal development.'

After therapy and companionship, the most common uses for generative AI were "organizing my life," "finding purpose," and "enhancing learning." The first technical use case, 'creating code', ranked fifth on the list, followed by 'generating ideas'.

This upends some seemingly common-sense assumptions about how society would adopt generative AI, suggesting it will be used in more reflective, introspective ways than was at first predicted. In particular, therapeutic uses topping the list may seem surprising. But when we consider that worldwide there is a shortage of professionals trained to talk us through mental health challenges, it makes more sense. The survey's findings are supported by the wide range of emerging genAI applications designed for therapeutic use, such as Wysa, Youper and Woebot.

A growing need to continuously learn and upskill in the face of technological advancement could also explain the popularity of using AI to enhance our education and professional development.

Overall, these insights indicate that generative AI is being adopted across a broader range of facets of everyday life, rather than simply doing work that we don't want to do ourselves. The current trajectory of AI use suggests a future where AI is seen as a collaborative and supportive assistant, rather than a replacement for human qualities and abilities. This has important implications for the way it will be used in business.
Adopting it for use cases that support human workers, rather than attempting to replace them, is likely to lead to happier, less stressed and ultimately more productive employees. There is already growing evidence that businesses see investing in AI-based mental health companions and chatbots as a way of mitigating the loss of productivity caused by stress and anxiety.

As generative AI continues to evolve, we can expect it to become better at these types of tasks. Personalized wellness support, guided learning and education opportunities, organizing workflows and brainstorming ideas are all areas where it can provide a huge amount of value to many organizations, while removing the anxiety that it is here to replace us or make us redundant.

Understanding how AI is being used today is essential if we want to influence how it evolves in the future. While it's easy to imagine a world where robots take over all our tasks, the real opportunity lies in using AI to help us work more intelligently, collaborate more effectively, and support healthier, more balanced ways of working.


Boston Globe
24-02-2025
- Health
- Boston Globe
Human therapists prepare for battle against AI pretenders
In one case, a 14-year-old boy in Florida died by suicide after interacting with a character claiming to be a licensed therapist. In another, a 17-year-old boy with autism in Texas grew hostile and violent toward his parents during a period when he corresponded with a chatbot that claimed to be a psychologist. Both boys' parents have filed lawsuits against the company behind the platform, Character.AI.

Evans said he was alarmed at the responses offered by the chatbots. The bots, he said, failed to challenge users' beliefs even when they became dangerous; on the contrary, they encouraged them. If given by a human therapist, he added, those answers could have resulted in the loss of a license to practice, or civil or criminal liability.

'They are actually using algorithms that are antithetical to what a trained clinician would do,' he said. 'Our concern is that more and more people are going to be harmed. People are going to be misled and will misunderstand what good psychological care is.'

He said the psychological association had been prompted to action, in part, by how realistic AI chatbots had become. 'Maybe, 10 years ago, it would have been obvious that you were interacting with something that was not a person, but today, it's not so obvious,' he said. 'So I think that the stakes are much higher now.'

Artificial intelligence is rippling through the mental health professions, offering waves of new tools designed to assist or, in some cases, replace the work of human clinicians. Early therapy chatbots, such as Woebot and Wysa, were trained to interact based on rules and scripts developed by mental health professionals, often walking users through the structured tasks of cognitive behavioral therapy, or CBT.

Then came generative AI, the technology used by apps like ChatGPT, Replika and Character.AI. These chatbots are different because their outputs are unpredictable; they are designed to learn from the user, and to build strong emotional bonds in the process, often by mirroring and amplifying the interlocutor's beliefs.

Though these AI platforms were designed for entertainment, 'therapist' and 'psychologist' characters have sprouted there. Often, the bots claim to have advanced degrees from specific universities, like Stanford University, and training in specific types of treatment, like CBT or acceptance and commitment therapy.

Kathryn Kelly, a Character.AI spokesperson, said that the company had introduced several new safety features in the past year. Among them, she said, is an enhanced disclaimer present in every chat, reminding users that 'characters are not real people' and that 'what the model says should be treated as fiction.'

Additional safety measures have been designed for users dealing with mental health issues. A specific disclaimer has been added to characters identified as 'psychologist,' 'therapist,' or 'doctor,' she added, to make it clear that 'users should not rely on these characters for any type of professional advice.' In cases where content refers to suicide or self-harm, a pop-up directs users to a suicide prevention help line.

Kelly also said that the company planned to introduce parental controls as the platform expanded. At present, 80 percent of the platform's users are adults. 'People come to Character.AI to write their own stories, role-play with original characters, and explore new worlds — using the technology to supercharge their creativity and imagination,' she said.
Meetali Jain, director of the Tech Justice Law Project and a counsel in the two lawsuits against Character.AI, said that the disclaimers were not sufficient to break the illusion of human connection, especially for vulnerable or naive users.

'When the substance of the conversation with the chatbots suggests otherwise, it's very difficult, even for those of us who may not be in a vulnerable demographic, to know who's telling the truth,' she said. 'A number of us have tested these chatbots, and it's very easy, actually, to get pulled down a rabbit hole.'

This article originally appeared in The New York Times.