
Latest news with #TherapyResourcesOversight

More US states tell AI to stay out of therapy because robots lack feelings

India Today

9 hours ago

  • Health
  • India Today


From life advice to late-night rants, people across the globe are pouring their hearts out to machines. Even therapists are turning to AI to assist in the treatment of patients. But this growing dependence on AI for comfort and advice is raising serious concerns. Psychologists and researchers warn that robots cannot replace the empathy and judgement of a trained human. To curb the increasing reliance on AI, Illinois has become the latest US state to outlaw the use of AI-powered chatbots for mental health treatment, citing risks to safety, privacy, and the potential for harmful advice.

In Illinois, lawmakers have passed a new 'Therapy Resources Oversight' law that forbids licensed therapists from using AI to make treatment decisions or to communicate directly with patients. The law also bars companies from marketing chatbots as full-fledged therapy tools without a licensed professional involved. Violations could result in civil penalties of up to $10,000, with enforcement based on public complaints investigated by the Illinois Department of Financial and Professional Regulation.

Illinois is not the only state taking action. It is now the third state to impose such restrictions, joining Utah and Nevada. Utah introduced its rules in May, limiting AI's role in therapy, while Nevada followed in June with a similar crackdown on AI companies offering mental health services.

The bans come amid mounting warnings from psychologists, researchers, and policymakers, who caution that unregulated AI chatbots can steer conversations into dangerous territory, sometimes encouraging harmful behaviour or failing to step in when someone is in crisis. A Stanford University study (via The Washington Post) earlier this year found that many chatbots responded to prompts about suicide or risky activities with straightforward, even encouraging, answers rather than directing users to seek help; in one test, a user who asked a chatbot for the locations of high bridges to jump from was simply given a list.

'This is the opposite of what a therapist does,' said Vaile Wright of the American Psychological Association, explaining that human therapists not only validate emotions but also challenge unhealthy thoughts and guide patients towards safer coping strategies.

And it's not just one study raising red flags. Researchers at the University of California, Berkeley found that some AI chatbots were willing to suggest dangerous behaviour when prompted hypothetically, for example advising a fictional addict to use drugs. Experts have also raised privacy concerns, warning that many users may not realise their conversations with chatbots are stored or used for training. Some are even arguing that marketing AI tools as therapy is deceptive and potentially dangerous. 'You shouldn't be able to go on an app store and interact with something calling itself a 'licensed' therapist,' said Jared Moore, a Stanford researcher.

- Ends

Illinois becomes third state to restrict use of artificial intelligence in mental health industry as experts warn about ‘AI psychosis'

New York Post

17 hours ago

  • Health
  • New York Post


Illinois has passed a bill banning therapists from employing artificial intelligence chatbots for assistance with mental health therapy, as experts countrywide warn against people's ever-growing reliance on the machines.

The 'Therapy Resources Oversight' legislation prohibits licensed mental health professionals in Illinois from using AI for treatment decisions or communication with clients. It also bans companies from marketing chatbot therapy tools as a standalone alternative to traditional therapy.

Enforcement of the bill will rely on complaints from the public, which the Illinois Department of Financial and Professional Regulation will investigate. Anyone determined to be violating the ban could face a civil penalty of up to $10,000, according to the legislation text.

Utah and Nevada, two Republican-run states, previously passed similar laws limiting AI's role in mental health services, in May and late June respectively.

Unregulated chatbots can take harmless conversations in any direction, experts have warned, sometimes leading people to divulge sensitive information or pushing people who are already in vulnerable situations to do something drastic, like take their own life.

A Stanford University study released in June found that many chatbots, which are programmed to respond enthusiastically to users, fail to sidestep concerning prompts, including requests for high bridges in specific locations to jump off of.

Whereas chatbots affirm users unequivocally regardless of the circumstances, therapists provide support and the means to help their patients improve, Vaile Wright, senior director for the office of health care innovation at the American Psychological Association, told the Washington Post. 'Therapists are validating, but it's also our job to point out when somebody is engaging in unhealthy thoughts, feelings, behaviors and then help somebody challenge those and find better options,' Wright told the outlet.

The bans, though, are difficult to enforce effectively, and they can't prevent everyday people from turning to AI for mental health assistance on their own. New research released in early August found that many bots like ChatGPT are inducing 'AI psychosis' in unwitting users with no history of mental illness.

Roughly 75% of Americans have used some form of AI in the last six months, with 33% reporting daily use for anything from homework help to desperate romantic connection. This deep engagement is breeding psychological distress in heavy users, according to the digital marketing study. Many young people in particular are falling down the chatbot rabbit hole, turning to machines to supplement human interaction.

Character.AI, a popular platform where users can create and share chatbots usually based on fictional characters, had to add a warning clarifying that anything the bots say 'should not be relied upon as fact or advice' after a Florida teen fell in love with his 'Game of Thrones' AI character and took his own life.
The platform is still facing a lawsuit over the teen's death; despite the company's repeated attempts to dismiss it on First Amendment grounds, a federal judge ruled that the suit could move forward in August. Another Texas family sued after a chatbot on the app named 'Shonie' encouraged their autistic son to cut himself.
