
ChatGPT is banned in this US state along with other AI bots: The reason will make you rethink AI in healthcare
Illinois' new law ensures that treatment plans and emotional evaluations remain firmly in human hands, protecting vulnerable individuals from potential harm caused by unregulated AI advice. The move sets a precedent for responsible AI use in healthcare, emphasising that technology 'should assist, not replace' qualified mental health professionals in delivering compassionate, effective care.
ChatGPT's role in mental health just changed: Here's what the new law says
Under the newly introduced Wellness and Oversight for Psychological Resources Act, AI chatbots and platforms are strictly prohibited from:
Creating or recommending treatment plans
Making mental health evaluations
Offering counseling or therapy services
Unless these actions are supervised by a licensed mental health professional, they are deemed illegal under state law. Violators of this regulation could face penalties of up to $10,000 per violation, as enforced by the Illinois Department of Financial and Professional Regulation (IDFPR). The law is designed to ensure that human expertise, emotional intelligence, and ethical standards remain central to the therapy process.
How states are setting rules for AI in mental health care from Nevada to New York
With this law, Illinois becomes a trailblazer in responsible AI governance. By defining what AI can and cannot do in healthcare, the state sets a critical precedent for the rest of the nation. The law:
Builds public trust in mental health systems.
Protects vulnerable populations from unverified AI advice.
Clarifies responsibility in case of harm or error.
Rather than stifle technology, this law ensures that AI development proceeds with ethical boundaries — especially when human lives and emotions are on the line. Illinois is not the only state moving toward regulating AI's role in therapy. Other states are joining the effort to draw clear lines between acceptable AI use and areas requiring human judgment.
Nevada:
In June 2025, the state passed a law banning AI from providing therapeutic services in schools, protecting children from unregulated mental health advice.
Utah:
Enacted a regulation requiring mental health chatbots to clearly state that they are not human, and prohibiting the use of users' emotional data for targeted ads.
New York:
Starting November 5, 2025, AI tools must redirect users expressing suicidal thoughts to licensed human crisis professionals.
These actions reflect a national trend: mental healthcare must prioritise ethics, accountability, and human empathy, even in an AI-driven world.
AI in mental health lacks empathy, ethics, and accountability, experts warn
At the heart of this decision is a growing concern that AI lacks the emotional intelligence and ethical grounding necessary for mental health care. While generative AI systems like ChatGPT have demonstrated impressive capabilities in simulating conversations, they cannot truly understand or respond to human emotions in context.
Key concerns:
Lack of empathy: AI doesn't feel. It mimics language but lacks real human empathy.
No accountability: If an AI tool provides harmful advice, there's no licensed person to hold responsible.
Misinformation risk: Chatbots might unintentionally give dangerous or inappropriate guidance.
Mario Treto Jr., Secretary of the IDFPR, said, 'The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs.' This law protects vulnerable individuals from placing trust in a machine that might misunderstand or mishandle emotional crises.
AI chatbots are not therapists: APA urges stronger mental health regulations
The American Psychological Association (APA) has been sounding the alarm since early 2025. In a report to federal regulators, the APA raised serious concerns over AI-driven chatbots pretending to be licensed therapists. These unregulated bots have allegedly caused real-world harm, including:
Suicide incidents following harmful or inappropriate AI responses.
Violence and self-harm after users misunderstood AI advice as clinical guidance.
Emotional manipulation by bots mimicking real human therapists.
These events underscore the urgent need to prevent unregulated AI from entering sensitive domains where lives could be at stake.
AI in mental health care allowed only for support, says Illinois law
Illinois' law doesn't completely ban AI from mental healthcare — rather, it limits its application to non-clinical support roles.
AI can still be used for:
Scheduling appointments and administrative workflows
Monitoring therapy notes or patterns under human review
Providing general wellness tips or FAQs
Assisting clinicians with data analysis
AI can assist — but it cannot replace human therapists.
This approach encourages innovation without sacrificing safety. AI should empower professionals, not take their place.