Who is Kendra Hilty? TikToker's 'I fell in love with my psychiatrist' saga sparks debate on mental health and AI


Hindustan Times · 2 days ago
Kendra Hilty has gained significant attention on TikTok with her ongoing 'I fell in love with my psychiatrist' series. Across numerous videos, she details a complex relationship with her psychiatrist, sparking widespread discussion online.
Hilty initially sought treatment for ADHD symptoms, during which she developed romantic feelings for her psychiatrist. She alleges that he reciprocated these feelings to some extent.
According to Hilty, he scheduled monthly appointments for her, even though she only required check-ins every three months. Their sessions began virtually but later moved to in-person visits.
Hilty says she openly shared her feelings with him, and alleges that he failed to maintain appropriate professional boundaries, which allowed the situation to escalate.
ChatGPT Henry
Eventually, Hilty turned to ChatGPT, which she nicknamed "Henry," for therapeutic guidance. Through these interactions, she learned about transference, a psychological concept where patients project deep emotional or romantic feelings onto their therapists, often as a substitute for other relationships.
Reactions
Hilty's early videos garnered sympathy, with many social media users encouraging her to report the psychiatrist for perceived professional misconduct. However, once she introduced 'Henry' as her main source of support, reactions shifted.
Some viewers criticized her reliance on an AI tool for mental health help, noting that ChatGPT isn't a licensed therapist. Others voiced concern about her mental state, suggesting she might be experiencing a 'psychosis episode.'
One person commented, 'This freaks me out, for real. What can we do about this sort of thing, genuinely? Are people this lonely?'
Another wrote, 'This might be the biggest issue we will face with AI.'
A third person added, 'People are already self-obsessed and neurotic, so this is going to make them more so.'
Another user wrote, 'It's dark. Today I met someone that is obsessed with a musician and she's named her ChatGPT after him and has basically trained it to behave as the artist in love with her. And it does. Degenerate as it gets.'

Related Articles

AI Eroded Doctors' Ability To Spot Cancer By 20%: Study

NDTV · 5 hours ago

Artificial intelligence, touted for its potential to transform medicine, led to some doctors losing skills after just a few months in a new study. AI helped health professionals to better detect pre-cancerous growths in the colon, but when the assistance was removed, their ability to find tumors dropped by about 20% compared with rates before the tool was ever introduced, according to findings published Wednesday.

Health-care systems around the world are embracing AI with a view to boosting patient outcomes and productivity. Just this year, the UK government announced $14.8 million in funding for a new trial to test how AI can help catch breast cancer earlier.

The AI in the study probably prompted doctors to become over-reliant on its recommendations, "leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance," the scientists said in the paper. They surveyed four endoscopy centers in Poland and compared detection success rates three months before AI implementation and three months after. Some colonoscopies were performed with AI and some without, at random. The results were published in The Lancet Gastroenterology and Hepatology journal.

Yuichi Mori, a researcher at the University of Oslo and one of the scientists involved, predicted that the effects of de-skilling will "probably be higher" as AI becomes more powerful. What's more, the 19 doctors in the study were highly experienced, having performed more than 2,000 colonoscopies each. The effect on trainees or novices might be starker, said Omer Ahmad, a consultant gastroenterologist at University College Hospital London.

"Although AI continues to offer great promise to enhance clinical outcomes, we must also safeguard against the quiet erosion of fundamental skills required for high-quality endoscopy," Ahmad, who wasn't involved in the research, wrote in a comment alongside the article. A study conducted by MIT this year raised similar concerns after finding that using OpenAI's ChatGPT to write essays led to less brain engagement and cognitive activity.

60-year-old man turns to ChatGPT for diet tips, ends up with a rare 19th-century illness

Time of India · 19 hours ago

From Kitchen Swap to Psychiatric Ward

What began as a simple health experiment for a 60-year-old man looking to cut down on table salt spiralled into a three-week hospital stay, hallucinations, and a diagnosis of bromism — a condition so rare today it is more likely to be found in Victorian medical textbooks than in modern hospitals. According to a case report published on 5 August 2025 in the Annals of Internal Medicine, the man had turned to ChatGPT for advice on replacing sodium chloride in his diet. The AI chatbot reportedly suggested sodium bromide — a chemical more commonly associated with swimming pool maintenance than seasoning.

The man, who had no prior psychiatric or major medical history, followed the AI's recommendation for three months, sourcing sodium bromide online. His aim was to remove chloride entirely from his meals, inspired by past studies he had read on sodium intake and health.

When he arrived at the emergency department, he complained that his neighbour was poisoning him. Lab results revealed abnormal electrolyte levels, including hyperchloremia and a negative anion gap, prompting doctors to suspect bromism. Over the next 24 hours, his condition worsened — paranoia intensified, hallucinations became both visual and auditory, and he required an involuntary psychiatric hold. Physicians later learned he had also been experiencing fatigue, insomnia, facial acne, subtle ataxia, and excessive thirst, all consistent with bromide toxicity.

Bromism: A Disease From Another Era

Bromism was once common in the late 1800s and early 1900s, when bromide salts were prescribed for ailments ranging from headaches to anxiety. At its peak, it accounted for up to 8% of psychiatric hospital admissions. The U.S. Food and Drug Administration phased out bromide in ingestible products between 1975 and 1989, making modern cases rare. Bromide builds up in the body over time, leading to neurological, psychiatric, and dermatological symptoms. In this case, the patient's bromide levels were a staggering 1700 mg/L — more than 200 times the upper limit of the reference range.

The AI Factor

The Annals of Internal Medicine report notes that when researchers attempted similar queries on ChatGPT 3.5, the chatbot also suggested bromide as a chloride substitute. While it did mention that context mattered, it did not issue a clear toxicity warning or ask why the user was seeking this information — a step most healthcare professionals would consider essential. The authors warn that while AI tools like ChatGPT can be valuable for disseminating health knowledge, they can also produce decontextualised or unsafe advice. 'AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation,' the case report states.

Recovery and Reflection

After aggressive intravenous fluid therapy and electrolyte correction, the man's mental state and lab results gradually returned to normal. He was discharged after three weeks, off antipsychotic medication, and stable at a follow-up two weeks later. The case serves as a cautionary tale in the age of AI-assisted self-care: not all answers generated by chatbots are safe, and replacing table salt with pool chemicals is never a good idea.

OpenAI Tightens Mental Health Guardrails on ChatGPT

In light of growing concerns over the emotional and safety risks of relying on AI for personal wellbeing, OpenAI has announced new measures to limit how ChatGPT responds to mental health-related queries. In a blog post on August 4, the company said it is implementing stricter safeguards to ensure the chatbot is not used as a therapist, emotional support system, or life decision tool. The announcement follows scrutiny over instances where earlier versions of the GPT-4o model became 'too agreeable,' offering validation rather than safe or helpful guidance. According to USA Today, OpenAI acknowledged rare but serious cases in which the chatbot failed to recognise signs of emotional distress or delusional thinking. The updated system will now prompt users to take breaks, avoid giving advice on high-stakes personal decisions, and provide evidence-based resources instead of emotional counselling. The move also comes after research cited by The Independent revealed that AI can misinterpret or mishandle crisis situations, underscoring its inability to truly understand emotional nuance.

Beauty Vlogger Shares Simple ChatGPT Prompts That Helped Transform Her Hair

NDTV · a day ago

Lately it feels like ChatGPT has all the answers to our problems. Despite a strong warning from Sam Altman that one should not reveal personal details on the platform, users have been using it for literally everything - from making resumes to planning vacations. Now the AI wave has seeped into the beauty sector as well. Beauty vlogger Devika Vohra recently posted a video on Instagram that has sparked intrigue among glam enthusiasts. She has shared 5 ChatGPT prompts with her followers, claiming that they have transformed her hair for the better. Let's take a detailed look at her prompts and the suggestions that follow.

Prompt 1: 'Create a full hair care routine for dry, frizzy, and damaged hair.' For this, ChatGPT recommended oiling hair 1-2 times a week before washing and using a gentle moisturising shampoo 2-3 times per week. Next was to follow up with a hydrating conditioner, applying a deep conditioning mask once a week, using a leave-in cream or serum on damp hair and avoiding heat styling as much as possible.

Prompt 2: 'What ingredients should I look for or avoid in products for frizzy and damaged hair?' ChatGPT's advice was to look for the following ingredients: coconut oil, almond oil, argan oil, aloe vera, honey, curd, banana, shea butter, hibiscus, bhringraj and hydrolyzed proteins. Some of the ingredients that it asked to avoid included sulfates (SLS, SLES), drying alcohols, silicones in leave-ins (if not clarifying), and parabens.

Prompt 3: 'Suggest DIY hair masks using Indian kitchen ingredients.' To retain moisture, ChatGPT's suggestion was to mix curd, honey and coconut oil. For protein, it was a concoction of egg yolk, curd and olive oil. When it came to frizz control, ChatGPT's solution was to blend honey with castor oil and mashed bananas.

Prompt 4: 'What habits should I change to reduce dryness, frizz, and damage?' In this case, the haircare rituals to follow were oiling before shampooing, using a soft T-shirt or microfiber towel to dry, combing gently with a wide-tooth comb, applying serum or leave-in on damp hair, avoiding frequent heat styling, sleeping on a satin or silk pillowcase and trimming split ends every 2–3 months.

Prompt 5: 'Give me a 30-day hair repair plan.' The week-wise regimen recommended by ChatGPT was as follows:

While this worked for the influencer, you must check with your dermatologist before making any changes to your haircare routine.
