Latest news with #emotionalsupport


Entrepreneur
07-07-2025
- Business
- Entrepreneur
Xbox Producer to Laid-Off Workers: If 'Overwhelmed,' Use AI
After Microsoft laid off 9,000 workers last week, 4% of its global workforce, one employee had an unusual message for those let go: turn to ChatGPT to help relieve the emotional burden. Xbox executive producer Matt Turnbull wrote last week in a now-deleted LinkedIn post that amid these "really challenging times," employees are "not alone" and they "don't have to go it alone." Instead, they can use AI tools like ChatGPT or Copilot "to help reduce the emotional and cognitive load that comes with job loss," Turnbull wrote. He closed the post with prompt ideas and AI use cases that could help laid-off workers who are "feeling overwhelmed," including resume tailoring, networking messages, and career planning. "No AI tool is a replacement for your voice or your lived experience," Turnbull wrote in the post. "But at a time when mental energy is scarce, these tools can help you get unstuck faster, calmer, and with more clarity." Turnbull faced swift backlash from other professionals on LinkedIn and eventually deleted the post, according to TechRadar. According to his LinkedIn profile, he was not affected by Microsoft's layoffs last week. Even as Microsoft lays off workers, it continues to invest deeply in AI: the company is spending $80 billion this year to build data centers for AI models. AI is also taking over coding at Microsoft. CEO Satya Nadella said in a sit-down chat with Meta CEO Mark Zuckerberg in April that Microsoft engineers are using AI to write 20% to 30% of the code for company projects, and that the share of AI-written code varies by programming language: AI writes "fantastic" Python code but "not that great" C++ code, Nadella noted. Other tech companies are spending heavily on AI and using it in daily operations, too. Google CEO Sundar Pichai said on an earnings call in April that Google was using AI to write "well over 30%" of its new code, while Zuckerberg said in January that Meta is developing AI that can write code on par with a mid-level engineer. Google is spending up to $75 billion on AI this year, while Meta is spending up to $65 billion. Microsoft is one of the most valuable companies in the world at the time of writing, second only to Nvidia, with a market cap of $3.7 trillion.


The Guardian
30-06-2025
- The Guardian
ChatGPT, write my wedding vows: are we OK with AI in everyday life?
Earlier this spring, Nik Vassev heard a high school friend's mother had died. Vassev, a 32-year-old tech entrepreneur in Vancouver, Canada, opened up Claude AI, Anthropic's artificial intelligence chatbot. 'My friend's mom passed away and I'm trying to find the right way to be there for him and send him a message of support like a good friend,' he typed. Vassev mostly uses AI to answer work emails, but also for personal communications. 'I just wanted to just get a second opinion about how to approach that situation,' he says. 'As guys, sometimes we have trouble expressing our emotions.' Claude helped Vassev craft a note: 'Hey man, I'm so sorry for your loss. Sending you and your family lots of love and support during this difficult time. I'm here for you if you need anything …' it read. Thanks to the message, Vassev's friend opened up about their grief. But Vassev never revealed that AI was involved. People 'devalue' writing that is AI assisted, he acknowledges. 'It can rub people the wrong way.' Vassev learned this lesson because a friend once called him out for relying heavily on AI during an argument: 'Nik, I want to hear your voice, not what ChatGPT has to say.' That experience left Vassev chastened. Since then, he's been trying to be more sparing and subtle, 'thinking for myself and having AI assist', he says. Since late 2022, AI adoption has exploded in professional contexts, where it's used as a productivity-boosting tool, and among students, who increasingly use chatbots to cheat. Yet AI is becoming the invisible infrastructure of personal communications, too – punching up text messages, birthday cards and obituaries, even though we associate such compositions with 'from the heart' authenticity. Disclosing the role of AI could defeat the purpose of these writings, which is to build trust and express care. Nonetheless, one person anonymously told me that he used ChatGPT while writing his father of the bride speech; another wished OpenAI had been around when he had written his vows because it would have 'saved [him] a lot of time'. Online, a Redditor shared that they used ChatGPT to write their mom's birthday card: 'she not only cried, she keeps it on her side table and reads [it] over and over, every day since I gave it to her,' they wrote. 'I can never tell her.' Research about transparency and AI use mostly focuses on professional settings, where 40% of US workers use the tools. However, a recent study from the University of Arizona concluded that 'AI disclosure can harm social perceptions' of the disclosers at work, and similar findings apply to personal relationships. In one 2023 study, 208 adults received a 'thoughtful' note from a friend; those who were told the note was written with AI felt less satisfied and 'more uncertain about where they stand' with the friend, according to Bingjie Liu, the lead author of the study and an assistant professor of communication at Ohio State University. On subreddits such as r/AmIOverreacting or r/Relationship_advice, it's easy to find users expressing distress upon discovering, say, that their husband used ChatGPT to write their wedding vows. ('To me, these words are some of the most important that we will ever say to each other. I feel so sad knowing that they weren't even his own.') AI-assisted personal messages can convey that the sender didn't want to bother with sincerity, says Dr Vanessa Urch Druskat, a social and organizational psychologist and professor specializing in emotional intelligence. 
'If I heard that you were sending me an email and making it sound more empathetic than you really were, I wouldn't let it go,' she says. 'There's a baseline expectation that our personal communications are authentic,' says Druskat. 'We're wired to pick up on inauthenticity, disrespect – it feels terrible,' she says. But not everyone draws the same line when it comes to how much AI involvement is tolerable or what constitutes deceit by omission. Curious, I conducted an informal social media poll among my friends: if I used AI to write their whole birthday card, how would they feel? About two-thirds said they would be 'upset'; the rest said it would be fine. But if I had used AI only in a supplementary role – say, some editing to hit the right tone – the results were closer to 50-50. Using AI in personal messages is a double gamble: first, that the recipient won't notice, and second, that they won't mind. Still, there are arguments for why taking the risk is worthwhile, and why a hint of AI in a Hinge message might not be so bad. For instance, AI can be helpful for bridging communication gaps rooted in cultural, linguistic or other forms of diversity. Plus, personal messages have never been totally spontaneous and original. People routinely seek advice from friends, therapists or strangers about disagreements, delicate conversations or important notes. Greeting cards have long come with pre-written sentiments (although Mother's Day founder Anna Jarvis once scolded that printed cards were 'lazy'). Sara Jane Ho, an etiquette expert, says she has used ChatGPT 'in situations where I've been like: 'Change this copy to make it more heartfelt.' And it's great copy.' Ho argues that using ChatGPT to craft a personal message actually shows 'a level of consideration'. Expressing sensitivity helps build relationships, and it makes sense that people who struggle with words would appreciate assistance. Calculators are standard digital tools; why not chatbots? 'I always say that the spirit of etiquette is about putting others at ease,' she says. 'If the end result is something that is nice for the other person and that shows respect or consideration or care, then they don't need to see how the sausage is made.' I asked Ho what she would say to a person upset by an AI-assisted note. 'I'd ask them: 'Why are you so easily offended?'' Ho says. Plus, she says using AI is convenient and fast. 'Why would you make yourself walk someplace if you have a car?' she asks. Increasingly, people are drifting through digitized lives that reject 'the very notion that engagement should require effort', perceiving less value in character building and experiences like 'working hard' and 'learning well', author and educator Kyla Scanlon argued in an essay last month. This bias toward effortlessness characterizes the emotional work of relationships as burdensome, even though it helps create intimacy. 'People have sort of conditioned themselves to want a completely seamless and frictionless experience in their everyday lives 100% of the time,' says Josh Lora, a writer and sociologist who has written about AI and loneliness. 'There are people who Uber everywhere, who Seamless everything, who Amazon everything, and render their lives completely smooth.' Amid this convenience-maxxing, AI figures as an efficient way out of relational labor, or small mistakes, tensions and inadequacies in communication, says Lora. We use language to be understood or co-create a sense of self.
'So much of our experience as people is rendered in the struggle to make meaning, to self-actualize, to explain yourself to another person,' Lora says. But when we outsource that labor to a chatbot, we lose out on developing self-expression, nuanced social skills, and emotional intelligence. We also lose out on the feelings of interpersonal gratitude that arise from taking the time to write kindly to our loved ones, as one 2023 study from the University of California, Riverside, found. Many people already approach life as a series of objectives: get good grades, get a job, earn money, get married. In that mindset, a relationship can feel like something to manage effectively rather than a space of mutual recognition. What happens if it stops feeling worth the effort? Summer (who requested a pseudonym for privacy), a 30-year-old university tutor, said she became best friends with Natasha (also a pseudonym) while pursuing their respective doctorate degrees. They lived four hours apart, and much of their relationship unfolded in long text message exchanges, debating ideas or analyzing people they knew. About a year ago, Natasha began to use ChatGPT to help with work tasks. Summer said Natasha quickly seemed deeply enamoured with AI's speed and fluency. (Researchers have warned the technology can be addictive, to the detriment of human social engagement.) Soon, subtle tone and content changes led Summer to suspect Natasha was using AI in their personal messages. (Natasha did not respond to a request for comment.) After six years of lively intellectual curiosity, their communication dwindled. Occasionally, Natasha asked Summer for her opinion on something, then disappeared for days. Summer felt like she was the third party to a deep conversation happening between her best friend and a machine. 'I'd engage with her as a friend, a whole human being, and she'd engage with me as an obstacle to this meaning-making machine of hers,' Summer tells me. Summer finally called Natasha to discuss how AI use was affecting their friendship. She felt Natasha was exchanging the messy imperfections of rambling debate for an emotionally bankrupt facsimile of ultra-efficient communication. Natasha didn't deny using chatbots, and 'seemed to always have a reason' for continuing despite Summer's moral and intellectual qualms. Summer 'felt betrayed' that a close friend had used AI as 'an auxiliary' to talk to her. 'She couldn't find the inherent meaning in us having an exchange as people,' she says. To her, adding AI into relationships 'presupposes inadequacy' in them, and offers a sterile alternative: always saying the right thing, back and forth, frictionless forever. The two women are no longer friends. 'What you're giving away when you engage in too much convenience is your humanity, and it's creepy to me,' Summer says. Dr Mathieu Corteel is a philosopher and author of a book (available only in French) grappling with the implications of AI as a game we've all entered without 'knowing the rules'. Corteel is not anti-AI, but believes that overreliance on it alienates us from our own judgement, and by extension, humanity – 'which is why I consider it as one of the most important philosophical problems we are facing right now', he says. If a couple, for example, expressed love through AI-generated poems, they'd be skipping crucial steps of meaning-making to create 'a combination of symbols' absent of meaning, he says.
You can interpret meaning retrospectively, reading intent into an AI's output, 'but that's just an effect', he says. 'AI is unable to give meaning to something because it's outside of the semantics produced by human beings, by human culture, by human interrelation, the social world,' says Corteel. If AI can churn out convincingly heartfelt words, perhaps even our most intimate expressions have always been less special than we'd hoped. Or, as tech theorist Bogna Konior recently wrote: 'What chatbots ultimately teach us is that language ain't all that.' Corteel agrees that language is inherently flawed; we can never fully express our feelings, only try. But that gap between feeling and expression is where love and meaning live. The very act of striving to shrink that distance helps define those thoughts and feelings. AI, by contrast, offers a slick way to bypass that effort. Without the time it takes to reflect on our relationships, the struggle to find words, the practice of communicating, what are we exchanging? 'We want to finish quickly with everything,' says Corteel. 'We want to just write a prompt and have it done. And there's something that we are losing – it's the process. And in the process, there's many important aspects. It is the co-construction of ourselves with our activities,' he says. 'We are forgetting the importance of the exercise.'


Forbes
28-06-2025
- Health
- Forbes
The AI Mental Health Market Is Booming — But Can The Next Wave Deliver Results?
AI tools promise scalable mental health support, but can they actually deliver real care, or just simulate it? In April of 2025, Amanda Caswell found herself on the edge of a panic attack one midnight. With no one to call and the walls closing in, she opened ChatGPT. As she wrote in her piece for Tom's Guide, the AI chatbot calmly responded, guiding her through a series of breathing techniques and mental grounding exercises. It worked, at least in that moment. Caswell isn't alone. Business Insider reported earlier that an increasing number of Americans are turning to AI chatbots like ChatGPT for emotional support, not as a novelty, but as a lifeline. A recent survey of Reddit users found many people report using ChatGPT and similar tools to cope with emotional stress. These stats paint a hopeful picture: AI stepping in where traditional mental health care can't. But they also raise a deeper question about whether these tools are actually helping.
A Billion-Dollar Bet On Mental Health AI
AI-powered mental health tools are everywhere — some embedded in employee assistance programs, others packaged as standalone apps or productivity companions. In the first half of 2024 alone, investors poured nearly $700 million into AI mental health startups globally, the most for any digital healthcare segment, according to Rock Health. The demand is real. Mental health conditions like depression and anxiety cost the global economy more than $1 trillion each year in lost productivity, according to the World Health Organization. And per data from the CDC, over one in five U.S. adults under 45 reported symptoms of anxiety or depression in 2022. Yet many couldn't afford therapy or were stuck on waitlists for weeks — leaving a care gap that AI tools increasingly aim to fill. Companies like Blissbot are trying to do just that. Founded by Sarah Wang — a former Meta and TikTok tech leader who built AI systems for core product and global mental health initiatives — Blissbot blends neuroscience, emotional resilience training and AI to deliver what she calls 'scalable healing systems.' 'Mental health is the greatest unmet need of our generation,' Wang explained. 'AI gives us the first real shot at making healing scalable, personalized and accessible to all.' She said Blissbot was designed from scratch as an AI-native platform, a contrast to existing tools that retrofit mental health models into general-purpose assistants. Internally, the company is exploring the use of quantum-inspired algorithms to optimize mental health diagnostics, though these early claims have not yet been peer-reviewed. It also employs privacy-by-design principles, giving users control over their sensitive data. 'We've scaled commerce and content with AI,' Wang added. 'It's time we scale healing.' Blissbot isn't alone in this shift. Other companies, like Wysa, Woebot Health and Innerworld, are also integrating evidence-based psychological frameworks into their platforms. While each takes a different approach, they share the common goal of delivering meaningful mental health outcomes.
Why Outcomes Still Lag Behind
Despite the flurry of innovation, mental health experts caution that much of the AI being deployed today still isn't as effective as claimed. 'Many AI mental health tools create the illusion of support,' said Funso Richard, an information security expert with a background in psychology.
'But if they aren't adaptive, clinically grounded and offer context-aware support, they risk leaving users worse off — especially in moments of real vulnerability.' Even when AI platforms show promise, Richard cautioned that outcomes remain elusive, noting that AI's perceived authority could mislead vulnerable users into trusting flawed advice, especially when platforms aren't transparent about their limitations or aren't overseen by licensed professionals. Wang echoed these concerns, citing a recent Journal of Medical Internet Research study that pointed out limitations in the scope and safety features of AI-powered mental health tools. The regulatory landscape is also catching up. In early 2025, the European Union's AI Act classified mental health-related AI as 'high risk,' requiring stringent transparency and safety measures. While the U.S. has yet to implement equivalent guardrails, legal experts warn that liability questions are inevitable if systems offer therapeutic guidance without clinical validation. For companies rolling out AI mental health benefits as part of diversity, equity and inclusion (DEI) and retention strategies, the stakes are high. If tools don't drive outcomes, they risk becoming optics-driven solutions that fail to support real well-being. However, it's not all gloom and doom. Used thoughtfully, AI tools can help free up clinicians to focus on deeper, more complex care by handling structured, day-to-day support — a hybrid model that many in the field see as both scalable and safe.
What To Ask Before Buying Into The Hype
For business leaders, the allure of AI-powered mental health tools is clear: lower costs, instant availability and a sleek, data-friendly interface. But adopting these tools without a clear framework for evaluating their impact can backfire. So what should companies be asking? Before deploying these tools, Wang explained, companies should interrogate the evidence behind them. 'Are they built on validated frameworks like cognitive behavioral therapy (CBT) or acceptance and commitment therapy (ACT), or are they simply rebranding wellness trends with an AI veneer?' she asked. 'Do the platforms measure success based on actual outcomes — like symptom reduction or long-term behavior change — or just logins? And perhaps most critically, how do these systems protect privacy, escalate crisis scenarios and adapt across different cultures, languages, and neurodiverse communities?' Richard agreed, adding that 'there's a fine line between offering supportive tools and creating false assurances. If the system doesn't know when to escalate — or assumes cultural universality — it's not just ineffective. It's dangerous.' Wang also emphasized that engagement shouldn't be the metric of success. 'The goal isn't constant use,' she said. 'It's building resilience strong enough that people can eventually stand on their own.' She added that the true economics of AI in mental health don't come from engagement stats. Rather, she said, they show up later — in the price we pay for shallow interactions, missed signals and tools that mimic care without ever delivering it.
The Bottom Line
Back in that quiet moment when Caswell consulted ChatGPT during a panic attack, the AI didn't falter. It guided her through that moment like a human therapist would. However, it also didn't diagnose, treat, or follow up. It helped someone get through the night — and that matters. But as these tools become part of the infrastructure of care, the bar has to be higher.
As Caswell noted, 'although AI can be used by therapists to seek out diagnostic or therapeutic suggestions for their patients, providers must be mindful of not revealing protected health information due to HIPAA requirements.' That caution matters because scaling empathy isn't just a UX challenge. It's a test of whether AI can truly understand — not just mimic — the emotional complexity of being human. For companies investing in the future of well-being, the question isn't just whether AI can soothe a moment of crisis, but whether it can do so responsibly, repeatedly and at scale. 'That's where the next wave of mental health innovation will be judged,' Wang said. 'Not on simulations of empathy, but on real and measurable human outcomes.'


National Post
26-06-2025
- Sport
- National Post
Video emerges of Arizona manager confronting Ketel Marte heckler: ‘You dumb f***'
Torey Lovullo wasn't messing around when he heard a fan heckling one of his players. In a video first posted on TikTok, the Arizona manager was seen angrily pointing out the fan who allegedly had been taunting Ketel Marte about his late mother – bringing the Diamondbacks star to tears on the field. During the clip from Tuesday night's game against the White Sox in Chicago, Lovullo is seen yelling at the fan before gesturing towards security to have him thrown out. 'Dumb f***,' Lovullo appears to say in the video. 'His mom died, you dumb f***. Dumb f***.'
'Footage of Torey Lovullo confronting the fan that spoke on Ketel Marte's mother last night' — Jomboy Media (@JomboyMedia), June 26, 2025
Arizona bench coach Jeff Banister can also be seen pointing the fan out to security during the exchange. TikTok user smartasskris, who posted the clip of Lovullo, also revealed more details about the fan in the caption of the post. 'This was our first game at Rate Field. We were sitting about 7 rows behind home plate, heard bits and pieces of the heckling (had gotten progressively worse over time), saw Ketel Marte become visibly upset, teammates standing up for him, and saw Torey Louvello (sp) call for the fan's ejection,' she wrote. 'Great job by White Sox security for understanding the gravity of the situation. While it hurt to watch Marte shaken up, it was a good lesson in sportsmanship for others (what we don't do and what we should not tolerate), and appreciation and maturity of these young players who grieve and have feelings like anyone else.'
'This is the fan who yesterday in Chicago yelled a remark at the Dbacks' Ketel Marte about his mother, who sadly passed away in a car accident eight years ago. He got what he deserved for being a fucking hateful POS. Now you're banned for life from every @MLB ballpark.' — Dodgers fan (@MaskaF56959), June 25, 2025

Associated Press
26-06-2025
- Health
- Associated Press
In stressful times, our anxiety can rub off on pets. Causes and cures for pet anxiety
In this age of heightened anxiety, many of us turn to our pets for emotional support. But is our behavior increasing our furry friends' fears? The answer isn't simple, says Frankie Jackson, a veterinary nurse and animal behavior consultant, and the owner of Canine Counseling in Smyrna, Georgia. She said she's seen an increase in anxiety among her animal and human clients, but that it's hard to unwind the cause and effect. 'Dogs are incredibly responsive to our expressions, our body language and our scent,' she says. 'There is a feedback loop — the owners are nervous; the dog gets nervous. Our cortisol levels rise and fall in tandem.' Dr. Becky Peters, a veterinarian and owner of Bath Veterinary Hospital in Bath, New York, has also noticed a link between the anxiety of pets and their owners, particularly in the exam room. 'If owners try to over-comfort them — lots of "you're OK!" in anxious voices, the animals do get more anxious. If we stay calm and quiet, they do too,' Peters says. Peters attributes much of the rise in pet anxiety to the social upheaval of the COVID years. Many animals who were acquired during the pandemic had limited opportunities to socialize with other people and pets during their peak developmental stages. After COVID, pets who were used to having their family at home experienced separation anxiety as their owners returned to work and school. 'A lot of pet anxiety comes from changes to their households,' Peters says. 'It can also occur from a lack of routine and structure and not enough physical activity.' Other components that could contribute to our pets' anxiety include unmet needs, past trauma and insufficient open spaces. 'We are asking our dogs to live in a world that isn't made for them,' she says.
Try to get at the cause of your pet's anxiety
Low-level stress responses in dogs, such as eating less and excessive self-grooming, are forms of communication that precede lunging and barking, Jackson says. Trying to solve reactive behaviors through obedience training without addressing the root cause can make dogs' anxiety worse. 'It's important to understand what the dogs are saying and why they're behaving the way they are. Manners and life skills are important, but it won't create happy dogs,' Jackson says. Anxiety in cats can be harder to spot, according to Jackson, because they're hard-wired to hide it. While dogs seek out their support people, cats don't feel safe expressing their vulnerability. Urinating in the house, scratching, hiding under the bed and overgrooming can be signs that your kitty is anxious.
First, see a vet
If your animal companion is suddenly acting out or on a licking binge, Jackson advises seeing a veterinarian to rule out a medical cause, such as pain or allergies. Veterinarians can also prescribe anti-anxiety medication and complementary treatments to promote sleep and relaxation. Peters recommends supplements for her canine clients including probiotics and the amino acids l-theanine and tryptophan (yes, the turkey coma one). For cats, she suggests using a product like Feliway that diffuses calming pheromones into the air.
Learn about the breed
A dog's breed might also play a part in developing anxiety. Peters says that while every dog is different, the more active herding and working breeds like shepherds and border collies can become anxious and destructive without an outlet for their energy. 'Herding breeds need space to run and jobs to do,' she says.
When Tacoma, Washington, resident Shelani Vanniasinkam got her Australian shepherd puppy, Roo, she didn't know about the breed's reputation for anxiety. Her previous dog had been an easygoing husky who enjoyed pats from strangers and visits to the dog park. She quickly realized Roo was not that type of dog. 'He had a lot more needs than we anticipated,' Vanniasinkam says. 'We couldn't leave him alone for more than 30 minutes.' Vanniasinkam and her husband, Jesus Celaya, reached out to a local pet behaviorist, but it became clear that Roo's anxiety was so acute he needed medication before he could start behavior training.
Should you consider meds for an anxious pet?
Medication can be important in behavioral treatment, but it shouldn't be the only approach, says Peters. She usually suggests that her clients try training and routine modification first, unless their pet is causing harm to themselves or others. 'If I'm going to use meds, it's part of a greater process,' Peters says. Roo's veterinarian put him on fluoxetine, or 'doggie Prozac,' an antidepressant commonly given to anxious pets. He also prescribed trazodone, another antidepressant, for particularly stressful events, such as trips to the vet or a night of fireworks. After starting his medication, Roo received eight months of behavior training, during which Vanniasinkam and her husband not only changed their own approach to dog parenting but also set boundaries with friends and family. They limited Roo's interaction with other dogs, asked people to stop using their doorbell and requested that others ignore Roo when he barks. Now, when they want to take Roo for off-leash play, they book time at a local Sniffspot, which Vanniasinkam describes as 'an Airbnb for anxious and reactive dogs.' The company, which launched in 2016, allows homeowners to rent out their yards or property by the hour for solo, off-leash play or doggie playdates. 'It's sad when you can't take your dog to a dog park,' Vanniasinkam says. 'So, this option is really nice.' She says that while it was initially difficult to navigate Roo's anxiety, he is loving, family-oriented and worth the effort. 'It's hard having an anxious dog, but you can figure it out,' she says. 'It's just important to understand your dog, so you're set up for success.'