
Nearly a Third of Kids Now Look to AI for Emotional Support—Report
A new study has found that nearly a third of children are turning to artificial intelligence (AI) for emotional support.
While therapy and school counselors have historically been cornerstones of mental health support for kids, the current younger generation has a new technological option to turn to.
Why It Matters
AI has skyrocketed in popularity, offering tools to boost productivity in workplace tasks and school assignments.
While children often use technology to assist with research for their papers, AI has also shown promise in helping people address mental health concerns. However, limited research has been conducted on its possible long-term effects.
In a May 2024 YouGov survey, 50 percent of respondents said the 24/7 availability and immediate access made AI chatbots helpful for mental health purposes. And 38 percent cited the chatbots' non-judgmental interactions as a pro.
What To Know
A new report from Norton found that 28 percent of parents said that their children turn to AI for emotional support. This trend is also reflected in the experiences of mental health clinicians.
"I am seeing that a lot of children and young adults are returning to AI resources for emotional support," Kathryn Cross, a licensed professional counselor with Thriveworks, told Newsweek. "We also see this as a trend on social media. We are seeing people find comfort in AI responses, partly because they are receiving answers based on what they are looking for, rather than evidence-based advice."
Children are facing unique mental health challenges, with 24 percent of parents in the Norton report saying their child has been cyberbullied. Roughly 41 percent also said their children turn to AI for companionship.
Since many children are using tablets by the age of 2 and parents routinely give their kids phones before age 12, according to the Norton survey, the youngest generation may be facing loneliness and searching for meaningful relationships in a new and unprecedented technological environment.
An April Gallup poll found that 79 percent of Gen Z, those born between 1997 and 2012, had used AI tools; however, 41 percent reported that the technology made them anxious. Adult members of Gen Z were more likely to say AI made them anxious (53 percent) than school-age members (21 percent).
File photo of a smartphone displaying the ChatGPT logo resting on the keyboard of a laptop also displaying a ChatGPT logo.
What People Are Saying
Kathryn Cross, a licensed professional counselor with Thriveworks, told Newsweek: "While AI can provide what feels like useful insights on personal issues, it can also do damage, seeing as AI tools are unable to ensure long-lasting treatment based on evidence and real-life responses to crises. AI provides emotional support based on an algorithm, and it is programmed to give a response that is suitable for a person based on the wording used and the history that the program is picking up based on an individual's usage."
What Happens Next
The long-term risks of AI usage for therapy or emotional support are unclear, but experts warn that it cannot adequately support people in crisis like a trained human therapist.
"The risk is that if someone is using AI tools as a replacement for therapy or other mental health treatment, these tools are unable to be hands-on with someone who is really in need of an interpersonal relationship," Cross said. "Nothing really compares to human to human contact and support."

Related Articles
Yahoo
I Asked ChatGPT What 'Generational Wealth' Really Means — and How To Start Building It
The term "generational wealth" gets thrown around a lot these days, but what does it actually mean? And more importantly, how can regular Americans start building it? GOBankingRates asked ChatGPT for a comprehensive breakdown, and its response was both enlightening and surprisingly actionable.

Defining Generational Wealth: ChatGPT's Take
When ChatGPT was asked to define generational wealth, it explained it as "assets and financial resources that are passed down from one generation to the next, providing ongoing financial stability and opportunities for future family members." But it went deeper, explaining that true generational wealth isn't just about leaving money behind; it's about creating a financial foundation that can grow and sustain multiple generations.

The AI emphasized that generational wealth is more than just inheritance money. It's about creating a system where each generation can build upon the previous one's success, creating a compounding effect that grows over time. This includes not just financial assets, but also financial knowledge, business relationships and strategic thinking skills.

ChatGPT's Blueprint for Building Generational Wealth
When asked for a practical roadmap, ChatGPT provided a comprehensive strategy broken down into actionable steps.

Start With Financial Education
ChatGPT emphasized that generational wealth begins with financial literacy — not just for yourself, but for your entire family. Here is what it recommended:
- Teach children about money management from an early age.
- Create family financial discussions and goal-setting sessions.
- Ensure all family members understand investment principles.
- Build a culture of financial responsibility.
It stressed that many wealthy families fail to maintain their wealth across generations because they don't adequately prepare their children with the knowledge and mindset needed to manage money effectively.

Build a Diversified Investment Portfolio
ChatGPT recommended a multi-asset approach to wealth building:
- Real estate investments for appreciation and passive income
- Stock market investments through index funds and individual stocks
- Business ownership or equity stakes
- Alternative investments like real estate investment trusts or commodities

It explained that diversification is crucial because different asset classes perform differently in various economic conditions. This approach helps protect wealth from market volatility while providing multiple income streams.

Establish Legal Protection Structures
The AI strongly emphasized the importance of estate planning tools as well. Here are a few it highlighted:
- Wills and trusts to control asset distribution
- Life insurance policies to provide immediate liquidity
- Business succession planning for family enterprises
- Tax optimization strategies to minimize transfer costs

ChatGPT explained that without proper legal structures, wealth can be decimated by taxes, legal disputes or poor decision-making by inexperienced heirs. It stressed that these structures must be created while you're alive and able to make strategic decisions.

Consider Dynasty Trusts
For families with substantial assets, ChatGPT recommended exploring dynasty trusts. It explained these as vehicles that can preserve wealth across multiple generations while providing tax benefits. These trusts can potentially last forever in certain states, creating a truly perpetual wealth-building vehicle.

Overcoming Common Obstacles
ChatGPT identified several barriers to building generational wealth as well. First, it acknowledged that starting from different financial positions affects strategy.
Those with limited resources need to focus first on building basic wealth before thinking about generational strategies. ChatGPT also warned against increasing spending as income grows. The AI suggested automating savings and investments to prevent lifestyle inflation from derailing wealth-building efforts. It also highlighted the complexity of tax planning for generational wealth, noting that improper planning can result in significant tax penalties that erode wealth transfer. This makes professional guidance particularly important for families with substantial assets, and the cost of professional advice is typically far outweighed by the value created through proper planning.

Starting Small: ChatGPT's Practical First Steps
For those just beginning, ChatGPT provided a few accessible starting points:
- Build an emergency fund (three to six months' worth of expenses).
- Maximize employer 401(k) matching.
- Start a Roth IRA for tax-free growth.
- Purchase adequate life insurance.
- Create a basic will.
- Begin investing in index funds.
- Consider real estate when financially ready.

It emphasized that these steps can be started by anyone, regardless of income level, and that the key is consistency over time.

The Importance of Values and Purpose
One of ChatGPT's most interesting insights was about the importance of instilling values and purpose alongside wealth. The AI explained that families with strong values and a clear sense of purpose are more likely to maintain their wealth across generations. This can include teaching children about responsibility and work ethic and involving family members in charitable activities. It also noted that generational wealth isn't primarily about the amount you leave behind. It's about creating a financial foundation and knowledge system that empowers future generations to build upon your efforts.
The process of building generational wealth requires patience, discipline and strategic thinking, but the AI emphasized that with the right approach, any family can begin building wealth that will benefit generations to come. The key is to start now, stay consistent and always keep the long-term vision in mind.


Tom's Guide
How to spot AI writing — 5 telltale signs to look for
AI writing is everywhere now, flooding social media, websites and emails—so you're probably encountering it more than you realize. That email you just received, the product review you're reading, or the Reddit post that sounds oddly corporate might all be generated by AI chatbots like ChatGPT, Gemini or Claude. The writing often appears polished, maybe too polished, hitting every point perfectly while maintaining an unnaturally enthusiastic tone throughout. While AI detectors promise to catch machine-generated text, they're often unreliable and miss the subtler signs that reveal when algorithms have done the heavy lifting. You don't need fancy software or expensive tools to spot it. The clues are right there in the writing itself.

There's nothing wrong with using AI to improve your writing. These tools excel at checking grammar, suggesting better word choices and helping with tone—especially if English isn't your first language. AI can help you brainstorm ideas, overcome writer's block, or polish rough drafts. The key difference is using AI to enhance your own knowledge and voice rather than having it generate everything from scratch. The problems arise when people let AI do all the thinking and just copy-paste whatever it produces without adding their own insights. That's when you start seeing the telltale signs below.

1. Formulaic openings
AI writing tools consistently rely on the same attention-grabbing formulas. You'll see openings like "Have you ever wondered...", "Are you struggling with..." or "What if I told you..." followed by grand promises. This happens because AI models learn from countless blog posts and marketing copy that use these exact patterns. Real people mix it up more: they might jump straight into a story, share a fact, or just start talking about the topic without all the setup. When you spot multiple rhetorical questions bunched together or openings that feel interchangeable across different topics, you're likely reading AI-generated content.

2. Vague claims without sources
You'll see phrases like "many studies show," "experts agree," or "a recent survey found" without citations of actual sources. AI tends to speak in generalities like "a popular app" or "leading industry professionals" instead of naming specific companies or real people. Human writers naturally include concrete details: actual brand names, specific statistics, and references to particular events or experiences they've encountered. When content lacks these specific, verifiable details, it's usually because AI doesn't have access to real, current information or personal experience.

3. Buzzwords without substance
AI writing often sounds impressive at first glance but becomes hollow when you examine it closely. You'll find excessive use of business jargon like "game-changing," "cutting-edge," "revolutionary" and "innovative" scattered throughout without explaining what these terms actually mean. The writing might use sophisticated vocabulary but fail to communicate ideas clearly. A human expert will tell you exactly why one method works better than another, or admit when something is kind of a pain to use. If the content feels like it was written to impress rather than inform, AI likely played a major role.

4. Relentlessly consistent tone
AI writing maintains an unnaturally consistent, enthusiastic tone throughout entire pieces. Every sentence flows smoothly into the next, problems are always simple to solve, and there's rarely any acknowledgment that things can be complicated or frustrating. Real people get frustrated, go off on tangents and have strong opinions. Human writing naturally varies in tone: sometimes confident, sometimes uncertain, occasionally annoyed or conversational. When content sounds relentlessly positive and avoids any controversial takes, you're probably reading AI-generated material.

5. Missing real-world insight
This is where the lack of real experience shows up most clearly. AI might correctly explain the basics of complex topics, but it often misses the practical complications that anyone who's actually done it knows about. The advice sounds textbook-perfect but lacks the "yeah, but in reality..." insights that make content actually useful. Human experts naturally include caveats, mention common pitfalls, or explain why standard advice doesn't always work in practice. When content presents complex topics as straightforward without acknowledging the messy realities, it's usually because real expertise is missing.

A note on em dashes
People love to point at em dashes as proof of AI writing, but that's unfair to a perfectly good punctuation mark. Writers have used em dashes for centuries—to add drama, create pauses or insert extra thoughts into sentences. The real issue isn't that AI uses them; it's how AI uses them. You'll often see AI throwing in em dashes where a semicolon would work better, or using them to create false drama in boring sentences. Real writers use em dashes purposefully to enhance their meaning, while AI tends to sprinkle them in as a lazy way to make sentences sound more sophisticated. Before you dismiss something as AI-written just because of punctuation, check whether those dashes actually serve a purpose or if they're just there for show.

Now you've learned the telltale signs for spotting AI-generated writing, why not take a look at our other useful guides?


Newsweek
Dog Develops Sudden Paralysis, What Owner Does Next Melts Hearts
A woman feared the worst when her senior dog collapsed, but nothing could have prepared her for the three-month whirlwind that was about to ensue.

After spending a month in Florida, Amanda Mcsharry, 30, noticed her dog, Ruby, acting a little differently. Within a week of her return, the 10-year-old Jack Russell and Patterdale mix seemed stiff and didn't even want to jump on the couch. Initially, Mcsharry, a registered veterinary nurse, wondered if Ruby had injured herself, telling Newsweek that she "wasn't her normal self." She gave Ruby an anti-inflammatory painkiller and planned to observe her.

"When I came home on my lunch break the next day, she fell over and became stumbly," Mcsharry said. "I've seen lots of neurological cases and began to think she may have a spinal issue, although she didn't seem to be in pain."

Amanda Mcsharry, 30, carrying Ruby outside using a harness and in a backpack. @amandam76 / TikTok

Mcsharry, from Scotland, took Ruby to her local vet as she was deteriorating and had become incontinent. This was clearly more than a minor injury, so Ruby was referred to a larger veterinary hospital, where she stayed overnight for tests. This left Mcsharry going home alone, facing the terrifying prospect that she might have to say goodbye to her soul dog.

"They carried out blood tests, an ultrasound and X-rays to rule out various cancers and toxoplasmosis. She deteriorated further to the point she could no longer lift her head and wouldn't eat," Mcsharry said.

Ruby was diagnosed with polyradiculoneuritis, also known as coonhound paralysis. Dr. Chad West, chief clinical officer and service head of neurology and neurosurgery at the Schwarzman Animal Medical Center, explained that it is a disease in which the body's immune system attacks the nerve roots that exit the spinal cord. The progressive paralysis in dogs can be triggered by vaccinations, a gastrointestinal or respiratory infection, exposure to bacteria from raw poultry, or raccoon saliva.

Dogs usually start with a stiff-legged gait, which rapidly progresses into paralysis in all four legs, according to VCA Animal Hospitals. The symptoms then progress over the following four or five days, leading to decreased reflexes, reduced muscle tone and labored breathing.

West told Newsweek: "Mildly affected dogs may require only supportive care, including mild physical therapy, while severely affected dogs may benefit from plasmapheresis or intravenous immunoglobulin administration that can block the immune cells from binding to the nerve rootlets.

"In humans, this disease is called Guillain-Barre syndrome. It has a known association with certain bacterial and viral infections. Bacterial infection has also been implicated in dogs."

Pain medication can be given, but most dogs need intensive physical therapy. Mcsharry had never come across any cases of polyradiculoneuritis, but now she was seeing it for the first time in her own pup. Ruby was hospitalized for four nights to monitor her deterioration, which is usually worst in the initial days. Eventually, Mcsharry was able to take her bestie home and begin the recovery.

"I was delighted but terrified I was going to miss something," Mcsharry said. "I knew we had a long road ahead, but I was just happy to have her back. At this point, she could lift her head when lying down, but her head would fall whenever I tried to stand her up. This made her quite stressed, and I had to come up with ways of supporting her."

Most dogs recover from coonhound paralysis, but they can retain nervous system deficits for several weeks or months. This was the case for Ruby, who was unable to walk for three months. Throughout that time, she had to be carried everywhere by Mcsharry, who used a harness, a dog wheelchair and a backpack to assist her.

Gradually, Ruby started regaining some autonomy. She was able to go to the toilet while Mcsharry held her hips upright. Mcsharry also took Ruby's crate to work and would turn the dog over in bed hourly to prevent sores.

Amanda Mcsharry, 30, using a harness to hold Ruby up, and Ruby managing to walk unassisted. @amandam76 / TikTok

In addition to regular physio appointments, Ruby also began weekly hydrotherapy sessions. The vet was concerned that Ruby's recovery was too slow, but Mcsharry was adamant that she was making progress and they just needed to persevere.

Mcsharry told Newsweek: "I could see tiny improvements in her as I know her so well. Every day I would do a session of physio before work, during my lunch break, and after work. Between those, I used a harness to carry her everywhere and simulate walking."

Over time, Mcsharry noticed that Ruby was managing to place her paws on the ground, rather than swinging them aimlessly. Subsequently, Ruby sat up unassisted for a few seconds, proving that she was regaining her strength after all. Progress was slow, but Mcsharry built on each tiny milestone. She encouraged Ruby to sit up for longer each time, and even got her to stand unassisted.

"About five days later, she took her first steps unassisted, and everything progressed from there. She got tired quickly, but I couldn't believe that she was walking. It was surreal that we had actually done it," Mcsharry said.

It was a remarkable experience, which Mcsharry documented in a viral TikTok video (@amandam76). At the time of writing, the video has generated over 164,800 views and 32,300 likes. It was rewarding for Mcsharry to know that her instincts were right, and seeing Ruby return to her usual self was "the best feeling."

She continued: "There were lots of tears throughout, especially when she sat up, stood on her own, and walked. Now, you would never know anything had happened—she's back to normal and she's the same wee dog."

Do you have funny and adorable videos or pictures of your pet you want to share? We want to see the best ones! Send them in to life@ and they could appear on our site.