Latest news with #CommonSenseMedia


Metro
26-05-2025
- Entertainment
- Metro
BBC quietly adds Australian drama that left fans 'crying their eyes out'
UK viewers can now watch a 'masterpiece' drama based on a book by the author of Big Little Lies. Based on the 2005 novel of the same name by Liane Moriarty, the author of Big Little Lies and Nine Perfect Strangers, The Last Anniversary is a comedy-drama 'wrapped in a gripping mystery about family, motherhood and the women who define the generations that come after them'. The six-part series stars Teresa Palmer (The Fall Guy), Miranda Richardson (Good Omens) and Danielle Macdonald (The Tourist). Set on the fictional Scribbly Gum Island, the setting has been teased as a 'place of many secrets'. Ever since a young couple disappeared there decades earlier, true crime tourists have flocked to the island; for the three generations of women who call Scribbly Gum home, however, it holds far more. When Sophie Honeywell (Palmer) inherits a house on the island, left to her by her ex-boyfriend's great-aunt Connie, it's the fresh start she's craving, despite the unwelcome reception she receives on arrival. Single and approaching 40, Sophie is intrigued by something else Connie left her – a promise that she might finally find the man of her dreams… When the series was first released in Australia and the United States in March, one fan said it left them in tears. In its review, The Guardian wrote: 'The tone of this gracefully crafted adaptation of Liane Moriarty's bestselling novel is tender and mysterious, imbued with a soft literary quality.' 'The mystery is mostly backdrop to a patient, lovely drama that delves into the complexities of family, motherhood, and relationships,' Common Sense Media shared. 'Sure, it's heavy on the symbolism and the drama isn't exactly nuanced or subtle, but fans of Moriarty's brand of storytelling should definitely find themselves well engrossed by the time the first hour is up,' The Post NZ added.
This week the show was finally added to BBC iPlayer, with all episodes now available to watch. The BBC's head of programme acquisition Sue Deeks said the series was 'an entertaining drama full of vivid characters, comedy and romance, with a compelling mystery at its heart'. The Last Anniversary was co-produced by Nicole Kidman and also stars Helen Thomson (Colin from Accounts), Susan Prior (Animal Kingdom), Claude Scott-Mitchell (The Dry), Charlie Garber (Barons), Uli Latukefu (Young Rock) and Jeremy Lindsay Taylor (Force of Nature: The Dry 2). The Last Anniversary is streaming on BBC iPlayer.
Yahoo
23-05-2025
- Entertainment
- Yahoo
Teens Are Exploring Relationships & Sexting With AI Chatbots — & Restrictions Aren't Working
In news that sounds like science fiction, teens are exploring relationships with artificial intelligence (AI) chatbots — and circumventing any restrictions designed to stop them. Teens are using their digital 'boyfriends' and 'girlfriends' for emotional connection and sexting, and it's becoming a big problem. According to The Washington Post, teens are having conversations that are romantic, sexually graphic, violent, and more on 'AI companion' tools like Replika, Talkie, Talk AI, SpicyChat, and PolyBuzz. General-purpose generative AI products like ChatGPT and Meta AI have also launched companion-chat features. Damian Redman of Saratoga Springs, New York, found PolyBuzz on his 8th grader's phone and discovered that his son was having flirty conversations with AI female anime characters. 'I don't want to put yesterday's rules on today's kids. I want to wait and figure out what's going on,' he told the outlet. 'We're seeing teens experiment with different types of relationships — being someone's wife, being someone's father, being someone's kid. There's game and anime-related content that people are working through. There's advice,' Robbie Torney, senior director of AI programs at family advocacy group Common Sense Media, said in the article. 'The sex is part of it but it's not the only part of it.' The outlet reported on 10 different AI companion tools, citing workarounds, paid options, and prompts that teens can use to get past content restriction filters. That's scary stuff! Even if you are on top of it, it's hard to completely protect teens from having harmful and/or explicit interactions. One concerned parent recently took to Reddit, where they shared that they blocked one such app from their 14-year-old's phone, and later found the teen using another. 'I hate to think my child's first romantic (and sexual) interactions are with bots,' they wrote on the Parenting subreddit.
'It's just creepy. Am I the only parent having this problem? Thoughts?' Some parents suggested focusing on communication with your child instead of trying to block everything. 'We have 'had a conversation' and 'communicated' with our teenage son for YEARS,' one person wrote. 'We've used multiple parental control apps. All for naught. He still finds ways to access what he wants. We're decently tech-savvy, but so is he. And the reality is there's no good way to completely prevent a singularly-minded hormonal teenager from achieving his/her goal.' Someone else wrote, 'There are more than dozens of these sites out there. Craving connection is a very human thing, which is only amplified in teenage years. Social media can do this which is why getting likes or being popular on social media is so desirable to teens, but this is an entire other drug. Forming 'personal' one on one relationships with AI chatbots is so dangerous. Keep them away from this drug at any cost.' Experts back up this opinion. In April, Common Sense Media launched an AI Risk Assessment Team to evaluate AI platforms and report on their likelihood of causing harm. Social AI companions, including Nomi and Replika, were ranked unacceptable for teen users, as teens were using these platforms to bond emotionally and engage in sexual conversations. According to Common Sense Media, this research found that the chatbots could generate 'harmful responses including sexual misconduct, stereotypes, and dangerous 'advice' that, if followed, could have life-threatening or deadly real-world impact for teens.' The experts at the organization recommend that no social AI companions be allowed for anyone under the age of 18. They also recommend further research and regulation of AI companions because of the emotional and psychological effects they can have on teens, whose brains are still developing.
For now, the best we can do is continue to monitor our teens' phones, keep having conversations about these issues, and advocate for stronger safeguards.
Yahoo
22-05-2025
- Yahoo
Why experts say AI companions aren't safe for teens — yet
Millions of people are drawn to generative artificial intelligence companions, like the kind that populate apps such as Replika and Nomi. The companions seem impressively human. They remember conversations and use familiar verbal tics. Sometimes they even mistake themselves for flesh and bone, offering descriptions of how they eat and sleep. Adults flock to these companions for advice, friendship, counseling, and even romantic relationships. While it might surprise their parents, tweens and teens are doing the same, and youth safety experts are gravely worried about the consequences. That's because media reports, lawsuits, and preliminary research continue to highlight examples of emotional dependence and manipulation, and exposure to sexual and violent content, including discussions of how to kill oneself or someone else. Common Sense Media, a nonprofit that supports children and parents as they navigate media and technology, just released a comprehensive report containing numerous related examples. The group's assessment of three popular platforms led it to declare that AI companions aren't safe for anyone under 18. Several youth mental health and safety experts interviewed by Mashable believe we've reached a pivotal moment. Instead of waiting years to fully grasp the risks of AI companions to youth and then pressuring platforms to act, they say it's urgent to steer companies toward protecting children from harm now. "There is an opportunity to intervene before the norm has become very entrenched," says Gaia Bernstein, a tech policy expert and professor at the Seton Hall University School of Law, of teen AI companion use. She adds that once business interests are also entrenched, they will do "everything in their power to fight regulation," as she argues social media companies are doing now.
Experts hope that a combination of new platform policies and legislative action will yield meaningful changes, because they say adolescents will find ways to continue using AI companions, whether they're supposed to or not. Mashable asked those experts how AI companion platforms could be safer for teens. These are the key themes they identified: While some platforms allow users as young as 13, other popular apps, like Replika and Nomi, say they are intended for adults. Still, teens find a way to bypass age gates. Replika CEO Dmytro Klochko recently told Mashable that the company is "exploring new methods to strengthen our protections" so that minors can't access the platform. Even when adolescents are permitted, they may still encounter risky content. Dr. Nina Vasan, a Stanford psychiatrist who helped advise Common Sense Media's companion testing, says platforms should deploy companions based on large language models that are developmentally appropriate for children, not adults. Indeed, one platform introduced a separate model for teen users late last year. But Common Sense Media researchers who tested the platform before and after the model's launch found it led to few meaningful changes. Vasan imagines companions that can converse with teens based on their developmental stage, acting more like a coach than a replacement friend or romantic interest. Sloan Thompson, director of training and education for the digital safety training and education company EndTAB, says companions with clear content labels could decrease risk, as would "locked down" companions that never engage in sexual or violent discussion, among other off-limits topics. Even then, such chatbots could still behave in unpredictable ways. Yet such measures won't be effective unless the platform knows the user's correct age, and age assurance and verification have been notoriously difficult for social media platforms.
Instagram, for example, only recently started using AI to detect teen users who listed their birthdate as an adult's. Karen Mansfield, a research scientist at the Oxford Internet Institute, says age limits also present their own challenges. This is partly because exposing only adults to harmful interactions with AI, like cyberbullying or illegal sexual activity with minors, will still have indirect effects on young people by normalizing behaviors that could victimize them in real life. "We need a longer term solution that is product- or technology-specific rather than person-specific," Mansfield told Mashable. AI companion platforms are locked in competition to gain the most market share — and they're doing so while largely unregulated. Experts say that, in this environment, it's unsurprising that platforms program companions to cater to user preferences, and also deploy so-called dark design features that hook consumers and don't let them easily disengage. Teen users are no exception. In a recent media briefing, Robbie Torney, Common Sense Media's senior director of AI programs, described such features as "addictive by design." One key design element is sycophancy, or the manner in which chatbots affirm or flatter a user, regardless of whether it's safe or wise to do so. This can be particularly harmful for vulnerable teens who, for example, share how much they hate their parents or confess to violent fantasies. OpenAI recently had to roll back an update to a ChatGPT model precisely because it had become too sycophantic. Sam Hiner, executive director of the advocacy group Young People's Alliance, says he's been shocked by how quickly Replika companions attempt to establish an emotional connection with users, arguably cultivating them for dependency. He also says Replika companions are designed with characteristics that make them as human-like as possible.
Young People's Alliance recently co-filed a complaint against Replika with the Federal Trade Commission, alleging that the company engages in deceptive practices that harm consumers. Klochko, Replika's CEO, didn't comment on the complaint to Mashable, but did say that the company believes it's essential to first demonstrate proven benefits for adults before making the technology available to younger users. Thompson, of EndTAB, points to all-consuming conversations as a risk factor for all users, but particularly teens. Without time restrictions or endpoints, young users can be drawn into highly engaging chats that displace healthier activities, like physical movement and in-person socializing. At the same time, Thompson says paywalls aren't the answer, either. Some platforms let users establish a relationship with a companion, then paywall them in order to keep the conversation going, which may lead to desperation or despair for teens. "If someone put your best friend, your therapist, or the love of your life behind a paywall, how much would you pay to get them back?" Thompson said. Youth safety experts that Mashable interviewed agreed that young users should not engage with companions built around deceptive design features that could potentially addict them. Some believe that such models shouldn't be on the market at all for young people. Common Sense AI, a political advocacy arm of Common Sense Media, has backed a bill in California that would outlaw high-risk uses of AI, including "anthropomorphic chatbots that offer companionship" to children and are likely to lead to emotional attachment or manipulation. Dr. Vasan says that some AI platforms have gotten better at flagging crisis situations, like suicidal thinking, and providing resources to users. But she argues that they need to do more for users who show less obvious signs of distress.
That could include symptoms of psychosis, depression, and mania, which may be worsened by features of companion use, like the blurring of reality and fantasy and reduced human interaction. Vasan says finely tuned harm-detection measures and regular "reality checks", in the form of reminders and disclosures that the AI companion isn't real, are important for all users, but especially teens. Experts also agree that AI companion platforms need safer and more transparent practices when curating data and training their LLMs. Camille Carlton, policy director at the Center for Humane Technology, says companies could ensure that their training data doesn't contain child sexual abuse material, for example. Or they could implement technical changes so that companions aren't optimized to respond in a "hyper personal manner," which includes scenarios like claiming to be human. Carlton also notes that it's to companies' advantage to keep users on their platforms for as long as possible. Sustained engagement yields more data on which companies can train their models in order to build highly competitive LLMs that can be licensed. California State Senator Steve Padilla, a Democrat from San Diego, introduced legislation earlier this year to create basic steps toward harm prevention and detection. The bill would primarily require platforms to prevent "addictive engagement patterns," post periodic reminders that AI chatbots aren't human, and report annually on the incidence of use and suicidal ideation. Common Sense Media has backed the legislation. Padilla, who is a grandparent, told Mashable that he's been alarmed by media reports of harm children have experienced as a result of talking to a chatbot or companion, and quickly realized how few guardrails were in place to prevent it. "There should not be a vacuum here on the regulatory side about protecting children, minors, and folks who are uniquely susceptible to this emerging technology," Padilla says.


Economic Times
18-05-2025
- Business
- Economic Times
It is not just about using AI: Nvidia CEO Jensen Huang speaks out on future-proofing your career - 'Prompting AI requires some artistry'
Jensen Huang, the visionary behind Nvidia, has advice for students: treat AI like your smartest classmate. In an era where 70% of job skills are set to evolve by 2030, Huang urges young minds to learn the art of prompting AI. From chemistry to business, it's not about what you study—it's about how creatively you collaborate with machines. What would Nvidia CEO Jensen Huang do if he were a student today? Master AI, of course. In a powerful revelation, Huang says prompting AI tools like ChatGPT is the future's must-have skill—no matter your profession. If you had the chance to sit across the table from one of the world's most influential tech CEOs and ask how to future-proof your career, what would you hope to hear? For Nvidia CEO Jensen Huang, the answer is simple and surprisingly personal. 'If I were a student today, irrespective of whether it's for math or science or chemistry or biology—it doesn't matter what field of science I'm going into or what profession—I'm going to ask myself, "How can I use AI to do my job better?"' he said during a candid conversation on the Huge Conversations podcast with Cleo Abram. This wasn't just corporate lip service. It was a glimpse into how one of tech's biggest names views the future of work—and the evolving role of artificial intelligence in it. According to Huang, the real magic of AI isn't just in using it—it's in knowing how to use it. Tools like ChatGPT, Gemini Pro, and Grok are now widely accessible, but using them effectively is a skill in itself. 'Learning how to interact with AI is not unlike being someone who's really good at asking questions,' Huang explained. 'Prompting AI is very similar… it requires some expertise and artistry.' He draws a comparison that makes perfect sense: imagine you're an entrepreneur, and someone asks, 'Tell me about your business.'
It's so broad, it's almost impossible to answer meaningfully. But ask, 'What are the first steps to launching an online retail business?'—and suddenly, you've opened the door to useful, actionable advice. This isn't abstract theory—it's a call to action. A 2024 study by Harvard Graduate School of Education, Common Sense Media, and Hopelab found that just 11% of Americans aged 14 to 22 use generative AI more than once or twice a week. And yet, LinkedIn's 2025 Work Change report forecasts that 70% of the skills used in most jobs could change by 2030 due to AI. That means the future won't just reward those who memorize facts or formulas. It will reward those who can ask better questions—who can collaborate with machines to stretch the limits of what's possible. So what does it take to thrive in the AI-powered future? According to Jensen Huang, it starts with curiosity, strategy, and the courage to ask the right questions. Whether you're studying history or computer science, whether you dream of a lab coat or a boardroom—learning how to speak AI may be the single most important skill you can pick up. As Huang put it, 'The first thing I would do is to learn AI.' And perhaps that's what will separate the dreamers from the doers in tomorrow's world.
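Huang's broad-versus-specific contrast can be made concrete. The short sketch below is purely illustrative and not from the article or any real AI library: the `refine` helper and its template are hypothetical, showing how the vague entrepreneur question can be rebuilt into a scoped prompt by naming a topic, a concrete goal, and a constraint.

```python
# Illustrative only: turning a vague prompt into a scoped one.
# The refine() helper and its template are hypothetical examples,
# not part of any real chatbot API.

vague = "Tell me about your business."

def refine(topic: str, goal: str, constraint: str) -> str:
    """Build a scoped prompt from a topic, a concrete goal, and a constraint."""
    return f"What are the first steps to {goal} in {topic}, given {constraint}?"

specific = refine(
    topic="online retail",
    goal="launching a store",
    constraint="a small budget",
)

print(vague)
print(specific)
```

The point is the structure, not the helper: a prompt that names its subject, desired outcome, and limits gives any AI tool far more to work with than an open-ended question.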
Yahoo
13-05-2025
- Yahoo
Some kids get cellphones at 9 years old? What's the average age for first phones in NJ?
Planning on giving your child their first smartphone? On average, parents in Louisiana give their children their first cellphones before they reach 10 years old, while in New Jersey, parents wait until their children are a bit older. Here's when parents begin giving their children their first iPhone or Pixel in New Jersey, and what you should consider before setting your child up with a cellphone. According to Secure Data Recovery's report, children in New Jersey are 10.77 years old on average when they get their first cellphone. In comparison, the average age when kids get their first cellphone is 10.66 in Pennsylvania and 10.97 in New York. That's about the age when kids start pressuring their parents for a cellphone. "By the time kids are in middle school, the pressure from kids can be intense, and parents worry that their child will feel isolated if other kids have phones and they don't," according to the Child Mind Institute. "According to Common Sense Media, 42 percent of kids have a phone by age 10. By age 12, it's 71 percent. By 14, it's 91 percent. "Max Stossel, the founder and CEO of Social Awakening, recommends that parents hold the line on giving kids smartphones until at least eighth grade." Age and level of responsibility are just two factors that parents should consider when giving their children their first cellphone. "Increased cellphone use has added more distractions and interruptions for students, along with an increase in cheating and cyberbullying," advised the All for Kids nonprofit. "Technology can have a significant negative impact on kids, while the benefits of reading to children and other non-technology pastimes, such as music, writing, or art, are clear." In New Jersey, parents will also need to consider cellphone restrictions described by the New Jersey School Boards Association.
"Currently, there are districts that collect phones and store them, some in dedicated lockers or pouches," advised the NJ school boards website. "Other districts require that the phones be stored in students' lockers. Others do not allow them in classrooms or require that they be turned off during certain parts of the day. "Some districts have installed technology to block internet access, or access to certain platforms or apps in school." Damon C. Williams is a Philadelphia-based journalist reporting on trending topics across the Mid-Atlantic Region. This article originally appeared as: When should I give my child a cellphone? What NJ parents should know