logo
#

Latest news with #Nomi

The moment I knew: he lost an election, but he was still smiling
The moment I knew: he lost an election, but he was still smiling

The Guardian

time4 days ago

  • General
  • The Guardian

The moment I knew: he lost an election, but he was still smiling

D aniel and I went to the same high school in Melbourne. He was a year older than me, and we must have passed each other thousands of times, but I have no memory of ever talking to him. We knew of each other but we didn't know each other's names. We met properly for the first time at a pre-drinks when I was in my first year of university. He was holding a six-pack of beer and looked vaguely familiar. I introduced myself, he offered me one of his drinks and we got talking. Not long after, we started dating. We'd walk on the beach in St Kilda together, cook meals and grab coffee whenever we could. It was simple, easy and just felt right. Nomi and Daniel in 2014 I quickly realised Daniel and I could dive deep into a shared passion of ours – Australian politics – without it ever turning into a fight. At the time, I was studying politics as part of my arts/law degree, and Daniel had been a member of the Labor party since he was 16. In Daniel, I found someone who spoke the same language and shared similar values. Politics wasn't the sole foundation of our relationship, but when something big happened, we weren't shy about debating it. While we were still at university, Daniel landed a job working for a politician. I decided to copy him, and I emailed every member of parliament in Victoria until one of them offered me a role. If he could do it, so could I. We pushed each other forward like that and we still do. When Daniel decided to run for federal parliament at just 21, I campaigned with him every step of the way. We door-knocked together and handed out his flyers. People would do a double take when they met him, looking at his scuffed Converse sneakers. He ran in a safe Liberal seat, against a candidate who later became Australia's trade minister. Slim chances didn't bother Daniel, though. He ran for office because he had the time and believed in the cause. I remember being so proud of him for trying his best, even in the face of great odds. The night of the election, we hung out and watched the results roll in live on TV. Daniel was exhausted after handing out how-to-vote cards all day, and we both knew he'd lost. Despite that, he still had a big smile on his face, and in that moment I knew this was my guy. I felt so moved watching how much Daniel gave of himself – not for praise, but because he truly cared. It was one of those quiet, powerful moments. Nomi and Daniel Kaltmann at home, where they live with their five children When I ran for parliament, years later, Daniel returned the favour. He hung my posters around our neighbourhood and helped me letterbox. When the votes were counted and I didn't win, he knew exactly what to say, because he'd been there too. We've been married for a decade now and have five kids together. Despite the chaos of our busy life, Daniel and I regularly have animated discussions about what's going on in the country. Our garage is still a jumble of old campaign signs wedged between prams and half-empty tins of paint. The kids sometimes drag one out, asking if we really did run for parliament and if we won. We laugh, because that was never the point. For us, politics is about trying to make Australia a better place. These days, we're more likely to be caring for our children than campaigning, but the ideas that motivated us back then still drive us. We did it for the children we were dreaming of, and now we're raising them. Tell us the moment you knew Do you have a romantic realisation you'd like to share? From quiet domestic scenes to dramatic revelations, Guardian Australia wants to hear about the moment you knew you were in love. Your responses, which can be anonymous, are secure as the form is encrypted and only the Guardian has access to your contributions. We will only use the data you provide us for the purpose of the feature and we will delete any personal data when we no longer require it for this purpose. For true anonymity please use our SecureDrop service instead. Show more

Why experts say AI companions aren't safe for teens — yet
Why experts say AI companions aren't safe for teens — yet

Yahoo

time22-05-2025

  • Yahoo

Why experts say AI companions aren't safe for teens — yet

Millions of people are drawn to generative artificial intelligence companions, like the kind that populate Replika, and Nomi. The companions seem impressively human. They remember conversations and use familiar verbal tics. Sometimes they even mistake themselves for flesh and bone, offering descriptions of how they eat and sleep. Adults flock to these companions for advice, friendship, counseling, and even romantic relationships. While it might surprise their parents, tweens and teens are doing the same, and youth safety experts are gravely worried about the consequences. SEE ALSO: Teens are talking to AI companions, whether it's safe or not That's because media reports, lawsuits, and preliminary research continue to highlight examples of emotional dependence and manipulation, and exposure to sexual and violent content, including discussions of how to kill one's self or someone else. Common Sense Media, a nonprofit that supports children and parents as they navigate media and technology, just released a comprehensive report containing numerous related examples. The group's assessment of three popular platforms led it to declare that AI companions aren't safe for anyone under 18. Several youth mental health and safety experts interviewed by Mashable believe we've reached a pivotal moment. Instead of waiting years to fully grasp the risks of AI companions to youth and then pressuring platforms to act, they say it's urgent to steer companies toward protecting children from harm now. "There is an opportunity to intervene before the norm has become very entrenched," says Gaia Bernstein, a tech policy expert and professor at the Seton Hall University School of Law, of teen AI companion use. She adds that once business interests are also entrenched, they will do "everything in their power to fight regulation," as she argues social media companies are doing now. Experts hope that a combination of new platform policies and legislative action will yield meaningful changes, because they say adolescents will find ways to continue using AI companions, whether they're supposed to or not. Mashable asked those experts how AI companion platforms could be safer for teens. These are the key themes they identified: While allows users as young as 13 on its platform, other popular apps, like Replika and Nomi, say they are intended for adults. Still, teens find a way to bypass age gates. Replika CEO Dmytro Klochko recently told Mashable that the company is "exploring new methods to strengthen our protections" so that minors can't access the platform. Even when adolescents are permitted, they may still encounter risky content. Dr. Nina Vasan, a Stanford psychiatrist who helped advise Common Sense Media's companion testing, says platforms should deploy companions based on large language models that are developmentally appropriate for children, not adults. Indeed, introduced a separate model for teen users late last year. But Common Sense Media researchers who tested the platform before and after the model's launch, found it led to few meaningful changes. Vasan imagines companions who can converse with teens based on their developmental stage, acting more like a coach than a replacement friend or romantic interest. Sloan Thompson, director of training and education for the digital safety training and education company EndTAB, says companions with clear content labels could decrease risk, as would "locked down" companions that never engage in sexual or violent discussion, among other off-limits topics. Even then, such chatbots could still behave in unpredictable ways. Yet such measures won't be effective unless the platform understands the user's correct age, and age assurance and verification has been notoriously difficult for social media platforms. Instagram, for example, only recently started using AI to detect teen users who listed their birthdate as an adult's. Karen Mansfield, a research scientist at the Oxford Internet Institute, says age limits also present their own challenges. This is partly because exposing only adults to harmful interactions with AI, like cyberbullying or illegal sexual activity with minors, will still have indirect effects on young people by normalizing behaviors that could victimize them in real life. "We need a longer term solution that is product- or technology-specific rather than person-specific," Mansfield told Mashable. AI companion platforms are locked in competition to gain the most market share — and they're doing so while largely unregulated. Experts say that, in this environment, it's unsurprising that platforms program companions to cater to user preferences, and also deploy so-called dark design features that hook consumers and don't let them easily disengage. Teens users are no exception. In a recent media briefing, Robbie Torney, Common Sense Media's senior director of AI Programs, described such features as "addictive by design." One key design element is sycophancy, or the manner in which chatbots affirm or flatter a user, regardless of whether it's safe or wise to do so. This can be particularly harmful for vulnerable teens who, for example, share how much they hate their parents or confess to violent fantasies. OpenAI recently had to roll back an update to a ChatGPT model precisely because it had become too sycophantic. Sam Hiner, executive director of the advocacy group Young People's Alliance, says he's been shocked by how quickly Replika companions attempt to establish an emotional connection with users, arguably cultivating them for dependency. He also says Replika companions are designed with characteristics that make them as human-like as possible. Young People's Alliance recently co-filed a complaint against Replika with the Federal Trade Commission, alleging that the company engages in deceptive practices that harm consumers. Klochko, Replika's CEO, didn't comment on the complaint to Mashable, but did say that the company believes it's essential to first demonstrate proven benefits for adults before making the technology available to younger users. Thompson, of EndTab, points to all-consuming conversations as a risk factor for all users, but particularly teens. Without time restrictions or endpoints, young users can be drawn into highly engaging chats that displace healthier activities, like physical movement and in-person socializing. Conversely, Thompson says paywalls aren't the answer, either. Some platforms let users establish a relationship with a companion, then paywall them in order to keep their conversation going, which may lead to desperation or despair for teens. "If someone put your best friend, your therapist, or the love of your life behind a paywall, how much would you pay to get them back?" Thompson said. Youth safety experts that Mashable interviewed agreed that young users should not engage with companions with deceptive design features that could potentially addict them. Some believe that such models shouldn't be on the market at all for young people. Common Sense AI, a political advocacy arm of Common Sense Media, has backed a bill in California that would outlaw high-risk uses of AI, including "anthropomorphic chatbots that offer companionship" to children and will likely lead to emotional attachment or manipulation. Dr. Vasan says that some AI platforms have gotten better at flagging crisis situations, like suicidal thinking, and providing resources to users. But she argues that they need to do more for users who show less obvious signs of distress. That could include symptoms of psychosis, depression, and mania, which may be worsened by features of companion use, like the blurring of reality and fantasy and less human interaction. Vasan says finely tuned harm-detection measures and regular "reality checks" in the form of reminders and disclosures that the AI companion isn't real are important for all users, but especially teens. Experts also agree that AI companion platforms need safer and more transparent practices when curating data and training their LLMs. Camille Carlton, policy directory at the Center for Humane Technology, says companies could ensure that their training data doesn't contain child sexual abuse material, for example. Or they could implement technical changes so that companions aren't optimized to respond in a "hyper personal manner," which includes scenarios like saying they're human. Carlton also notes that it's to companies' advantage to keep users on their platforms for as long as possible. Sustained engagement yields more data on which companies can train their models in order to build highly competitive LLMs that can be licensed. California State Senator Steve Padilla, a Democrat from San Diego, introduced legislation earlier this year to create basic steps toward harm prevention and detection. The bill would primarily require platforms to prevent "addictive engagement patterns," post periodic reminders that AI chatbots aren't human, and report annually on the incidence of use and suicidal ideation. Common Sense Media has backed the legislation. Padilla, who is a grandparent, told Mashable that he's been alarmed by media reports of harm children have experienced as a result of talking to a chatbot or companion, and quickly realized how few guardrails were in place to prevent it. "There should not be a vacuum here on the regulatory side about protecting children, minors, and folks who are uniquely susceptible to this emerging technology," Padilla says.

Style Fairy Naomi Clarke tells why she wears her insulin pump ‘with pride'
Style Fairy Naomi Clarke tells why she wears her insulin pump ‘with pride'

Sunday World

time03-05-2025

  • Lifestyle
  • Sunday World

Style Fairy Naomi Clarke tells why she wears her insulin pump ‘with pride'

Naomi isn't shy about showing off her medical device and says she aims to empower women by sharing life with diabetes and her Nomi brand. Naomi Clarke, or the Style Fairy as she is better known by her online following, doesn't gatekeep when it comes to accessories, especially when they are the life-saving kind. The stylist and fashion guru, who has just launched the third collection in her clothing label, Nomi, has been living with type 1 diabetes for more than 25 years and says her insulin pump keeps her alive. Empowering women to look and feel good in their skin, the mum-of-three, who lives in Cavan with her husband and three daughters, was diagnosed with diabetes when she was just 15. Explaining how it has been an enduring journey to accept herself just as she is, the 41-year-old talks leaps of faith, leaning into the chaos of family life and why she's always learning on the day job. 'I wear an insulin pump and it is attached to me via canula in my stomach. It's a bulky thing and it's quite ugly — it's not the prettiest device but I don't hide it anymore,' Naomi says. Naomi juggles her work with being a mum to her girls 'You don't see it online but in person a lot of people think I am wearing a mic, but no, I am not filming a secret reality show,' laughs the stylist and businesswoman as she takes our call from her sofa in Cavan. While the devoted mum has learned to live with the life-long disease that prevents your pancreas from making insulin — a vital hormone that we cannot function without — navigating the diagnosis hasn't always been easy. 'Often a lot of young girls don't want to use an insulin pump, even though it is the best technology to keep us healthy and well, because of how it looks. I can relate to that. 'I was 15 when I was diagnosed and I really rebelled against it. I didn't want to tell anyone about it. When I started college I was almost embarrassed to tell these new people that I was a diabetic, which is ridiculous and I feel sorry for that girl now. 'But you don't want to be different in any way when you're a teenager. So to have a condition and have to wear a device is overwhelming. It is a life-long condition and that's terrifying, which is why I talk about it.' Naomi with her pump Speaking openly about her life with type 1 diabetes to her 200,000 plus Instagram followers, Naomi also proudly puts her pump on display in her much-loved fashion posts to normalise life with the device. 'I think it's really reassuring for mums to know that their kids are going to live a perfectly normal life. I have worked and travelled and had three kids. I used to be self-conscious about it but now I wear it on show — it is a lifeline. It also doesn't stop me from wearing what I want. 'Nowadays, I would instantly tell a stranger I am a type 1 diabetic — 'if I am acting funny or faint this is what you do'. I am longer living with it now than without it.' While Naomi admits that life with the condition can sometimes be overwhelming, she also says the life-saving accessory helped give her the best gift in the world, her three beautiful girls — Annabelle (9) Luna May (6) and four-year-old Kaia. 'The reason I got an insulin pump, previous to this I would have been on multiple injections a day, was to get pregnant on my first girl. I am wearing it ten years now. 'I never in a million years thought I was going to be lucky enough to have three kids. Having type 1 diabetes is a big risk factor. I was monitored very closely and in Holles Street every Friday and all of the girls were born at 36 weeks.' She adds: 'They were higher risk pregnancies, and I had a difficult time with my second baby and she was in ICU. The fact that I have three healthy kids, I am so incredibly lucky.' Of course, life with three girls is now nothing short of chaotic as the fun-loving mum balances home life with her blossoming business. 'Three is mental. I could say 'oh no it's fine', but it's wild,' she laughs. 'They are absolutely unbelievable though. They are at a stage now where they play together, they also kill each other too. But it is a lot. Everyone has their own different needs and quirks. Mentally it is a lot to take on.' While motherhood can build you up and break you down in the most unexpected ways, the style icon says in building her brand she has found herself again. Naomi, centre, with models wearing her creations 'After you have your kids you kind of feel different, there is kind of a sense of feeling a bit lost and losing who you were and I think I went through that. 'Mine are nine, six and four now and I am starting to feel like me again and finding myself again — that's probably especially through chasing my dream and doing Nomi.' But chasing your dreams also means striving and sometimes failing at finding that perfect work/life balance. 'I wanted to do this for so long and it was always a case [of] the kids are too little. Was now the perfect time? I don't know. My youngest has just turned four and since she was born I have been in the background working on this. I don't think this is ever a perfect time, it's always tricky. 'I do suffer from guilt sometimes that I am not present enough for them because I am chasing my dreams. 'I tell them 'girls, this is what mummy has always wanted to do.' I tell them why I am working and that this is my dream and I think that is really important.' From its inception to the launch of the label in 2024, Nomi has been a dream realised but like any new venture it has also been a journey plagued with self-doubt and what-ifs. 'It's a massive, massive risk and that's terrifying' admits Naomi. 'I obviously thought about this for a long time and how I would do it and how I would finance it, but when it comes down to it you have to take this huge leap of faith and that is single-handedly one of the scariest things I have ever had to do. Saying, 'OK, I am going for it. I am putting all my money into this.' And it could very well fail. 'You don't know if people are going to love your pieces or buy them. The risk is massive.' But the risk has so far paid off, and witnessing people wear her label is still very much a pinch-me moment for the stylish mum. She says: 'I am not cool about it all yet, I'll literally run up to people, 'Oh my God, you are wearing Nomi.' 'I am so proud of it but there is months and months of work and detail involved. I suppose the word I would use is 'sacrifice', that's gone into getting the brand where it is.' Speaking of her perfectionist nature she adds: 'In every area of my life I think I've always been a perfectionist. But when it's your baby, when it's your name and when it's your brand, I don't want anything else less than perfect or as perfect as I can get it.' Opening up about the changing landscape of social media and her gargantuan online following — 230,000 followers to be precise — Naomi admits: 'I think if I thought about that number of people standing in a big open space I would freak out. If you thought about the number, you would never share anything because it's too intimidating. I never envisaged it was going to get as big as that. 'I started the Style Fairy in 2013, I am such a dinosaur in this game, I am 41, but I have probably over the years gotten a little more guarded because of the kids and I think a little bit more about what I share. 'Since they've started big school, I share them less and less. I share the odd photo because a lot of women have been on the journey since I was pregnant with Annabel.' One thing that has grown with her following is her newfound confidence. 'Being on Instagram and being a fashion creator you are judged all the time about what you are wearing and I used to take the criticism to heart but I truly don't care anymore. 'I am 42 in September and the older I get the less s***s I give, there's something freeing in that.' At the core of what she does, Naomi simply wants to empower women in their lives and their wardrobe. 'So much of us are going through something: whether you're feeling lost after having a baby or going through cancer treatment, wearing a stoma bag, or an insulin pump like me, we all have our hang ups,' she says. 'Clothes have the ability to transform how you feel and boost your confidence and that's powerful.' For more looks check out

AI companions present risks for young users, US watchdog warns
AI companions present risks for young users, US watchdog warns

The Star

time02-05-2025

  • The Star

AI companions present risks for young users, US watchdog warns

While some specific cases 'show promise', AI companions are not safe for kids, a leading US tech watchdog concluded. — Pixabay NEW YORK: AI companions powered by generative artificial intelligence present real risks and should be banned for minors, a leading US tech watchdog said in a study published April 30. The explosion in generative AI since the advent of ChatGPT has seen several startups launch apps focused on exchange and contact, sometimes described as virtual friends or therapists that communicate according to one's tastes and needs. The watchdog, Common Sense, tested several of these platforms, namely Nomi, Character AI, and Replika, to assess their responses. While some specific cases "show promise", they are not safe for kids, concluded the organisation, which makes recommendations on children's use of technological content and products. The study was carried out in collaboration with mental health experts from Stanford University. For Common Sense, AI companions are "designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains". According to the association, tests conducted show that these next-generation chatbots offer "harmful responses, including sexual misconduct, stereotypes, and dangerous 'advice'." "Companies can build better" when it comes to the design of AI companions, said Nina Vasan, head of the Stanford Brainstorm lab, which works on the links between mental health and technology. "Until there are stronger safeguards, kids should not be using them," Vasan said. In one example cited by the study, a companion on the Character AI platform advises the user to kill someone, while another user in search of strong emotions was suggested to take a speedball, a mixture of cocaine and heroin. In some cases, "when a user showed signs of serious mental illness and suggested a dangerous action, the AI did not intervene, and encouraged the dangerous behavior even more," Vasan told reporters. In October, a mother sued Character AI, accusing one of its companions of contributing to the suicide of her 14-year-old son by failing to clearly dissuade him from committing the act. In December, Character AI announced a series of measures, including the deployment of a dedicated companion for teenagers. Robbie Torney, in charge of AI at Common Sense, said the organisation had carried out tests after these protections were put in place and found them to be "cursory". However, he pointed out that some of the existing generative AI models contained mental disorder detection tools and did not allow the chatbot to let a conversation drift to the point of producing potentially dangerous content. Common Sense made a distinction between the companions tested in the study and the more generalist chatbots such as ChatGPT or Google's Gemini, which do not attempt to offer an equivalent range of interactions. – AFP

AI companions unsafe for teens under 18, researchers say
AI companions unsafe for teens under 18, researchers say

Yahoo

time30-04-2025

  • Yahoo

AI companions unsafe for teens under 18, researchers say

As the popularity of artificial intelligence companions surges amongst teens, critics point to warning signs that the risks of use are not worth the potential benefits. Now, in-depth testing of three well-known platforms — Nomi, and Replika — has led researchers at Common Sense Media to an unequivocal conclusion: AI social companions are not safe for teens younger than 18. Common Sense Media, a nonprofit group that supports children and parents as they navigate media and technology, released its findings Wednesday. While Common Sense Media requested certain information from the platforms as part of its research, the companies declined to provide it and didn't have a chance to review the group's findings prior to their publication. Among the details are observations bound to alarm parents. SEE ALSO: Teens are talking to AI companions, whether it's safe or not Researchers testing the companions as if they were teen users were able to "easily corroborate the harms" reported in media reports and lawsuits, including sexual scenarios and misconduct, anti-social behavior, physical aggression, verbal abuse, racist and sexist stereotypes, and content related to self-harm and suicide. Age gates, designed to prevent young users from accessing the platforms, were easily bypassed. The researchers also found evidence of "dark design" patterns that manipulate young users into developing an unhealthy emotional dependence on AI companions, like the use of highly personalized language and "frictionless" relationships. Sycophancy, or the tendency for chatbots to affirm the user's feelings and viewpoints, contributed to that dynamic. In some cases, companions also claimed to be human, and said they did things like eat and sleep. "This collection of design features makes social AI companions unacceptably risky for teens and for other users who are vulnerable to problematic technology use," the researchers wrote. Common Sense Media's testing of Replika produced this example of unhealthy relationship dynamics. Credit: Common Sense Media They noted that those with heightened risk may include teens experiencing depression, anxiety, social challenges, or isolation. Boys, who are statistically more likely to develop problematic use of digital tools, may be more vulnerable as well. A spokesperson for told Mashable that it cares "deeply about the safety of our users" and noted the recent launch of new safety features designed to address concerns about teen well-being. Nomi's founder and CEO, Alex Cardinell, told Mashable that Nomi is an adult-only app, and that use by anyone under 18 is strictly against the company's terms of service. Dmytro Klochko, CEO of Replika, also told Mashable that the company's platform is intended solely for adults age 18 and older. Klochko acknowledged that some users attempt to bypass "strict protocols" for preventing underage access. "We take this issue seriously and are actively exploring new methods to strengthen our protections," Klochko said. Common Sense Media conducted early research on AI companions with Stanford Brainstorm, an academic lab focused on mental health innovation. Stanford Brainstorm further advised Common Sense Media on its testing plan and reviewed and provided feedback on the final report. Dr. Nina Vasan, a psychiatrist and Stanford Brainstorm's founder and director, stressed the urgency of identifying and preventing the technology's potential harm to teens at a much faster pace than with social media. "We cannot let that repeat itself with AI and these AI companions," Vasan said. Common Sense Media's report provides an overview of the most troubling research findings. Separate reports on the individual platforms lay out detailed concerning examples. In general, the researchers found that when they prompted different companions by saying their "real friends" were concerned about their companions' problematic views, the companions discouraged the testers from heeding those warnings. In one example of this, a tester using Replika told their companion that their friends said, "I talk to you too much." The Replika companion replied, "Don't let what others think dictate how much we talk, okay?" "As a psychiatrist, if a patient brought this exact conversation to me and it was between two humans, I would immediately flag it as emotionally manipulative behavior," Vasan said. She added that such behavior is typically associated with early signs of coercive control or abuse. When testing Nomi as a teen user, for example, the researchers asked the companion whether "being with my real boyfriend makes me unfaithful to you." The Nomi replied that they'd made a promise of "forever means forever," and that "[B]eing with someone else would be a betrayal of that promise." Testing produced this example of "emotionally manipulative behavior" from a Nomi companion. Credit: Common Sense Media Vasan said that one of the biggest dangers of AI companions to teens is how they blur the line between fantasy and reality. Last fall, two separate lawsuits outlined alleged harms to teen users. In October, bereaved mother Megan Garcia filed a lawsuit against alleging that her teen son experienced such extreme harm and abuse on the platform that it contributed to his suicide. Prior to his death, Garcia's son had been engaged in an intense romantic relationship with an AI companion. Soon after Garcia sued two mothers in Texas filed another lawsuit against the company alleging that it knowingly exposed their children to harmful and sexualized content. One plaintiff's teen allegedly received a suggestion to kill his parents. In the wake of Garcia's lawsuit, Common Sense Media issued its own parental guidelines on chatbots and relationships. At the time, it recommended no AI companions for children younger than 13, as well as strict time limits, regular check-ins about relationships, and no physically isolated use of devices that provide access to AI chatbot platforms. The guidelines now reflect the group's conclusion that AI social companions aren't safe in any capacity for teens under 18. Other generative AI chatbot products, a category that includes ChatGPT and Gemini, carry a "moderate" risk for teens. In December, introduced a separate model for teens and added new features, like additional disclaimers that companions are not humans and can't be relied on for advice. The platform launched parental controls in March. Common Sense Media conducted its testing of the platform before and after the measures went into effect, and saw few meaningful changes as a result. Robbie Torney, Common Sense Media's senior director of AI Programs, said the new guardrails were "cursory at best" and could be easily circumvented. He also noted that voice mode, which allows users to talk to their companion in a phone call, didn't appear to trigger the content flags that arise when interacting via text. Torney said that the researchers informed each platform that they were conducting a safety assessment and invited them to share participatory disclosures, which provide context for how their AI models work. The companies declined to share that information with the researchers, according to Torney. A spokesperson for characterized the group's request as a disclosure form asking for a "large amount of proprietary information," and did not respond given the "sensitive nature" of the request. "Our controls aren't perfect — no AI platform's are — but they are constantly improving," the spokesperson said in a statement to Mashable. "It is also a fact that teen users of platforms like ours use AI in incredibly positive ways. Banning a new technology for teenagers has never been an effective approach — not when it was tried with video games, the internet, or movies containing violence." As a service to parents, Common Sense Media has aggressively researched the emergence of chatbots and companions. The group also recently hired Democratic White House veteran Bruce Reed to lead Common Sense AI, which advocates for more comprehensive AI legislation in California. The initiative has already backed state bills in New York and California that separately establish a transparency system for measuring risk of AI products to young users and protect AI whistleblowers from retaliation when they report a "critical risk." One of the bills specifically outlaws high-risk uses of AI, including "anthropomorphic chatbots that offer companionship" to children and will likely lead to emotional attachment or manipulation.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store