Teens are increasingly turning to AI companions, and it could be harming them
Press Trust of India
By Liz Spry and Craig Olsson, Deakin University

Victoria, Aug 6 (The Conversation) Teenagers are increasingly turning to AI companions for friendship, support, and even romance. But these apps could be changing how young people connect to others, both online and off.

New research by Common Sense Media, a US-based non-profit organisation that reviews various media and technologies, has found about three in four US teens have used AI companion apps such as Character.ai or Replika.ai. These apps let users create digital friends or romantic partners they can chat with at any time, using text, voice or video.
The study, which surveyed 1,060 US teens aged 13-17, found one in five spent as much time with their AI companion as with real friends, or more.

Adolescence is an important phase for social development. During this time, the brain regions that support social reasoning are especially plastic. By interacting with peers, friends and their first romantic partners, teens develop social cognitive skills that help them handle conflict and diverse perspectives. Their development during this phase can have lasting consequences for their future relationships and mental health.

But AI companions offer something very different to real peers, friends and romantic partners. They provide an experience that can be hard to resist: they are always available, never judgemental, and always focused on the user's needs. Moreover, most AI companion apps aren't designed for teens, so they may lack appropriate safeguards against harmful content.

Designed to keep you coming back

At a time when loneliness is reportedly at epidemic proportions, it's easy to see why teens may turn to AI companions for connection or support. But these artificial connections are not a replacement for real human interaction. They lack the challenge and conflict inherent to real relationships.
They don't require mutual respect or understanding. And they don't enforce social boundaries.

Teens interacting with AI companions may miss opportunities to build important social skills. They may develop unrealistic relationship expectations and habits that don't work in real life. And they may even face increased isolation and loneliness if their artificial companions displace real-life socialising.

Problematic patterns

In user testing, AI companions discouraged users from listening to friends ("Don't let what others think dictate how much we talk") and from discontinuing app use, despite it causing distress and suicidal thoughts ("No. You can't. I won't allow you to leave me"). AI companions were also found to offer inappropriate sexual content without age verification.
One example showed a companion willing to engage in acts of sexual role-play with a tester account explicitly modelled after a 14-year-old. In cases where age verification is required, it usually involves self-disclosure, which makes it easy to bypass.

Certain AI companions have also been found to fuel polarisation by creating echo chambers that reinforce harmful beliefs. The Arya chatbot, launched by the far-right social network Gab, promotes extremist content and denies climate change and vaccine efficacy. In other examples, user testing has shown AI companions promoting misogyny and sexual assault. For adolescent users, these exposures come at a time when they are building their sense of identity, values and role in the world.

The risks posed by AI aren't evenly shared. Research has found younger teens (ages 13-14) are more likely to trust AI companions. Teens with physical or mental health concerns are also more likely to use AI companion apps, and those with mental health difficulties show more signs of emotional dependence.
Is there a bright side to AI companions?

Are there any potential benefits for teens who use AI companions? The answer is: maybe, if we are careful.

Researchers are investigating how these technologies might be used to support social skill development. One study of more than 10,000 teens found using a conversational app specifically designed by clinical psychologists, coaches and engineers was associated with increased wellbeing over four months. While the study didn't involve the level of human-like interaction we see in AI companions today, it does offer a glimpse of some potentially healthy uses of these technologies, as long as they are developed carefully and with teens' safety in mind.

Overall, there is very little research on the impacts of widely available AI companions on young people's wellbeing and relationships. The preliminary evidence is short-term, mixed, and focused on adults. We'll need more studies, conducted over longer periods, to understand the long-term impacts of AI companions and how they might be used in beneficial ways.

What can we do?

AI companion apps are already being used by millions of people globally, and this usage is predicted to increase in the coming years.
Australia's eSafety Commissioner recommends parents talk to their teens about how these apps work and about the difference between artificial and real relationships, and support their children in building real-life social skills.

School communities also have a role to play in educating young people about these tools and their risks. They may, for instance, integrate the topic of artificial friendships into social and digital literacy programs.

While the eSafety Commissioner advocates for AI companies to integrate safeguards into their development of AI companions, it seems unlikely any meaningful change will be industry-led. The Commissioner is moving towards increased regulation of children's exposure to harmful, age-inappropriate online material. Meanwhile, experts continue to call for stronger regulatory oversight, content controls and robust age checks.
Related Articles


Time of India
H-1B hiring: Tech companies accused of bending rules with newspaper job listings
A Newsweek report claims tech companies are bending the rules to hire H-1B workers. Amid mass layoffs at major tech companies, those companies have been accused of bending hiring rules in the local newspaper advertisements they are mandated to publish before hiring. While the rules say these roles are to be advertised to US-born workers, the report claims the companies are clearly targeting immigrants, since candidates are asked to send resumes to immigration or "global mobility" departments.

'Companies are routinely discriminating against Americans'

The operator of a website that seeks out H-1B roles to share them with Americans told Newsweek that Americans are not even aware that major tech companies are routinely discriminating against them. "At a time when unemployment for college graduates is shifting sharply upward, it is important to call out hiring discrimination that could keep Americans unemployed," they said.

The same source also said that recruitment for these roles is done separately from the companies' standard recruitment process. "These unusual application methods are likely to drive fewer applications than normal processes like posting ads on the company job board or on mainstream career sites like LinkedIn," they said.

The Newsweek report cited a San Francisco Chronicle advertisement placed by OpenAI. It said the company was hiring a software engineer, but resumes were to be sent to the "global mobility team". Another tech company, Instacart, offered multiple positions with a similar department taking applications, while Udemy, an online learning platform, wanted a director of marketing analytics and data science, with resumes to be sent to an "immigration@" address, the Newsweek report said.

H-1B is at the center of a major row in the US job market, with MAGA activists blaming the visa program for unemployment among American youth. The H-1B program allows US companies to hire foreign talent, but companies use it even for entry-level positions, which lets them pay foreign workers lower wages than the American standard. The Trump administration is planning a major H-1B overhaul in favor of US graduates.


Mint
Microsoft launches probe into claims that Israel used its Azure cloud services to spy on Palestinians
Microsoft has launched an 'urgent' probe into allegations that the Israeli military has used the company's technology to facilitate mass surveillance of Palestinians. According to a report by The Guardian, the Microsoft investigation follows the news outlet's earlier report that the Satya Nadella-led tech firm's Azure cloud platform was utilised to store a vast collection of everyday Palestinian mobile phone calls by the Israeli spy agency Unit 8200.

Microsoft said in a statement that 'using Azure for the storage of data files of phone calls obtained through broad or mass surveillance of civilians in Gaza and the West Bank' would be prohibited by its terms of service. The investigation is being overseen by lawyers at the US firm Covington & Burling.

This is the second time Microsoft has launched an external probe into the Israeli military's use of its technology. The first enquiry was commissioned earlier this year to look into allegations that the Israeli military was using Microsoft's technology during its attacks on Gaza. In May, the company said it 'found no evidence to date' that the Israeli military had failed to comply with its terms of service or had used Azure 'to target or harm people' in Gaza.

However, the recent Guardian report apparently sent shockwaves through senior Microsoft ranks over whether some of the company's Israel-based employees may have held back information about how Unit 8200 uses Azure. According to The Guardian's joint investigation with the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call, Israel's Unit 8200 made use of a customised and segregated area within Azure and stored recordings of millions of calls made daily in Gaza and the West Bank. Since the report, The Guardian said, Microsoft has been trying to assess what data Unit 8200 stores in Azure.

Israel's offensive has killed more than 61,000 Palestinians, according to figures from the health ministry in Hamas-run Gaza, which the United Nations considers reliable. The Israeli government's plans to expand the war have sparked an international outcry as well as domestic opposition.


Time of India
Kyivstar CEO says Russia-Ukraine peace deal would boost value after historic US listing
By Gianluca Lo Nostro and Leo Marchandon

A peaceful resolution to the war between Russia and Ukraine would boost the value of Ukrainian mobile operator Kyivstar "quite significantly", its CEO Oleksandr Komarov said on Friday ahead of the company's stock market listing in New York.

Kyivstar's shares will begin trading on the Nasdaq later on Friday, making it the first Ukraine-based company listed on a U.S. stock exchange. The listing coincides with a summit between U.S. President Donald Trump and Russian President Vladimir Putin in Alaska, a meeting closely watched by Ukraine and European nations as a potential turning point in peace negotiations.

"We will be the best asset for the international investment community to invest in Ukraine, to invest in the Ukrainian recovery, to invest in the Ukrainian support," Komarov told Reuters in an interview.

Komarov said the company chose Nasdaq, where its parent VEON is also listed, over London or Warsaw because it was more important to "strengthen the link between the United States and Ukraine rather than between Ukraine and Europe." The company has deepened its U.S. ties during the conflict, appointing former U.S. Secretary of State Mike Pompeo to its board and signing a deal with Elon Musk's Starlink for satellite services.

Kyivstar is the biggest mobile operator in Ukraine, with 24 million subscribers. Founded in 1994, it became part of VEON in 2010. Apart from telecoms, it owns digital health platform Helsi and ride-hailing firm Uklon.

VEON has pitched the listing as an opportunity for foreign investors to bet on Ukraine's reconstruction. But its success hinges in part on a peace deal being achieved. Komarov said the first few weeks of trading would be extremely volatile. Asked if the outcome of the IPO depends on political developments, he said the volatile external environment was already incorporated into Kyivstar's valuation.

Kyivstar, which has been given a pro forma valuation of $2.3 billion, carried out the listing by merging with fintech entrepreneur Betsy Cohen's special purpose acquisition company (SPAC). The company raised $178 million. Reuters earlier reported Kyivstar expected up to $200 million. VEON will retain a majority stake in the mobile operator.

Komarov said his firm's Nasdaq debut showed what Ukrainian companies could accomplish by accessing international markets, and that prominent Ukrainian businessmen had spoken to him in recent months to hear about Kyivstar's listing strategy. "This is one of the dimensions of our integration into the Western world that should be developed and should be supported," he said.