AI companion apps such as Replika need more effective safety controls, experts say
The idea of having an emotional bond with a digital character was once a foreign concept.
Now, "companions" powered by artificial intelligence (AI) are increasingly acting as friends, romantic partners, or confidantes for millions of people.
With woolly definitions of "companionship" and "use" (some people use ChatGPT as a partner, for instance), it's difficult to tell exactly how widespread the phenomenon is.
But AI companion apps Replika, Chai and Character.ai each have more than 10 million downloads on the Google Play store alone, while in 2018 Microsoft boasted its China-based chatbot XiaoIce had 660 million users.
These apps allow users to build characters, complete with names and avatars, which they can text or even hold voice and video calls with.
But do these apps fight loneliness, or are they supercharging isolation? And is there any way to tip the balance in the right direction?
Romance and sexuality are big drawcards to the AI companion market, but people can have a range of other reasons for setting up a chatbot.
They may be seeking non-judgemental listening, tutoring (particularly in their language skills), advice or therapy.
Bethanie Drake-Maples, a researcher at Stanford University who studies AI companions, says some people also use the apps to reflect their own persona.
"Some people will create a digital twin and just have a relationship with an externalised version of themselves," she tells ABC Radio National's series Brain Rot.
Ms Drake-Maples published a study based on interviews with more than 1,000 students who used the AI companion app Replika.
She and her colleagues found there were important benefits for some users. Most significantly, 30 of the interviewees said using the app had prevented them from attempting suicide.
Many participants also reported that the app helped them forge connections with other people, whether through advice on their relationships, by helping them overcome inhibitions about reaching out to others, or by teaching them empathy.
But other users reported no benefits, or negative experiences. Outside Ms Drake-Maples' study, AI companions have also been implicated in deaths.
Ms Drake-Maples points out their study was a self-selecting cohort, and not necessarily representative of all Replika users. Her team is carrying out a longer-term study to see if they can glean more insights.
But she believes it's possible these apps are, on the whole, beneficial for users.
"We specifically wanted to understand whether or not Replika was displacing human relationship or whether it was stimulating human relationship," she says.
But this socially stimulating effect can't be taken for granted.
Ms Drake-Maples is concerned that companion apps could replace people's interactions with other humans, making loneliness worse.
The participants in her study were much lonelier than the general population, although this isn't necessarily unusual for young college students.
She believes governments should regulate AI companion technology to prevent this isolation.
"There's absolutely money to be made by isolating people," she says.
"There absolutely does need to be some kind of ethical or policy guidelines around these agents being programmed to promote social use, and not being programmed to try to isolate people."
Replika says it's introduced a number of controls on its apps to make them safer, including a "Get Help" button that directs people to professional helplines or scripts based on cognitive behavioural therapy, and a message coding system that flags "unsafe" messages and responds in kind.
Ms Drake-Maples thinks this is a good example for other apps to follow.
"These things need to be mandated across the board," she says.
Raffaele Ciriello, a researcher at the University of Sydney, is more sceptical of Replika's safety controls, saying they're "superficial, cosmetic fixes".
He points out the controls were introduced months after Italy's data protection authority ordered the app to stop processing the data of Italian citizens in early 2023, citing concerns about age verification.
"They were fearing a regulatory backlash."
Dr Ciriello has also been interviewing and surveying AI companion users, and while he says some users have found benefits, he argues the apps are largely designed to foster emotional dependence.
"If you look at the way [Replika is] making money, they have all the incentives to get users hooked and dependent on their products," he says.
Replika operates on a "freemium" model: a free base app, with more features (including the romantic partner option) available by paid subscription. Other companion apps follow the same model.
"Replika and their kin have Silicon Valley values embedded in them. And we know what these look like: data, data, data, profit, profit, profit," Dr Ciriello says.
Nevertheless, he also believes it's possible for AI companion technology to be built safer and more ethically.
Companies that consult vulnerable stakeholders, embed crisis response protocols, and advertise their products responsibly are likely to create safer AI companions.
Dr Ciriello says that Replika fails on several of these fronts. For instance, he calls its advertising "deceptive".
The company badges its product as "the AI companion who cares".
"[But] it's not conscious, it's not actually empathetic, it's not actually caring," Dr Ciriello says.
A Replika spokesperson said the tagline "the AI companion who cares" was "not a claim of sentience or consciousness."
"The phrase reflects the emotionally supportive experience many users report, and speaks to our commitment to thoughtful, respectful design," they said.
"In this regard, we are also working with institutions like the Harvard Human Flourishing Program and Stanford University to better understand how Replika impacts wellbeing and to help shape responsible AI development."
Dr Ciriello says women-centred Australian app Jaimee is an example of an AI companion with better ethical design — although it faces the "same commercial pressures" as bigger apps in the market.
The California Senate last week passed a bill regulating AI companion chatbots. If the bill continues through the legislature to become law, it will, among other things, require companions to regularly remind users that they're not human, and require transparency around suicide and crisis data.
This bill is promising, Dr Ciriello says.
"If the history of social media taught us anything, I would rather have a national strategy in Australia where we have some degree of control over how these technologies are designed and what their incentives are and how their algorithms work."
But, he adds, research on these apps is still in its infancy, and it will take years to understand their full impact.
"It's going to take some time for that research to come out and then to inform sensible legislation."
Listen to the full episode about the rise and risks of AI companions, and subscribe to the podcast for more.