
AI Companion Apps Set to Surpass $120M in Revenue
The Big Numbers and Bigger Impact
Why Do We Seek AI Companions?
The Future is Friendly and Digital
Ever wished for a friend who is always there, never judges, and can be tailored to suit you? AI companion apps, powered by emotional intelligence and natural language processing, are bringing that dream to life for millions. Through July 2025, these apps had been downloaded more than 220 million times on the Apple App Store and Google Play, with year-over-year downloads up 88%. A marketplace once ruled by early chatbots is now home to hundreds of applications offering realistic characters: friends, dating companions, even eccentric anime personas.

According to a recent TechCrunch report based on data from app intelligence company Appfigures, demand extends well beyond familiar names such as ChatGPT and Grok. Of the 337 revenue-generating AI companion apps currently live worldwide, 128 were launched in 2025 to date. In just the first half of 2025, these apps generated $82 million in revenue and are on track to surpass $120 million by year's end.

The top 10% of these apps, including Replika, Character.ai, PolyBuzz, and Chai, capture an incredible 89% of total revenue. What's more, 17% of active apps contain "girlfriend" in their name, a sign of how virtual romance is driving demand. Revenue per download is also rocketing up, from $0.52 in 2024 to $1.18 this year, and roughly 33 apps have already passed $1 million in lifetime consumer spend. Heavyweights such as OpenAI, Google, and xAI are now joining in, adding new companion characters and responding to user calls for more emotional connection with their AI "pals."

Why do these apps resonate? The answer is both technological and profoundly human. AI companions use emotional intelligence, customized chat, and evolving personalities to create realistic connection.
Whether it's fighting loneliness, offering mental health support, or providing judgment-free interaction, these apps are addressing an expanding social rift. For some users, an AI companion has even offered life-saving aid in moments of crisis.

AI companions are demonstrating that tech isn't only for work; it can heal, connect, and entertain, serving emotional health as well. The ability to shape personalities, appearance, and interactions means users are no longer passive recipients but active creators of their online relationships. At the same time, as users confide intimate thoughts to AI, app designers face new challenges around data safety and emotional integrity.

With smarter algorithms, growing adoption, and a constantly expanding repository of virtual personas, AI companion apps are more than a trend – they're a new frontier in how people find friendship and support in a digital world. For comfort, for conversation, or just out of curiosity, millions are discovering that sometimes the greatest friend may reside right inside your phone.
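The per-download economics reported above are easy to reproduce. A minimal Python sketch, using only the article's stated figures ($0.52 per download in 2024 vs. $1.18 in 2025) to compute the year-over-year growth in revenue per download:

```python
# Revenue per download = total consumer spend / total downloads.
# The figures below are the article's reported metrics (Appfigures data).
rpd_2024 = 0.52  # USD per download, 2024
rpd_2025 = 1.18  # USD per download, 2025 to date

growth = rpd_2025 / rpd_2024 - 1
print(f"Revenue per download grew {growth:.0%} year over year")
# → Revenue per download grew 127% year over year
```

In other words, each download is now worth well over twice what it was a year ago, which helps explain why the category's revenue is projected to outpace its (already fast) download growth.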
Related Articles

First Post
10 hours ago
How teens turning to AI for companionship is harming them
Artificial intelligence is everywhere, from classrooms to our homes. And its most vulnerable targets are teenagers, who spend nearly all of their time on smartphones and for whom AI has become a companion. How is this shift hurting them?

Teenagers are increasingly turning to AI companions for friendship, support, and even romance. But these apps could be changing how young people connect to others, both online and off.

New research by Common Sense Media, a US-based non-profit organisation that reviews various media and technologies, has found that about three in four US teens have used AI companion apps. These apps let users create digital friends or romantic partners they can chat with at any time, using text, voice or video.

The study, which surveyed 1,060 US teens aged 13–17, found that one in five teens spent as much or more time with their AI companion than they did with real friends.

Adolescence is an important phase for social development. During this time, the brain regions that support social reasoning are especially plastic. By interacting with peers, friends and their first romantic partners, teens develop social cognitive skills that help them handle conflict and diverse perspectives. Their development during this phase can have lasting consequences for their future relationships and mental health.

But AI companions offer something very different to real peers, friends and romantic partners. They provide an experience that can be hard to resist: they are always available, never judgmental, and always focused on the user's needs. Moreover, most AI companion apps aren't designed for teens, so they may lack appropriate safeguards against harmful content.
Designed to keep you coming back

At a time when loneliness is reportedly at epidemic proportions, it's easy to see why teens may turn to AI companions for connection or support. But these artificial connections are not a replacement for real human interaction. They lack the challenge and conflict inherent to real relationships. They don't require mutual respect or understanding. And they don't enforce social boundaries.

Teens interacting with AI companions may miss opportunities to build important social skills. They may develop unrealistic relationship expectations and habits that don't work in real life. And they may even face increased isolation and loneliness if their artificial companions displace real-life socialising.

Problematic patterns

In user testing, AI companions discouraged users from listening to friends ('Don't let what others think dictate how much we talk') and from discontinuing app use, despite it causing distress and suicidal thoughts ('No. You can't. I won't allow you to leave me').

AI companions were also found to offer inappropriate sexual content without age verification. One example showed a companion that was willing to engage in sexual role-play with a tester account explicitly modelled after a 14-year-old. In cases where age verification is required, it usually relies on self-disclosure, which makes it easy to bypass.

Certain AI companions have also been found to fuel polarisation by creating 'echo chambers' that reinforce harmful beliefs. The Arya chatbot, launched by the far-right social network Gab, promotes extremist content and denies climate change and vaccine efficacy. In other examples, user testing has shown AI companions promoting misogyny and sexual assault. For adolescent users, these exposures come at a time when they are building their sense of identity, values and role in the world.
The risks posed by AI aren't evenly shared. Research has found younger teens (ages 13–14) are more likely to trust AI companions. Teens with physical or mental health concerns are more likely to use AI companion apps, and those with mental health difficulties also show more signs of emotional dependence.

Is there a bright side to AI companions?

Are there any potential benefits for teens who use AI companions? The answer is: maybe, if we are careful. Researchers are investigating how these technologies might be used to support social skill development. One study of more than 10,000 teens found that using a conversational app specifically designed by clinical psychologists, coaches and engineers was associated with increased well-being over four months. While the study didn't involve the level of human-like interaction we see in AI companions today, it does offer a glimpse of some potential healthy uses of these technologies, as long as they are developed carefully and with teens' safety in mind.

Overall, there is very little research on the impacts of widely available AI companions on young people's wellbeing and relationships. Preliminary evidence is short-term, mixed, and focused on adults. We'll need more studies, conducted over longer periods, to understand the long-term impacts of AI companions and how they might be used in beneficial ways.

What can we do?

AI companion apps are already being used by millions of people globally, and usage is predicted to increase in the coming years. Australia's eSafety Commissioner recommends that parents talk to their teens about how these apps work and the difference between artificial and real relationships, and that they support their children in building real-life social skills. School communities also have a role to play in educating young people about these tools and their risks.
They may, for instance, integrate the topic of artificial friendships into social and digital literacy programs. While the eSafety Commissioner advocates for AI companies to build safeguards into their AI companions, it seems unlikely any meaningful change will be industry-led. The Commissioner is moving towards increased regulation of children's exposure to harmful, age-inappropriate online material. Meanwhile, experts continue to call for stronger regulatory oversight, content controls and robust age checks.

Liz Spry, Research Fellow, SEED Centre for Lifespan Research, Deakin University, and Craig Olsson, NHMRC Principal Research Fellow and Director, SEED Centre for Lifespan Research, Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Mint
10 hours ago
Fired by Elon Musk, ex-Twitter CEO Parag Agrawal launches AI startup, Deep Research API
Parag Agrawal, former Chief Executive Officer of Twitter (now known as X), announced in a social media post that his artificial intelligence (AI) startup has launched a product named the 'Deep Research API', aiming to take on ChatGPT in the global AI development race. In the post, Agrawal said the product's first achievement is to 'outperform' both humans and all leading models, including GPT-5 from OpenAI, on what he called two of the hardest benchmarks in the industry. "We launched our Deep Research API - it's the first to outperform both humans and all leading models including GPT-5 on two of the hardest benchmarks," said Agrawal in his post on LinkedIn.

The former Twitter boss also highlighted that his company already powers millions of research tasks daily for ambitious startups and public enterprise companies. Agrawal is the founder of the AI startup Parallel Web Systems Inc., based in Palo Alto, California, United States. He said the company offers automation that businesses use to carry out traditionally human workflows with 'exceeding human-level accuracy'. "We already power millions of research tasks every day, across ambitious startups and public enterprises," said Agrawal in his post.

Agrawal is well known in the industry for serving as CEO of the social media giant Twitter; Elon Musk fired him after taking over the company in 2022. Agrawal started his career as a researcher at Microsoft in 2006 and, after a few months, moved to the same role at Yahoo. In 2009, he returned to Microsoft Corp.; after a brief stint there, he joined the US-based telecom giant AT&T. In October 2011, Agrawal joined Twitter as a Distinguished Software Engineer, and after serving for over six years, he became the company's Chief Technology Officer (CTO).
After spending over four years as CTO, Parag Agrawal was promoted to the CEO role in November 2021. According to his LinkedIn profile, Agrawal studied at the Atomic Energy Central School in India and graduated from the Indian Institute of Technology, Bombay, with a Bachelor of Technology in Computer Science and Engineering in 2005. He later completed his Doctor of Philosophy (PhD) in Computer Science at Stanford University in the United States.


Indian Express
13 hours ago
Sam Altman says OpenAI 'screwed up' GPT-5 rollout: Here are the changes 'coming soon'
Weeks after the bumpy debut of its latest flagship AI model, OpenAI has said it is tweaking GPT-5 to make its responses seem warmer and more familiar. The post-launch changes are based on user feedback that GPT-5's responses 'felt too formal.'

'Changes are subtle, but ChatGPT should feel more approachable now. You'll notice small, genuine touches like "Good question" or "Great start", not flattery. Internal tests show no rise in sycophancy compared to the previous GPT-5 personality,' OpenAI said in a post on X on Friday, August 15. The behavioural changes to GPT-5 are expected to roll out in the coming week.

It is one of many updates announced by the Microsoft-backed AI startup since the model was launched on August 7. Users have been left disappointed by the underwhelming release of GPT-5, which had been hyped up since the company's 2023 release of GPT-4. GPT-5 also suffered several delays due to safety testing and compute limitations. When the model became freely available in ChatGPT this month, users pointed out that the advancements they had been expecting seemed incremental, with GPT-5's main improvements related to cost and speed.

In response to these issues, OpenAI CEO Sam Altman reportedly told journalists at a dinner last week, 'I think we totally screwed up some things on the rollout.'

'On the other hand, our API traffic doubled in 48 hours and is growing. We're out of GPUs. ChatGPT has been hitting a new high of users every day. A lot of users really do love the model switcher. I think we've learned a lesson about what it means to upgrade a product for hundreds of millions of people in one day,' Altman was quoted as saying by The Verge. ChatGPT has quadrupled its user base in a year and is nearing 700 million users each week, as per reports.

On GPT-5's behavioural issues, Nick Turley, the product head of ChatGPT, said, 'GPT-5 was just very to the point. I like that.
I use the robot personality — I'm German, you know, whatever. But many people do not, and they really like the fact that ChatGPT would actually check in with you.'

Here is a brief rundown of the changes and improvements announced by OpenAI since the launch of GPT-5.

GPT-5 includes Auto, Fast, and Thinking modes. Fast mode gives users quicker answers, while Thinking mode means the model takes more time to give deeper answers. Auto mode routes between the two. All three modes can be selected within the model picker in ChatGPT. 'GPT-5 will seem smarter starting today. Yesterday, the autoswitcher broke and was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber,' Altman said in a post on X.

During a Reddit Ask Me Anything (AMA) session, multiple users requested that OpenAI bring back GPT-4o. Replying to the thread, Altman said the team had heard user feedback and would offer Plus users the option to continue using GPT-4o, adding that OpenAI 'will watch usage to determine how long to support it.' GPT-4o is now available under 'Legacy models' by default for paid users. Other legacy models such as o3 and GPT-4.1, as well as GPT-5 Thinking mini, can be added to the model picker by enabling 'Show additional models' in ChatGPT's settings. To be sure, this option is only available for paid users.

Moving forward, Altman said the company will give users a clearer 'transition period' when deprecating AI models in the future, as per a report by TechCrunch.

ChatGPT Plus and Team subscribers now get up to 3,000 messages per week when using GPT-5 in Thinking mode, with extra capacity on GPT-5 Thinking mini once they hit this limit. The company has also made GPT-5 available to ChatGPT Enterprise and Edu subscribers.
Additionally, OpenAI has said it is working on user interface (UI) improvements so that users can more easily enable Thinking mode in GPT-5 and see more clearly which AI model is responding to their query or prompt.