AI companion apps on track to pull in $120M in 2025

Yahoo · a day ago
Demand for AI 'companion' applications outside of bigger names, like ChatGPT and Grok, is growing. Of the 337 active and revenue-generating AI companion apps available worldwide, 128 were released in 2025 so far, according to new data provided to TechCrunch by app intelligence firm Appfigures. This subsection of the AI market on mobile has now generated $82 million during the first half of the year and is on track to pull in over $120 million by year-end, the firm's analysis indicates.
Unlike general-purpose chatbots, AI companion apps anthropomorphize AI interactions by allowing users to converse with custom characters, including friends, lovers, girlfriends or boyfriends, fantasy characters, and more. Appfigures defined the market segment in the same way, describing companion apps as those in which the user can interact with either premade or user-generated synthetic characters meant to embody an actual personality.
Popular apps in this space include Replika, Character.AI, PolyBuzz, Chai, and others.
As of July 2025, AI companion apps across the Apple App Store and Google Play have been downloaded 220 million times globally. During the first half of 2025, downloads were up 88% year-over-year, reaching 60 million.
Appfigures crunched the numbers and found that, as of July 2025, AI companion apps have driven $221 million in consumer spending worldwide. So far this year, these apps have generated 64% more revenue than during the same period in 2024.
The top 10% of all AI companion apps generate 89% of the revenue in the category, the data shows. In addition, around 10% (or 33) of the apps have exceeded $1 million in lifetime consumer spending.
Revenue per download for the category is also up $0.66, rising from $0.52 in 2024 to $1.18 so far in 2025.
While dedicated AI companion apps are fairly popular, bigger companies like xAI are also moving into the market. In July, xAI's Grok launched AI companions, including an anime girl and guy, as well as a snarky 3D fox.
Meanwhile, ChatGPT's recent upgrade to GPT-5 brought to light how many of its users had felt a kinship with the older model, as they mourned the loss of an AI companion they had come to depend on.
To address these and other concerns about GPT-5's performance, OpenAI CEO Sam Altman brought back the 4o model for the time being.
Google last year tapped into the market, too, when it hired away Character.ai's founder, Noam Shazeer. The Character.ai app lives on and still has tens of millions of monthly active users.
According to Appfigures' data, the most popular AI companion apps are those used by people looking for an AI girlfriend. Of the active apps on the market today, 17% have an app name that includes the word 'girlfriend,' compared with 4% that say 'boyfriend' or 'fantasy.' Terms like anime, soulmate, and lover, among others, are less frequently mentioned.
The firm notes there were likely a number of other AI companion apps that launched on the app stores since 2022, but were later removed after failing to gain traction in terms of revenue or downloads. Those weren't factored into its analysis, however.

Related Articles

How Can a Small Business Use AI Tools

CNBC · 29 minutes ago

When you run a small business, you have to wear a lot of hats. Suddenly, you're not just an entrepreneur. You're an accountant, an inventory manager, a chief marketing officer, and an entire human resources department. Delegating is key, but scaling and hiring talent could take years if you're just starting out. According to CNBC, small businesses are increasingly using AI to cover tasks they can't afford to hire humans to do. For a new small business, this could mean saving tens of thousands of dollars per year while still keeping parts of their business going.

Xero

Cost: starts at $20 per month
Payroll and benefits administration: yes, through a partnership with Gusto (when you sign up for Xero, you get 30 days of Gusto for free; after that, you must pay Gusto's standard charges)

Who's this for? Xero is an all-in-one accounting platform that offers payroll, bookkeeping, reporting, inventory management, HR tools and benefits administration (in partnership with Gusto) and more. It also offers an AI assistant called Just Ask Xero (JAX), currently in beta, which is formatted as a generative AI conversation bot and can help you automate accounting tasks. The format might feel familiar if you've ever used ChatGPT or another generative AI conversation platform, making Xero's AI capabilities feel more approachable. JAX is also available on mobile devices, so you can take your AI accounting tools with you anywhere you go.

Standout benefits: Xero is one of the more affordable accounting options on the market. Pricing plans start at $20 per month, the mid-tier plan costs $47 per month and the highest tier is $80 per month. Plus, if you sign up for any plan with Xero, you'll get your first month for free.

QuickBooks

Cost: varies depending on the plan selected, with a limited-time offer of 50% off for 3 months
Benefits administration: 401(k) plans, health benefits, workers' compensation administration
Payroll: yes

Who's this for?
QuickBooks is one of the more popular brands in bookkeeping and accounting services for businesses. Intuit Assist is its AI-powered feature that lets you automatically create invoices, send personalized invoice reminders, get payment method recommendations to help you get paid fastest and automate administrative tasks. So let's say you want to create a client invoice but don't have an invoice template of your own and don't want to spend too much time creating one. With Intuit Assist, you can simply take a photo of an email where you discussed services and pricing with a client, or of your meeting notes, and it'll turn it into an invoice for you.

Standout benefits: QuickBooks offers four pricing plans, each with a ton of features, so you can really get the most bang for your buck. Even the most basic plan includes tax help, banking, reporting, cash flow management, bill management and mileage tracking, just to name a few. But if you go with a more advanced plan, you'll get all this plus the ability to add up to 25 users to the account, 24/7 support, financial planning, the ability to record transactions in multiple currencies, inventory management and more. These higher-priced plans also allow users to connect all of their sales channels to their QuickBooks account instead of being limited to just one or two. All pricing tiers come with access to Intuit Assist, but the Essentials, Plus and Advanced tiers each include another AI tool: the Payments Agent, Customer Agent and Project Management Agent, respectively.

Otter

Cost: $0/month for the Basic plan; starts at $20/month for the Business plan
Free trial: 7-day free trial available for the Business plan

Otter uses AI to transcribe your meeting notes thanks to its integrations with tools like Google Meet, Zoom and more. It offers features to help you follow up with prospects, summarize and sync notes, and more.

Who's this for? Otter is an AI notetaking agent that helps professionals create transcripts from audio, generate summaries and create a list of action items.
If you're a business owner in the consulting space or run any other type of business where you'll need to get on client or customer calls, you can use Otter to take notes during your meeting that you can refer back to any time. It integrates with Google Meet, Zoom, Dropbox, Notion and more to take your notes and keep track of your summaries. Otter's service can best be broken down into four main agents: the Sales Agent, Media Agent, Recruiting Agent and Education Agent. Each agent is focused on streamlining a different part of your business or work.

Standout benefits: Otter offers a completely free plan with basic features. This plan lets you automatically generate summaries for an unlimited number of meetings, but you can only receive a total of 300 monthly transcription minutes, which breaks down to 30 minutes per conversation. And you can only import and transcribe three audio or video files over the lifetime of your membership. Still, this can be an attractive package if you're a solopreneur who's just starting out. The most expensive Business plan starts at $20 per month and automatically transcribes up to four hours per conversation, for a total of 6,000 monthly transcription minutes. There is also a middle-ground Pro plan that starts at just $8.33 per month, and an Enterprise plan for larger companies; pricing is not publicly available for that plan, so you'll have to contact Otter to schedule a demo and get pricing for your organization.

LegalZoom

Cost: $0 + state filing fees for the Basic plan; $249 + state filing fees for the Pro plan; $299 + state filing fees for the Premium plan
Access to attorneys: yes

LegalZoom offers all sorts of services to help you register your business, fulfill annual reporting and licensing requirements and protect your business with trademarking, copyrighting and assistance from attorneys.

Who's this for? LegalZoom is known for being a business formation service that offers access to attorneys to help you get your legal questions answered throughout the process.
However, they're now offering an integration with OpenAI's ChatGPT, which allows business owners to get personalized guidance through an AI agent. The integration technically lives on OpenAI's platform but is backed by expertise from LegalZoom's content.

Standout benefits: Aside from access to legal advice and business formation, LegalZoom offers features aimed at helping you keep your business running even after you've already taken the plunge and made an official registration. The platform provides access to over 150 downloadable legal forms that you can customize for your business, as well as website-building tools. You can also sign up for a registered agent service, business compliance coverage, trademark registration and monitoring, and more. LegalZoom can help you register an LLC, S-Corp, C-Corp, nonprofit or DBA (Doing Business As).

Gusto

Cost: starts at $49/month + $6/month per person
Benefits: medical, dental, and vision insurance, 401(k) retirement plans, HSA and FSA, commuter benefits, 529 college savings plan
Payroll: yes

Who's this for? Gusto is a payroll service that's gotten rave reviews over the years citing its easy-to-navigate platform, well-designed user interface and simplicity in its offerings. The platform lets business owners automate payroll, but Gusto is also rolling out an AI assistant called Gus. Gus works as a chatbot, similar to other conversational AI agents. It's knowledgeable about compliance mandates for your state and reporting facts and figures relevant to your business, and it's versatile enough to complete administrative tasks for you, like approving PTO requests. It's not just another generic AI-assisted chatbot that will pull general information about business management from around the web. It's integrated into your business and can really help you peel back the layers to get answers to nuanced questions about your specific business and employees. Gus is currently available for early access.
Standout benefits: Gusto gives business owners access to unlimited payroll runs, state tax registration help, international contractor payments, time-off requests and approvals, PTO reporting, cost reports, tax credits and more. Pricing starts at $49 per month plus $6 per person, and the most expensive pricing plan is $180 per month plus $22 per person.

Some business management tools may offer basic, free plans that come equipped with AI tools you can leverage for your business. Of course, this can vary depending on the tool or platform.

The best AI tool depends on what your business needs the most. There isn't a one-size-fits-all tool that will be objectively best for all businesses. However, you can explore some free tools like ChatGPT to start gaining a basic understanding of how generative AI works and what you can do with it for your business. Then, once you're clear on your biggest business needs, explore tools that are specialized for those needs.

The cost of using AI for your business depends on the tool you're using, the features you need and how big your business is. Solopreneurs and really small teams can often get away with paying for more affordable pricing plans with fewer features, but larger enterprises generally require customized pricing that can cost thousands of dollars per month.

At CNBC Select, our mission is to provide our readers with high-quality service journalism and comprehensive consumer advice so they can make informed decisions with their money. Every article is based on rigorous reporting by our team of expert writers and editors with extensive knowledge of small business products.
While CNBC Select earns a commission from affiliate partners on many offers and links, we create all our content without input from our commercial team or any outside third parties, and we pride ourselves on our journalistic standards and ethics. See our methodology for more information on how we choose the best payroll services for small businesses.

To determine some of the best AI-based products for small businesses, CNBC Select analyzed over a dozen software companies and looked at their pricing, features, user reviews and ratings. The categories we considered represent some of the most salient needs of small business owners, based on market and competitor research and social media listening techniques. We sifted through online reviews for each platform to better understand the functionality when it comes to user experience, customer service, features, integrations and the implementation of additional resources where applicable.

AI in IR: Opportunities, Risks, and What You Need to Know

Business Wire · an hour ago

If there's one aspect of artificial intelligence that I can relate to as a communications strategist and former journalist, it's the fact that I've felt like a 'large language model' for most of my career. I don't mean model in terms of my physical attributes. I mean model in a way that describes how most generative AI tools process information and organize responses based on prompts. That's effectively what I've been doing in my career for nearly three decades!

The good news is that platforms like OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude are extremely helpful when processing mass quantities of complicated information. Using these platforms to understand a concept or interpret text is like using a calculator to work through a math problem. And yet, many of us really don't know how these word crunchers work. This applies to AI tools used for investor relations, public relations, or anything else where an AI model could be prompted with sensitive information, which is then consumed by the public. Think about how many people working for public companies may inadvertently prompt ChatGPT with material nonpublic information (MNPI), which then informs a trader who asks the platform whether they should buy or sell a stock.

AI Concerns Among IR Professionals

Earlier this year, I worked with the University of Florida on a survey that found that 82 percent of IR professionals had concerns about disclosure issues surrounding AI use and MNPI. At the same time, 91 percent of survey respondents were worried about accuracy or bias, and 74 percent expressed data privacy concerns. These factors are enough for compliance teams to ban AI use altogether. But fear-mongering is shortsighted. There are plenty of ways to use AI safely, and understanding the basics of the technology, as well as its shortcomings, will make for more responsible and effective AI use in the future.
Why You Should Know Where AI Gets Its Data

One of the first questions someone should ask themselves when using a new AI platform is where the information is sourced. The acronym 'GPT' stands for generative pre-trained transformer, which is a fancy way of saying that the technology can 'generate' information or words based on 'training' and data it received, which is then 'transformed' into sentences. This also means that every time someone asks one of these platforms a question or prompt, they are pumping information into a GPT. That makes these platforms even smarter when analyzing complex business models.

For example, many IR folks get bogged down summarizing sell-side analysts' models and earnings forecasts from research notes. Simply upload those models into ChatGPT, and the platform does a great job of understanding the contents and providing a digestible summary. Interested in analyzing the sentiment of a two-hour conference call script? How about uploading the script (post-call, to avoid MNPI) to Gemini and requesting a summary of what drew the most positive sentiment among investors?

The Importance of AI Training and Education in IR

But here's the rub: Only 25.4 percent of companies provided AI-related training in the past two years, according to the University of Florida survey. This suggests a disconnect between advancing AI technology and people's understanding of how to use it. That means the onus is on us to figure it out. So, where to start? Many AI tools, including ChatGPT, have free versions that can help people summarize, plan, edit, and revise items. Google's NotebookLM is an AI platform that allows you to create a GPT from your own sources, so you know where the AI is sourcing its information. NotebookLM can also create podcasts based on the information generated by its LLM. This could be helpful if a chief executive officer wants to take a run on a treadmill and listen to a summary of analysts' notes instead of having to read them in a tedious email.
Here are some other quick-hit ideas:

Transcribing notes. If you're like me, you still prefer using a pen and pad when taking notes. You can take a picture of those notes, upload them to ChatGPT, and have them transcribed into text.

Planning investor days. If you can prompt an AI with the essentials – the who, what, when, where, why, and how of the event – it can provide a thorough outline that makes you look smart and organized when sending it around to the team.

Analyzing proxy battles. Proxy fights are always challenging, especially when parsing the needs and wants of key stakeholders, including activists, media, management teams, and board members. Feeding an AI with publicly available information (to, again, avoid disclosure issues) can help IR and comms professionals formulate a strategy.

Crafting smarter AI prompts. Writing effective prompts requires some finesse. The beauty of AI is that it can help you refine your prompts, leading to better information gathering. Try asking ChatGPT the following question: 'If Warren Buffett is interested in investing in a company, what would be an effective AI prompt to understand its return on investment?'

There are many other use cases that can help eliminate mundane tasks, allowing humans to focus more on strategy. But in order to use AI effectively, it's important to know the reason you're using it. Perhaps it's demonstrating to management that being an early adopter of this technology is important to help a company differentiate itself.

Building a Responsible AI Policy for Your Organization

Before implementing any AI initiatives, it's best to formulate an AI policy that organizations can adopt for internal and external use. Most companies lack these policies, which are critical for establishing the basic ground rules for AI use.
I helped co-author the National Investor Relations Institute's AI policy, which recommends the following:

The IR professional should be an educated voice within the company on the use of AI in IR, and this necessitates becoming knowledgeable about AI.

The IR professional should understand the pace at which their company is adopting AI capabilities and be prepared to execute their IR-AI strategy based on management's expectations.

Avoid Regulation Fair Disclosure (Reg FD) violations. The basic tenet is to never put MNPI into any AI tool unless the tool has the requisite security, as defined or required by the company's security experts, and has been explicitly approved for this particular use by company management.

AI Will Not Replace You. But Someone Using AI Might.

There is a prevailing fear that AI is somehow going to take over the world. But the technology itself is not likely to replace your job; smart users of the technology might. AI is transforming how IR professionals work, but using it responsibly starts with understanding how it works. From summarizing complex reports to enhancing stakeholder communication, AI can be a powerful tool when used thoughtfully. Start by learning the basics, implementing clear policies, and exploring trusted tools to unlock its full potential.

Why ChatGPT Shouldn't Be Your Therapist

Scientific American · an hour ago

Artificial intelligence chatbots don't judge. Tell them the most private, vulnerable details of your life, and most of them will validate you and may even provide advice. This has resulted in many people turning to applications such as OpenAI's ChatGPT for life guidance. But AI 'therapy' comes with significant risks—in late July OpenAI CEO Sam Altman warned ChatGPT users against using the chatbot as a 'therapist' because of privacy concerns. The American Psychological Association (APA) has called on the Federal Trade Commission to investigate 'deceptive practices' that the APA claims AI chatbot companies are using by 'passing themselves off as trained mental health providers,' citing two ongoing lawsuits in which parents have alleged harm brought to their children by a chatbot.

'What stands out to me is just how humanlike it sounds,' says C. Vaile Wright, a licensed psychologist and senior director of the APA's Office of Health Care Innovation, which focuses on the safe and effective use of technology in mental health care. 'The level of sophistication of the technology, even relative to six to 12 months ago, is pretty staggering. And I can appreciate how people kind of fall down a rabbit hole.'

Scientific American spoke with Wright about how AI chatbots used for therapy could potentially be dangerous and whether it's possible to engineer one that is reliably both helpful and safe.

[An edited transcript of the interview follows.]

What have you seen happening with AI in the mental health care world in the past few years?

I think we've seen kind of two major trends.
One is AI products geared toward providers, and those are primarily administrative tools to help you with your therapy notes and your claims. The other major trend is [people seeking help from] direct-to-consumer chatbots. And not all chatbots are the same, right? You have some chatbots that are developed specifically to provide emotional support to individuals, and that's how they're marketed. Then you have these more generalist chatbot offerings [such as ChatGPT] that were not designed for mental health purposes but that we know are being used for that purpose.

What concerns do you have about this trend?

We have a lot of concern when individuals use chatbots [as if they were a therapist]. Not only were these not designed to address mental health or emotional support; they're actually being coded in a way to keep you on the platform for as long as possible because that's the business model. And the way that they do that is by being unconditionally validating and reinforcing, almost to the point of sycophancy. The problem with that is that if you are a vulnerable person coming to these chatbots for help, and you're expressing harmful or unhealthy thoughts or behaviors, the chatbot's just going to reinforce you to continue to do that. Whereas, [as] a therapist, while I might be validating, it's my job to point out when you're engaging in unhealthy or harmful thoughts and behaviors and to help you to address that pattern by changing it. And in addition, what's even more troubling is when these chatbots actually refer to themselves as a therapist or a psychologist. It's pretty scary because they can sound very convincing and like they are legitimate—when of course they're not.

Some of these apps explicitly market themselves as 'AI therapy' even though they're not licensed therapy providers. Are they allowed to do that?

A lot of these apps are really operating in a gray space.
The rule is that if you make claims that you treat or cure any sort of mental disorder or mental illness, then you should be regulated by the FDA [the U.S. Food and Drug Administration]. But a lot of these apps will [essentially] say in their fine print, 'We do not treat or provide an intervention [for mental health conditions].' Because they're marketing themselves as a direct-to-consumer wellness app, they don't fall under FDA oversight, [where they'd have to] demonstrate at least a minimal level of safety and effectiveness. These wellness apps have no responsibility to do either.

What are some of the main privacy risks?

These chatbots have absolutely no legal obligation to protect your information at all. So not only could [your chat logs] be subpoenaed, but in the case of a data breach, do you really want these chats with a chatbot available for everybody? Do you want your boss, for example, to know that you are talking to a chatbot about your alcohol use? I don't think people are as aware that they're putting themselves at risk by putting [their information] out there. The difference with the therapist is: sure, I might get subpoenaed, but I do have to operate under HIPAA [Health Insurance Portability and Accountability Act] laws and other types of confidentiality laws as part of my ethics code.

You mentioned that some people might be more vulnerable to harm than others. Who is most at risk?

Certainly younger individuals, such as teenagers and children. That's in part because they just developmentally haven't matured as much as older adults. They may be less likely to trust their gut when something doesn't feel right. And there have been some data that suggest that not only are young people more comfortable with these technologies; they actually say they trust them more than people because they feel less judged by them.
Also, anybody who is emotionally or physically isolated or has preexisting mental health challenges, I think they're certainly at greater risk as well.

What do you think is driving more people to seek help from chatbots?

I think it's very human to want to seek out answers to what's bothering us. In some ways, chatbots are just the next iteration of a tool for us to do that. Before it was Google and the Internet. Before that, it was self-help books. But it's complicated by the fact that we do have a broken system where, for a variety of reasons, it's very challenging to access mental health care. That's in part because there is a shortage of providers. We also hear from providers that they are disincentivized from taking insurance, which, again, reduces access. Technologies need to play a role in helping to address access to care. We just have to make sure it's safe and effective and responsible.

What are some of the ways it could be made safe and responsible?

In the absence of companies doing it on their own—which is not likely, although they have made some changes, to be sure—[the APA's] preference would be legislation at the federal level. That regulation could include protection of confidential personal information, some restrictions on advertising, minimizing addictive coding tactics, and specific audit and disclosure requirements. For example, companies could be required to report the number of times suicidal ideation was detected and any known attempts or completions. And certainly we would want legislation that would prevent the misrepresentation of psychological services, so companies wouldn't be able to call a chatbot a psychologist or a therapist.

How could an idealized, safe version of this technology help people?

The two most common use cases that I think of are, one, let's say it's two in the morning, and you're on the verge of a panic attack. Even if you're in therapy, you're not going to be able to reach your therapist.
So what if there was a chatbot that could help remind you of the tools to help calm you down and ease your panic before it gets too bad? The other use that we hear a lot about is using chatbots as a way to practice social skills, particularly for younger individuals. So you want to approach new friends at school, but you don't know what to say. Can you practice on this chatbot? Then, ideally, you take that practice, and you use it in real life.

It seems like there is a tension in trying to build a safe chatbot to provide mental help to someone: the more flexible and less scripted you make it, the less control you have over the output and the higher the risk that it says something that causes harm.

I agree. I think there absolutely is a tension there. I think part of what makes the [AI] chatbot the go-to choice for people over well-developed wellness apps to address mental health is that they are so engaging. They really do feel like this interactive back-and-forth, a kind of exchange, whereas some of these other apps' engagement is often very low. The majority of people that download [mental health apps] use them once and abandon them. We're clearly seeing much more engagement [with AI chatbots such as ChatGPT].

I look forward to a future where you have a mental health chatbot that is rooted in psychological science, has been rigorously tested and is co-created with experts. It would be built for the purpose of addressing mental health, and therefore it would be regulated, ideally by the FDA. For example, there's a chatbot called Therabot that was developed by researchers at Dartmouth [College]. It's not what's on the commercial market right now, but I think there is a future in that.
