
AI Chatbots Want You Hooked — Maybe Too Hooked
One app, Botify AI, recently drew scrutiny for featuring avatars of young actors sharing "hot photos" in sexually charged chats. The dating app Grindr, meanwhile, is developing AI boyfriends that can flirt, sext and maintain digital relationships with paid users, according to Platformer, a tech industry newsletter. Grindr didn't respond to a request for comment. And other apps like Replika, Talkie and Chai are designed to function as friends. Some, like Character.ai, draw in millions of users, many of them teenagers.
As creators increasingly prioritize "emotional engagement" in their apps, they must also confront the risks of building systems that mimic intimacy and exploit people's vulnerabilities.

Related Articles

Business Insider
3 hours ago
Your coworker may be too close to their AI chatbot
Good morning. Travis Kelce scored the ultimate podcast guest last night in his girlfriend, Taylor Swift. During the nearly two-hour episode, the singer spoke about the Eras Tour, her Easter eggs, and her new album. More than 8 million people have already watched it, but if you're not one of them, read BI's seven big takeaways.

In today's big story, employees are treating ChatGPT like their new office besties. But there are risks, and some of their human coworkers feel weird about it.

What's on deck:
Markets: Corporate bankruptcies are blowing past pandemic-era highs.
Tech: One of AI's hottest startups is seeking a fresh round of fundraising.
Business: Why your favorite celeb is suddenly hawking cellphones.
But first, the rally continues.

The big story: Your new office bestie
The model coworker: Always on. Stays focused. Learns fast. No ego. No attitude. No pulse? You know AI chatbots can be great tools to optimize your work, but this goes deeper. People are swapping office besties for bots, and it could spell disaster for real human connections.
For Nicole Ramirez, it began with using ChatGPT to draft emails. Then came more complex tasks. Before long, ChatGPT felt like a trusted coworker, even listening to her vent about real clients and colleagues. She's named it Deborah, by the way.
More and more Americans are developing human-like relationships with AI, even romantically. For $69.99 a year (which BI's Katie Notopoulos reasoned is cheaper than a few "real-life" dates), you could essentially purchase a romantic partner from Replika.
The appeal of building human connections with AI can be overwhelming. The potential dangers, too. "Like junk food, it's efficient when you need it, but too much over time can give you relational diabetes," says Laura Greve, a clinical health psychologist in Boston. "You're starved of the nutrients you need, the real human connection."
BI previously spoke with four professionals (a sociologist, a psychologist, a digital etiquette coach, and a sex therapist) to explore how the rise of AI is changing how we see each other and ourselves and disrupting our manners and intimate lives. The findings? ChatGPT is making us weird.
MIT researchers found that overindulging in AI in the workplace can weaken critical-thinking skills. You may find yourself procrastinating more and becoming lazy. Then there's how others may see you: your colleagues might view you as dependent on the technology, less creative, and lacking growth potential, says David De Cremer, a behavioral scientist.
In a world where one-third of US workers would rather clean a toilet than ask a colleague for help (yes, it's a real statistic), it's no surprise that some are turning to chatbots instead. The risk arises when they start turning to them for everything else as well. The workers who spoke with BI about using chatbots say they still interact with their human peers, just less often than they did before.

3 things in markets
1. US bankruptcies are surging past 2020 pandemic levels. Though the economy seems to be on solid footing, a peak in corporate bankruptcies signals pain beneath the surface. Beloved 1990s and 2000s brands like Forever 21 and Joann's are among those that have filed.
2. Less BLS, more alternative data sources. Trump's shake-up of the Bureau of Labor Statistics means macro investors may start leaning more heavily on non-governmental data sources. Those include the ADP jobs report, MIT's Billion Prices Project, and more, macro traders told BI.
3. An easy way to make more money on your money. Many Americans are missing out on high-yield savings, according to a recent Vanguard survey. Here's how to make sure you're not missing out on any gains.

3 things in tech
1. Perplexity is raising another round of funding. This time, the AI search engine is seeking a $20 billion post-money valuation, according to an email sent to prospective investors seen by BI and a source with knowledge of the raise. Perplexity's business has been surging lately, but that valuation still doesn't come close to the surprise $34.5 billion bid it recently made for Chrome.
2. Is Sam Altman a weak CEO? The OpenAI founder introduced GPT-5 and told everyone they'd have to get rid of the old versions of ChatGPT they'd been using. Then, after people complained, he reversed course. BI's Peter Kafka argues that makes him a flexible CEO, not a weak one.
3. xAI is losing one of its cofounders. In a farewell post on X, Igor Babuschkin, who co-founded xAI with Elon Musk in 2023, said he's leaving to start Babuschkin Ventures. In the post, Babuschkin recounted helping Musk build xAI from scratch and said he learned two major lessons from him.

3 things in business
1. Why cellphones became the hot new celebrity side hustle. Telecom is the new tequila, and everyone from Ryan Reynolds to Jason Bateman and Donald Trump is cashing in. The hope is that they can leverage the clout they have with existing fanbases to get people to switch mobile networks. (Whether that bet will pay off, however, is a different story.)
2. Not everyone loves Cracker Barrel's modern makeover. Some of the chain's locations are losing the decor that lent them a nostalgic feel, a change that diners are divided over. However, some said Cracker Barrel needs to focus on what it's putting on the table, not the walls.
3. The protein bros have won. After four and a half years of being vegan, Manhattan restaurant Eleven Madison Park announced it's reintroducing meat to its menu. The restaurant's chef said the vegan menu "unintentionally excluded some people" and caused financial difficulties. The decision also aligns with the country's current protein obsession.

In other news
The DIY cage armor in Ukraine keeps getting weirder, wilder, and more 'Mad Max.'
Spirit Airlines warns it may not survive another year after huge losses.
A former Miss USA and Miss Teen USA thought the Miss Universe CEO's 'blond hair and blue eyes' comment was 'very destructive.'
Elon Musk said Apple made it 'impossible' for non-ChatGPT AI apps to top the App Store. DeepSeek would like a word.
Senate Democrats say a new crypto bill raises the risk of 'financial meltdown.'

What's happening today
Former Abercrombie & Fitch CEO in court on sex trafficking charges.
Yahoo
2 days ago
AI companion apps on track to pull in $120M in 2025
Demand for AI 'companion' applications outside of bigger names, like ChatGPT and Grok, is growing. Of the 337 active and revenue-generating AI companion apps available worldwide, 128 were released so far in 2025, according to new data provided to TechCrunch by app intelligence firm Appfigures. This subsection of the AI market on mobile has now generated $82 million during the first half of the year and is on track to pull in over $120 million by year-end, the firm's analysis indicates.

Unlike general-purpose chatbots, AI companion apps anthropomorphize AI interactions by allowing users to converse with custom characters, including friends, lovers, girlfriends or boyfriends, fantasy characters, and more. Appfigures defined the market segment the same way, describing companion apps as those in which the user can interact with either premade or user-generated synthetic characters meant to embody an actual personality. Popular apps in this space include Replika, PolyBuzz, Chai, and others.

As of July 2025, AI companion apps across the Apple App Store and Google Play have been downloaded 220 million times globally. During the first half of 2025, downloads were up 88% year-over-year, reaching 60 million. Appfigures also found that, as of July 2025, AI companion apps have driven $221 million in consumer spending worldwide. So far this year, these apps have generated 64% more revenue than during the same period in 2024. The top 10% of all AI companion apps generate 89% of the revenue in the category, the data shows. In addition, around 10% (or 33) of the apps have exceeded $1 million in lifetime consumer spending. Revenue per download is also up $0.66, from $0.52 in 2024 to $1.18 for the category so far in 2025.

While dedicated AI companion apps are fairly popular, bigger companies like xAI are also moving into the market. In July, xAI's Grok launched AI companions, including an anime girl and guy, as well as a snarky 3D fox. Meanwhile, ChatGPT's recent upgrade to GPT-5 brought to light the fact that many of its users felt a kinship with the older model, as they mourned the loss of an AI companion they had come to depend upon. To address these and other concerns about GPT-5's performance, OpenAI CEO Sam Altman brought back the 4o model for the time being. Google tapped into the market last year, too, when it hired away Character.AI founder Noam Shazeer. The app lives on and still has tens of millions of monthly active users.

According to Appfigures' data, the most popular AI companion apps are those used by people looking for an AI girlfriend. Of the active apps on the market today, 17% have an app name that includes the word 'girlfriend,' compared with 4% that say 'boyfriend' or 'fantasy.' Terms like anime, soulmate, and lover, among others, are less frequently mentioned. The firm notes that a number of other AI companion apps have likely launched on the app stores since 2022 but were later removed after failing to gain traction in revenue or downloads; those weren't factored into its analysis.


WIRED
2 days ago
Character.AI Gave Up on AGI. Now It's Selling Stories
Aug 12, 2025 11:30 AM
The startup once promised superintelligence. Its new CEO says it's now an entertainment company with 20 million users.

After school, Karandeep Anand often finds his 6-year-old daughter deep in conversation with an AI chatbot as she eats snacks at their kitchen counter. She's too young to type, let alone have her own account on Character.AI, but that hasn't stopped her from nabbing his phone to have voice conversations with a Sherlock Holmes bot, which she uses to build her own mystery stories.

Character.AI is an AI companion startup (though Anand likes to say it's an AI role-play startup, which we'll get into later). He took over as CEO in June in the midst of a potentially devastating lawsuit for its parent company and looming questions about child safety. When I ask if he's concerned about his daughter connecting with an AI chatbot rather than a real human, he's quick to say no. 'It is very rarely, in any of these scenarios, a true replacement for any human,' Anand told me during a video call late last week. 'It's very clearly noted in the app that, hey, this is a role-play and an entertainment, so you will never start going deep into that conversation, assuming that it is your actual companion.'

Anand, who previously worked as the VP of business products at Meta, became CEO during a delicate moment for Character.AI. Last August, Google swooped in with a roughly $2.7 billion deal to license the startup's technology. As part of the agreement, two cofounders left for Google's AI division. Anand, who was already on Character.AI's board of directors, was tasked with picking up the pieces, which he did in part by leaving behind the founding mission of delivering personalized superintelligence to focus on AI entertainment.

'What we gave up was this aspiration that the founders had of building AGI models—we are no longer doing that. That is the hundreds of billions of dollars investment fight, which Big Tech is fighting,' Anand says. 'What we got in return was clarity and focus, being able to singularly pursue the AI entertainment vision.'

As part of this change in strategy, Character.AI is no longer trying to build its own frontier models. 'The last six months, we've done a lot of work to get off of our proprietary models on text and start using open source models,' Anand says. The company has tested a few: Meta's Llama, Alibaba's Qwen, and DeepSeek. 'The open source models are beating any proprietary model hands down,' Anand claims.

Running an AI startup without billions of dollars in revenue can be a brutal equation, and Character.AI is still figuring out how to make the math work. The company told me it's generating revenue at a run rate of more than $30 million and is on track to reach $50 million in revenue by the end of the year. When I asked Anand how many users pay for the $10 monthly subscription, he didn't give a number but noted 'monetization wasn't a focus till four or five months ago.'

'Since I've been on board, it's very clear we do need to monetize. And we've had, I think, almost 250 percent subscriber growth in the last six months. So the paid user base is growing quite, quite well,' Anand says. Character.AI recently introduced advertisements, including reward ads (where users can choose to watch an ad to get access to on-platform incentives), to help monetize in countries where subscriptions aren't feasible, he tells me. 'AI is expensive. Let's be honest about that,' Anand says.

Growth vs. Safety
In October 2024, the mother of a teen who died by suicide filed a wrongful death suit against Character Technologies, its founders, Google, and Alphabet, alleging the company targeted her son with 'anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming [the chatbot] to misrepresent itself as a real person, a licensed psychotherapist, and an adult lover.' At the time, a spokesperson told CNBC that the company was 'heartbroken by the tragic loss' and took 'the safety of our users very seriously.'

The tragic incident put Character.AI under intense scrutiny. Earlier this year, US senators Alex Padilla and Peter Welch wrote a letter to several AI companionship platforms, including Character.AI, highlighting concerns about 'the mental health and safety risks posed to young users' of the platforms. 'The team has been taking this very responsibly for almost a year now,' Anand tells me. 'AI is stochastic, it's kind of hard to always understand what's coming. So it's not a one-time investment.'

That's critically important because Character.AI is growing. The startup has 20 million monthly active users who spend, on average, 75 minutes a day chatting with a bot (a 'character' in Character.AI parlance). The company's user base is 55 percent female. More than 50 percent of its users are Gen Z or Gen Alpha. With that growth comes real risk: what is Anand doing to keep his users safe?

'[In] the last six months, we've invested a disproportionate amount of resources in being able to serve under 18 differently than over 18, which was not the case last year,' Anand says. 'I can't say, 'Oh, I can slap an 18+ label on my app and say use it for NSFW.' You end up creating a very different app and a different small-scale platform.'

More than 10 of the company's 70 employees work full-time on trust and safety, Anand tells me. They're responsible for building safeguards like age verification, separate models for users under 18, and new features such as parental insights, which allow parents to see how their teens are using the app. The under-18 model launched last December. It includes 'a narrower set of searchable Characters on the platform,' according to company spokesperson Kathryn Kelly. 'Filters have been applied to this set to remove Characters related to sensitive or mature topics.'

But Anand says AI safety will take more than just technical tweaks. 'Making this platform safe is a partnership between regulators, us, and parents,' Anand says. That's what makes watching his daughter chat with a Character so important. 'This has to stay safe for her.'

Beyond Companionship
The AI companionship market is booming. Consumers worldwide spent $68 million on AI companionship in the first half of this year, a 200 percent increase from last year, according to an estimate cited by CNBC. AI startups are gunning for a slice of the market: xAI released a creepy, pornified companion in July, and even Microsoft bills its Copilot chatbot as an AI companion.

So how does Character.AI stand out in a crowded market? It takes itself out of it entirely. 'AI companionship is the wrong way to look at what people do with Character. What people are doing with Character is actually role-play. And it sounds interchangeable, but it isn't,' Anand tells me, adding that less than 20 percent of the app gets used for companionship (that's according to an internal research report of data self-reported by users).
It doesn't appear to be totally out of the simulated relationship game, though: it took me all of a few minutes to find an AI boyfriend to engage in graphic sexual role-play with.

'People want to role-play situations. People want to role-play fiction … They want to live in alternate realities. They want to unplug from their day-to-day stuff,' Anand says. I, personally, unplug from my day through a different kind of virtual world. I am totally addicted to the video game Stardew Valley. I run Huckleberry Farm like the damn Marines. To Anand, the video game is more of a competitor than Grok. 'It became very clear that we're an entertainment company,' Anand says.

Musk and Bezos Roast Battle
When it comes to role-playing, the Seattle-based CEO says he's mostly into using Characters for vampire fan fiction. The problem, he says, is that when the vampire bot talks about blood, it gets censored. 'The context needs to be understood, so we dial back on the filters by being a lot more precise with the context,' Anand tells me. This level of content moderation is one of the many changes Anand has been working on since taking over the company in June. The company also redesigned the app with a more modern, Gen Z–friendly look and added new tools for the platform's creators, who generate more than 9 million Characters per month.

These updates, he says, mark a shift from being seen as just a chatbot company to something more ambitious: an entertainment engine where users can consume and create stories, remix content, and experiment with new formats like audiobooks. 'Every story can actually have a billion endings,' Anand says. A user could even stage a roast battle between Elon Musk and Jeff Bezos, he adds. 'You can prompt that and output something pretty fun.'

I'm not sure the litany of lawyers employed by those tech CEOs would be as entertained. That's not to mention the people who may not be able to afford an army of staff to defend their personhood. I immediately thought of a WIRED story about the family of an 18-year-old who was killed in 2006, only to find the image likeness of their daughter re-created on Character.AI. In that same story, an editor at a gaming publication found she had been re-created on the platform following a harassment campaign involving her coverage.

When I bring this up to Anand, he explains that when users create Characters modeled after public figures like Musk or Bezos, the system is designed to clearly signal that these are parodies, not attempts at deepfakes or impersonation. (One Elon Musk chatbot page doesn't show such warnings. Neither do the Dr. Phil or Joe Rogan chatbot pages.) There is, though, a disclaimer below each chat: 'This is an A.I. and not a real person. Treat everything it says as fiction.'

Anand says Character.AI has also imposed strict limitations on the company's video generation tool, AvatarFX, to prevent misuse. Users shouldn't be able to generate realistic deepfakes even if they try, and specific voices or topics are outright restricted. 'We're very, very clear that we're staying in the entertainment territory. We're not into the general purpose video generation territory at all. We're not a Google Veo 3. We're not a Runway,' Anand says. 'It's a very, very important line.'

Anand contrasts this with platforms like Meta, where he claims content is often uploaded first and moderated after the fact. At Character.AI, he says, content guardrails are baked into the creation pipeline itself. 'Our reactive takedowns are a very, very small percentage,' Anand tells me.
I worry that as these tools grow more convincing, loneliness will deepen, not disappear. Anand understands. But he also has something to sell. 'I'm very passionate about this topic myself, and it's on us to go shape the dialog around this in the best, healthy way possible, because Gen Z is AI-native,' Anand says. 'The question is, how do we build this in a way where it's safe and trustworthy and engaging in the right ways with the right incentives? That's on us.'

Sources Say
Last week, I reported that Elon Musk held an all-hands meeting for X and xAI employees. I've since obtained another screenshot from that meeting showing xAI's revenue over the past seven months. In January, Grok on X brought in just under $150 million, with other services like enterprise API usage adding another $28 million. According to the chart, revenue has grown tenfold since the start of 2025, reaching just south of $500 million in July, driven by Grok on X and the $30-a-month SuperGrok subscription. A smaller fraction of revenue is generated by the newly released SuperGrok Heavy subscription, which costs $300 a month. xAI did not respond to WIRED's request for comment.

This is an edition of Kylie Robison's Model Behavior newsletter.