
AI 'waifus' pose grave emotional risks
BEIJING – Our everyday lives are increasingly shaped by artificial intelligence, and the line between reality and fantasy is becoming ever blurrier. Recently, Grok, a free AI assistant built by xAI to 'maximize truth and objectivity', introduced a 'waifu' companion: a virtual anime-style character designed to win user affection, potentially at the expense of real-life relationships.
This raises deep concerns: Is technological innovation now outpacing ethical oversight? Are we watching Big Tech race to the bottom?
While this feature may seem like harmless entertainment, or, more cynically, a marketing strategy to compete with OpenAI's new AI agents, which can plan and organize your trip to a wedding, it raises bigger questions about AI companies' emotional manipulation, their impact on social well-being, and the future of human relationships.
Leshner et al. (2025) have studied how people form intimate connections with fictional characters, particularly within the anime fandom, where 'waifus' (idealized female characters) and 'husbandos' (idealized male characters) are prominent. Their study revealed that men tend to form sexual connections, often driven by physical appearance, while women are more likely to form emotional connections, shaped by personality traits and perceived similarity.
These findings suggest that the psychological mechanisms underpinning human-human relationships, such as attraction, emotional bonding and even love, can extend to fictional entities. The study underscores the human capacity to form meaningful connections, even when the 'partner' exists only on a screen or in a narrative.
But what happens when these connections are no longer one-sided? When AI characters like Grok's 'waifu' are designed to actively engage, flatter and adapt to users' desires, the line between para-social relationships (one-sided emotional bonds with fictional characters) and real-life intimacy becomes dangerously ambiguous. As Leshner et al. highlight, these connections can be deeply meaningful and, in some cases, rival or displace real-life relationships.
While the idea of a personalized AI companion is evocative — recalling films like Her — the ethical implications of such technologies are serious. By exploiting well-documented psychological tendencies, such as men's preference for physical attractiveness or women's desire for emotional connection, AI systems risk fostering unhealthy emotional dependencies. AI 'waifus' are not just characters on a screen; they are tools explicitly designed by leading AI companies to engage, manipulate and blur the lines between authentic human connection and commercial profit.
The stakes are particularly high for educators and parents. Such systems have the potential to distort young people's understanding of relationships, intimacy and consent. As Leshner et al. observe, para-social relationships, while often harmless, can teach individuals about intimacy. Yet when such relationships are shaped by profit-driven AI systems, they risk promoting distorted and idealized models of human interaction, potentially undermining relational skills and emotional development.
As AI technologies evolve, it becomes imperative to critically examine their implications. If AI developers cannot be persuaded by civil society to adopt an ethical approach, regulation must step in. But what can we, as linguists, educators and parents, do in the meantime?
One immediate step is to foster critical AI awareness among our students and communities. Open conversations about the distinctions between real and fictional relationships, as well as the psychological impact of para-social bonds, are essential. Educators can incorporate discussions of AI's ethical implications into their curriculums, helping young people critically evaluate their interactions with these systems.
At the same time, we must raise our collective voices to question AI companies: Are we steering AI innovation in a direction that enhances humanity, or are we creating tools that erode the very fabric of human connection? The answers will depend on the values we choose to uphold and the vigilance we maintain over this rapidly advancing field.
As Leshner et al. show, humans have an extraordinary capacity to form meaningful connections, even with fictional characters. But with this capacity comes a profound responsibility: ensuring that these connections enrich our lives rather than replace them. As Yuval Noah Harari, author of Sapiens, aptly observes, 'If the only intimacy we can form is with a non-human AI, then we have no intimacy at all.'
Let's take up this call with urgency. Will the corporate empire of AI listen to civil society?
Chances are it won't, given the imperative to optimize profit at all (human) costs. Will governments step up regulation? The US House of Representatives recently passed a bill that would bar states from regulating AI. We're witnessing corporate and national interests combine to take precedence over human interests. By raising critical AI awareness, we can at least work, however modestly, toward a future where AI technology serves humanity's best interests rather than compromising them.
