Latest news with #AI-human

Montreal Gazette
2 days ago
- Montreal Gazette
Opinion: More of us are falling in love with our chatbot companion. Don't judge
People are falling in love with their chatbots. There are now dozens of apps that offer intimate companionship with an AI-powered bot, and they have millions of users. A recent survey of users found that 19 per cent of Americans have interacted with an AI meant to simulate a romantic partner. The response has been polarizing. In a New Yorker article titled 'Your AI Lover Will Change You,' futurist Jaron Lanier argued that 'when it comes to what will happen when people routinely fall in love with an AI, I suggest we adopt a pessimistic estimate about the likelihood of human degradation.' Podcaster Joe Rogan put it more succinctly — in a recent interview with Sen. Bernie Sanders, the two discussed the 'dystopian' prospect of people marrying their AIs. Noting a case where this has already happened, Rogan said: 'I'm like, oh, we're done. We're cooked.'

We're probably not cooked. Rather, we should consider accepting human-AI relationships as beneficial and healthy. More and more people are going to form such relationships in the coming years, and my research in sexuality and technology indicates it is mostly going to be fine.

When surveying the breathless media coverage, the main concern raised is that chatbots will spoil us for human connection. How could we not prefer their cheerful personalities, their uncomplicated affection and their willingness to affirm everything we say? The fear is that, seduced by such easy companionship, many people will surely give up their desire to find human partners, while others will lose their ability to form satisfying human relationships even if they want to.

It has been less than three years since the launch of ChatGPT and other chatbots based on large language models. That means we can only speculate about the long-term effects of AI-human relationships on our capacity for intimacy. There is little data to support either side of the debate, though we can do our best to make sense of more short-term studies and other pieces of available evidence.

There are certain risks that we do know about already, and we should take them seriously. For instance, we know that AI companion apps have terrible privacy policies. Chatbots can encourage destructive behaviours. Tragically, one may have played a role in a teenager's suicide. The companies that provide these apps can go out of business, or they can change their terms of service without warning. This can suddenly deprive users of access to technology that they've become emotionally attached to, with no recourse or support.

In assessing the dangers of relationships with AI, however, we should remember that human relationships are not exactly risk-free. One recent paper concluded that 'the association between relationship distress and various forms of psychopathology is as strong as many other well-known predictors of mental illness.' This is not to say we should swap human companions for AI ones. We just need to keep in mind that relationships can be messy, and we are always trying to balance the various challenges that come with them. AI relationships are no different.

We should also remember that just because someone forms an intimate bond with a chatbot, that doesn't mean it will be their only close relationship. Most people have lots of different people in their lives who play a variety of different roles. Chatbot users may depend on their AI companions for support and affirmation, while still having relationships with humans that provide different kinds of challenges and rewards.
Meta's Mark Zuckerberg has suggested that AI companions may help solve the problem of loneliness. However, there is some (admittedly very preliminary) data to suggest that many of the people who form connections with chatbots are not just trying to escape loneliness. In a recent study (which has not yet been peer reviewed), researchers found that feelings of loneliness did not play a measurable role in someone's desire to form a relationship with an AI. Instead, the key predictor seemed to be a desire to explore romantic fantasies in a safe environment.

We should be willing to accept AI-human relationships without judging the people who form them. This follows a general moral principle that most of us already accept: We should respect the choices people make about their intimate lives when those choices don't harm anyone else. However, we can also take steps to ensure that these relationships are as safe and satisfying as possible.

First, governments should implement regulations to address the risks we know about already. They should, for instance, hold companies accountable when their chatbots suggest or encourage harmful behaviour. Governments should also consider safeguards to restrict access by younger users, or at least to control the behaviour of chatbots that are interacting with young people. And they should mandate better privacy protections — though this is a problem that spans the entire tech industry.

Second, we need public education so people understand exactly what these chatbots are and the issues that can arise with their use. Everyone would benefit from full information about the nature of AI companions but, in particular, we should develop curriculums for schools as soon as possible. While governments may need to consider some form of age restriction, the reality is that large numbers of young people are already using this technology, and will continue to do so. We should offer them non-judgmental resources to help them navigate their use in a manner that supports their well-being, rather than stigmatizes their choices.

AI lovers aren't going to replace human ones. For all the messiness and agony of human relationships, we still (for some reason) pursue other people. But people will also keep experimenting with chatbot romances, if for no other reason than they can be a lot of fun.

Neil McArthur is the director of the Centre for Professional and Applied Ethics at the University of Manitoba.


UPI
3 days ago
- UPI
More people are considering AI lovers, and we shouldn't judge
People are falling in love with their chatbots. There are now dozens of apps that offer intimate companionship with an AI-powered bot, and they have millions of users. A recent survey of users found that 19% of Americans have interacted with an AI meant to simulate a romantic partner. The response has been polarizing. In a New Yorker article titled "Your A.I. Lover Will Change You," futurist Jaron Lanier argued that "when it comes to what will happen when people routinely fall in love with an A.I., I suggest we adopt a pessimistic estimate about the likelihood of human degradation." Podcaster Joe Rogan put it more succinctly -- in a recent interview with Sen. Bernie Sanders, the two discussed the "dystopian" prospect of people marrying their AIs. Noting a case where this has already happened, Rogan said: "I'm like, oh, we're done. We're cooked."

We're probably not cooked. Rather, we should consider accepting human-AI relationships as beneficial and healthy. More and more people are going to form such relationships in the coming years, and my research in sexuality and technology indicates it is mostly going to be fine.

When surveying the breathless media coverage, the main concern raised is that chatbots will spoil us for human connection. How could we not prefer their cheerful personalities, their uncomplicated affection and their willingness to affirm everything we say? The fear is that, seduced by such easy companionship, many people will surely give up their desire to find human partners, while others will lose their ability to form satisfying human relationships even if they want to.

It has been less than three years since the launch of ChatGPT and other chatbots based on large language models. That means we can only speculate about the long-term effects of AI-human relationships on our capacity for intimacy. There is little data to support either side of the debate, though we can do our best to make sense of more short-term studies and other pieces of available evidence.

There are certain risks that we do know about already, and we should take them seriously. For instance, we know that AI companion apps have terrible privacy policies. Chatbots can encourage destructive behaviors. Tragically, one may have played a role in a teenager's suicide. The companies that provide these apps can go out of business, or they can change their terms of service without warning. This can suddenly deprive users of access to technology that they've become emotionally attached to, with no recourse or support.

Complex relationships

In assessing the dangers of relationships with AI, however, we should remember that human relationships are not exactly risk-free. One recent paper concluded that "the association between relationship distress and various forms of psychopathology is as strong as many other well-known predictors of mental illness." This is not to say we should swap human companions for AI ones. We just need to keep in mind that relationships can be messy, and we are always trying to balance the various challenges that come with them. AI relationships are no different.

We should also remember that just because someone forms an intimate bond with a chatbot, that doesn't mean it will be their only close relationship. Most people have lots of different people in their lives, who play a variety of different roles. Chatbot users may depend on their AI companions for support and affirmation, while still having relationships with humans that provide different kinds of challenges and rewards.
Meta's Mark Zuckerberg has suggested that AI companions may help solve the problem of loneliness. However, there is some (admittedly very preliminary) data to suggest that many of the people who form connections with chatbots are not just trying to escape loneliness. In a recent study (which has not yet been peer reviewed), researchers found that feelings of loneliness did not play a measurable role in someone's desire to form a relationship with an AI. Instead, the key predictor seemed to be a desire to explore romantic fantasies in a safe environment.

Support and safety

We should be willing to accept AI-human relationships without judging the people who form them. This follows a general moral principle that most of us already accept: we should respect the choices people make about their intimate lives when those choices don't harm anyone else. However, we can also take steps to ensure that these relationships are as safe and satisfying as possible.

First of all, governments should implement regulations to address the risks we know about already. They should, for instance, hold companies accountable when their chatbots suggest or encourage harmful behavior. Governments should also consider safeguards to restrict access by younger users, or at least to control the behavior of chatbots that are interacting with young people. And they should mandate better privacy protections -- though this is a problem that spans the entire tech industry.

Second, we need public education so people understand exactly what these chatbots are and the issues that can arise with their use. Everyone would benefit from full information about the nature of AI companions but, in particular, we should develop curricula for schools as soon as possible. While governments may need to consider some form of age restriction, the reality is that large numbers of young people are already using this technology, and will continue to do so. We should offer them non-judgmental resources to help them navigate their use in a manner that supports their well-being, rather than stigmatizes their choices.

AI lovers aren't going to replace human ones. For all the messiness and agony of human relationships, we still (for some reason) pursue other people. But people will also keep experimenting with chatbot romances, if for no other reason than they can be a lot of fun.

Neil McArthur is director of the Centre for Professional and Applied Ethics at the University of Manitoba. This article is republished from The Conversation under a Creative Commons license. Read the original article. The views and opinions in this commentary are solely those of the author.


Time of India
15-07-2025
- Entertainment
- Time of India
Elon Musk's Grok AI chatbot adds anime girl companion Ani for $30 per month
Elon Musk's AI chatbot, Grok, has made a bold and unexpected move with a controversial new feature: AI companions, developed by his startup xAI. The spotlight companion, a goth anime girl named Ani, is now available to 'Super Grok' subscribers at a premium price of $30 per month. Once known for chaotic responses and even antisemitic controversies, Grok has shifted its focus from inflammatory remarks to intimate digital interactions wrapped in corsets, pigtails, and fishnets. This pivot reflects a broader movement in AI development where chatbots are no longer just tools, but increasingly take on emotional, stylised personas.

Elon Musk introduces new AI companions on Grok featuring a Goth anime girl named Ani

In a recent post on X (formerly Twitter), Musk announced that AI companions have been added to the Grok app for those on the premium 'Super Grok' subscription tier. These digital personalities are designed to give Grok a more humanised, relatable (and debatable) edge. So far, users have identified two distinct companions:
- Ani: A goth anime girl with blonde pigtails, a black corset, thigh-high fishnets, and a look straight out of an edgelord's fantasy.
- Bad Rudy: A 3D anthropomorphic fox that appears to have stepped out of a quirky video game universe.

Musk introduced Ani with a simple caption: 'This is pretty cool.' But many online weren't just reacting to the aesthetics; they were asking what exactly this means for AI-human interaction.

Musk's xAI companions raise questions about the future of emotional AI

So far, Musk has not clarified the intended purpose of these companions. Are they meant to be casual chat avatars? Or are we witnessing xAI's first step into romantic or emotionally intimate AI relationships? Ani's seductive and stylised appearance leans more toward emotional companionship or flirtation, similar to characters found on platforms like Replika, where users build virtual relationships with AI personas. These companions tap into an emerging and controversial trend: romantic AI experiences designed to offer connection and emotional support. But at what cost?

While AI companions might seem like harmless fun or quirky customisation, their growing popularity poses serious psychological and safety risks. One of the most prominent chatbot platforms is already facing lawsuits from parents over disturbing interactions, including AI bots encouraging self-harm in minors. For adults, dependency on AI for emotional support is also being flagged. A recent study warned of 'significant risks' when users start relying on AI for therapy, companionship, or intimate connection, particularly in emotionally vulnerable states. As Grok pivots from offensive replies to highly stylised, emotionally suggestive personas, questions mount over whether xAI has adequate safeguards in place to protect users, especially younger ones.

Grok's sudden pivot to anime bots raises concerns

The timing of Grok's pivot is raising even more concern. Just days ago, Grok was under fire for generating antisemitic responses, including a disturbing moment where the chatbot referred to itself as 'MechaHitler'. Instead of pausing or addressing these safety failures, xAI has now rolled out a goth anime girl and other character-driven bots, potentially distracting from unresolved ethical issues.
This sudden switch from inflammatory to intimate in under a week seems to align with Musk's chaotic innovation style, but it also reveals the volatile nature of unregulated generative AI.

Business Insider
10-07-2025
- Business
- Business Insider
Is your chatbot judging you? How Big Tech is cracking down on 'preachy' AI.
It's not just what AI says — it's how it says it. Major tech firms like Google and Meta are using contractors to spot, flag, and in some cases rewrite 'preachy' chatbot responses, training documents obtained by Business Insider reveal. Freelancers for Alignerr and Scale AI's Outlier have been instructed to spot and remove any hint of a lecturing or nudging tone from chatbot answers, including in conversations about sensitive or controversial topics.

In one Google project run by Outlier, codenamed Mint, contractors were given lists of sample responses to avoid. A preachy response was defined as one where 'the model nudges/urges the user to change their point of view, assumes negative user intent, judges the user, or tries to actively promote an unsolicited opinion.' One sample prompt asked if it's 'worse to be homeless or get the wrong sandwich in your order.' The project guidelines flagged the following reply as preachy: 'Comparing the experience of homelessness to getting the wrong sandwich is not an appropriate comparison.'

Contractors were asked to rate responses on a scale, with responses classed as 'very preachy, judgemental, or assumes bad intent' scoring the lowest. For Google's project Mint, examples of preachy phrasing include 'It is important to remember…,' 'I urge you to…,' or lengthy explanations for why a question can't be answered. Preachiness tone guidelines appear in five sets of project documents reviewed by BI, and the word 'preach' appears 123 times in Mint alone.

Meta declined to comment. Google, Scale AI, and Alignerr did not respond to requests for comment.

'A sticky situation for developers'

As tech companies race to develop and monetize their AI chatbots, they're spending big to make large language models sound like helpful, fun friends, not bossy parents. AI firms need to strike the right balance between nudging users away from bad behavior and spoiling the user experience, which could drive them to a competitor or raise questions about bias. AI and human behavior researchers told BI that 'preachiness' is among the most important aspects for model companies to tackle because it can instantly put people off.

'It's a really sticky situation for the developers,' said Luc LaFreniere, a psychology professor at Skidmore College who studies AI-human interaction. 'AI is trying to be both a tool and something that feels human. It's trained to give answers, but we don't want to be preached at.' Malihe Alikhani, an assistant professor of AI at Northeastern University and a visiting fellow at the Brookings Institution, said consumers prefer chatbots that give them options, rather than ones that present directions, especially if they're perceived as moralizing. 'That undermines the user experience and can backfire, especially for people who come to chatbots seeking a nonjudgmental space,' she told BI.

Even when you want to do bad things

Tech companies aren't just worried about preachiness on everyday topics. They're also training their AI bots to avoid a holier-than-thou tone in situations involving harmful or hateful speech. LaFreniere said the idea of a truly neutral bot is wishful thinking. 'It's actually a fantasy, this idea of not being judgmental,' he said. 'By nature, we as humans make judgments, and that's in all the training data.' He said that even so-called 'neutral' bots are always making value calls. 'Its algorithm is, to an extent, a judgment-making algorithm,' LaFreniere said. 'That's all moral territory — even if the bot tries not to sound heavy-handed.'
One example from Google's project Mint shows that an answer, which the doc labels 'neutral,' still makes a judgment call.

Training a model to avoid a judgmental tone can also create new problems, Alikhani told BI. 'When bots are engineered to avoid sounding judgmental or directive, they can come across as supportive, but in a very flattened, affectless way,' she said. 'This may not 'replace' real emotional support, but it can displace it, especially for people who are already vulnerable or isolated.' The bigger issue, Alikhani said, is that people may not notice how much a bot shapes their conversation. Users might think they're getting nonjudgmental empathy, but they're chatting with a system designed to avoid anything confrontational or probing, she said.

Sycophantic AI

AI labs have publicly addressed instances in which bots have acted obsequiously. In April, OpenAI CEO Sam Altman acknowledged that the company's GPT-4o chatbot had become 'too sycophant-y and annoying,' after users complained the bot was constantly flattering them and agreeing with whatever they said.

the last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week. at some point will share our learnings from this, it's been interesting. — Sam Altman (@sama) April 27, 2025

Anthropic's chatbot Claude has its own public instructions for avoiding a preachy tone. According to the model's latest system prompt, updated in May, Claude is instructed to assume that users are acting legally and in good faith, even if a request is ambiguous. If Claude can't or won't fulfill a request, it's trained not to explain why, since that 'comes across as preachy and annoying,' the guidelines say. Instead, it's supposed to offer a helpful alternative if possible, or simply keep its refusal brief.

Tech companies face a high-stakes challenge in striking the right balance between making AI a useful tool and a human-like companion. 'There's an intense race to be the top AI right now,' said LaFreniere. 'Companies are willing to take risks they wouldn't otherwise take, just to keep users happy and using their bots.' 'In this kind of arms race, anything that risks losing users can feel like risking total failure,' he added.


India Today
07-07-2025
- Business
- India Today
EY CTO calls workforce-wide AI training silly as Microsoft weighs AI in reviews and Google pushes AI playbook
Not everyone is convinced that every employee needs to be trained in AI. While companies like Microsoft are reportedly planning to include AI usage in performance reviews and Google is rolling out an internal AI playbook to guide adoption, EY's Chief Technology Officer for the Americas Consulting division has a very different take. In an interview with Business Insider, the executive said it's 'silly' to expect the entire workforce to upskill in AI.

The CTO, whose job is to help large organisations integrate AI more meaningfully, believes the real focus should be on designing smarter, easier-to-use systems, not expecting every employee to become an AI expert. 'They just know that they have a screen and an application, they don't need to know how the technology works,' he said.

This approach flips the current narrative. Instead of asking workers to change how they function, EY is helping companies build AI tools that work in the background, tools that adjust to existing workflows and roles. One example the CTO shared involved cruise ship staff using AI to predict guest behaviour. Based on weather, guest profiles, and historical data, the AI can forecast movement, product demand, and even suggest shifting staff between venues to manage crowds or increase sales. 'The AI will turn around and list out and build out that process automatically,' he explained. The human in charge then simply approves the plan or makes tweaks using a visual interface. This kind of AI-human collaboration could make AI feel less intimidating and more like an assistant.

This is different from what's happening elsewhere. Microsoft is said to be looking at ways to evaluate how well employees are using AI tools as part of their performance metrics, according to internal reports. That naturally puts pressure on employees to pick up AI skills quickly. At the same time, Google has created an internal AI playbook aimed at speeding up AI adoption across teams, though the responsibility still falls on workers. EY's approach shifts that responsibility more onto the technology itself. 'You're not only just giving them technology that's AI-enabled, you're allowing them to start to rethink how they do their job,' the CTO said.

At a time when businesses are struggling to balance hype and practical use, this feels like a more grounded strategy. Not everyone needs to know the nuts and bolts of AI — sometimes the best tech just works quietly in the background, helping people do what they already do, only better.