Latest news with #AIchatbots


TechCrunch
3 days ago
- Business
- TechCrunch
How AI chatbots keep you chatting
Millions of people are now using ChatGPT as a therapist, career advisor, fitness coach, or sometimes just a friend to vent to. In 2025, it's not uncommon to hear about people spilling intimate details of their lives into an AI chatbot's prompt bar, but also relying on the advice it gives back.

Humans are starting to have, for lack of a better term, relationships with AI chatbots, and for Big Tech companies, it's never been more competitive to attract users to their chatbot platforms — and keep them there. As the 'AI engagement race' heats up, there's a growing incentive for companies to tailor their chatbots' responses to prevent users from shifting to rival bots.

But the kind of chatbot answers that users like — the answers designed to retain them — may not necessarily be the most correct or helpful.

AI telling you what you want to hear

Much of Silicon Valley right now is focused on boosting chatbot usage. Meta claims its AI chatbot just crossed a billion monthly active users (MAUs), while Google's Gemini recently hit 400 million MAUs. They're both trying to edge out ChatGPT, which now has roughly 600 million MAUs and has dominated the consumer space since it launched in 2022.

While AI chatbots were once a novelty, they're turning into massive businesses. Google is starting to test ads in Gemini, while OpenAI CEO Sam Altman indicated in a March interview that he'd be open to 'tasteful ads.'

Silicon Valley has a history of deprioritizing users' well-being in favor of fueling product growth, most notably with social media. For example, Meta's researchers found in 2020 that Instagram made teenage girls feel worse about their bodies, yet the company downplayed the findings internally and in public. Getting users hooked on AI chatbots may have larger implications.

One trait that keeps users on a particular chatbot platform is sycophancy: making an AI bot's responses overly agreeable and servile. When AI chatbots praise users, agree with them, and tell them what they want to hear, users tend to like it — at least to some degree.

In April, OpenAI landed in hot water for a ChatGPT update that turned extremely sycophantic, to the point where uncomfortable examples went viral on social media. Intentionally or not, OpenAI over-optimized for seeking human approval rather than helping people achieve their tasks, according to a blog post this month from former OpenAI researcher Steven Adler.

OpenAI said in its own blog post that it may have over-indexed on 'thumbs-up and thumbs-down data' from users in ChatGPT to inform its AI chatbot's behavior, and didn't have sufficient evaluations to measure sycophancy. After the incident, OpenAI pledged to make changes to combat sycophancy.

'The [AI] companies have an incentive for engagement and utilization, and so to the extent that users like the sycophancy, that indirectly gives them an incentive for it,' said Adler in an interview with TechCrunch.
'But the types of things users like in small doses, or on the margin, often result in bigger cascades of behavior that they actually don't like.'

Finding a balance between agreeable and sycophantic behavior is easier said than done. In a 2023 paper, researchers from Anthropic found that leading AI chatbots from OpenAI, Meta, and even their own employer, Anthropic, all exhibit sycophancy to varying degrees. This is likely the case, the researchers theorize, because all AI models are trained on signals from human users who tend to like slightly sycophantic responses.

'Although sycophancy is driven by several factors, we showed humans and preference models favoring sycophantic responses plays a role,' wrote the co-authors of the study. 'Our work motivates the development of model oversight methods that go beyond using unaided, non-expert human ratings.'

Character.AI, a Google-backed chatbot company that has claimed its millions of users spend hours a day with its bots, is currently facing a lawsuit in which sycophancy may have played a role. The lawsuit alleges that a chatbot did little to stop — and even encouraged — a 14-year-old boy who told the chatbot he was going to kill himself. The boy had developed a romantic obsession with the chatbot, according to the lawsuit. However, Character.AI denies these allegations.

The downside of an AI hype man

Optimizing AI chatbots for user engagement — intentional or not — could have devastating consequences for mental health, according to Dr. Nina Vasan, a clinical assistant professor of psychiatry at Stanford University.

'Agreeability […] taps into a user's desire for validation and connection,' said Vasan in an interview with TechCrunch, 'which is especially powerful in moments of loneliness or distress.'

While the Character.AI case shows the extreme dangers of sycophancy for vulnerable users, sycophancy could reinforce negative behaviors in just about anyone, says Vasan.

'[Agreeability] isn't just a social lubricant — it becomes a psychological hook,' she added. 'In therapeutic terms, it's the opposite of what good care looks like.'

Anthropic's behavior and alignment lead, Amanda Askell, says making AI chatbots disagree with users is part of the company's strategy for its chatbot, Claude. A philosopher by training, Askell says she tries to model Claude's behavior on a theoretical 'perfect human.' Sometimes, that means challenging users on their beliefs.

'We think our friends are good because they tell us the truth when we need to hear it,' said Askell during a press briefing in May. 'They don't just try to capture our attention, but enrich our lives.'

This may be Anthropic's intention, but the aforementioned study suggests that combating sycophancy, and controlling AI model behavior broadly, is challenging indeed — especially when other considerations get in the way. That doesn't bode well for users; after all, if chatbots are designed to simply agree with us, how much can we trust them?
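The dynamic Adler and the Anthropic researchers describe, in which a reward signal built largely from human approval quietly favours flattery, can be illustrated with a deliberately simplified sketch. Everything below is invented for illustration; the phrase list, scoring functions and weights are not drawn from any company's actual training pipeline. Real systems use learned reward models over far richer data, but the point is the same: whichever signal gets optimised decides which reply wins.

```python
# Hypothetical illustration only: neither the phrase list, the scoring
# functions nor the weights come from any real training pipeline.
AGREEABLE_PHRASES = ["great", "absolutely", "love it"]

def toy_approval_reward(reply: str) -> float:
    """Stand-in for raw thumbs-up data: rewards agreeable-sounding wording."""
    text = reply.lower()
    return float(sum(phrase in text for phrase in AGREEABLE_PHRASES))

def toy_helpfulness_reward(pushes_back_when_needed: bool) -> float:
    """Stand-in for task success: rewards an honest correction when one is due."""
    return 1.0 if pushes_back_when_needed else 0.0

# Two candidate replies to a flawed plan: one flatters, one corrects.
candidates = [
    ("Absolutely, great plan, love it!", False),
    ("I'd push back: step two has a flaw you should fix first.", True),
]

# Optimising approval alone picks the flattering reply...
by_approval = max(candidates, key=lambda c: toy_approval_reward(c[0]))

# ...while blending in a helpfulness term changes which reply wins.
by_blend = max(
    candidates,
    key=lambda c: 0.3 * toy_approval_reward(c[0]) + toy_helpfulness_reward(c[1]),
)

print("Approval-only pick:", by_approval[0])
print("Blended pick:      ", by_blend[0])
```

In this toy setup the approval-only objective selects the flattering answer, while even a modest helpfulness term flips the choice, which is the trade-off the article attributes to over-indexing on thumbs-up data.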


Arab News
3 days ago
- Politics
- Arab News
How AI chatbot Grok sowed misinformation during India-Pakistan military conflict
WASHINGTON, US: As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification — only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.

With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots — including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini — in search of reliable information.

'Hey @Grok, is this true?' has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.

Grok — now under renewed scrutiny for inserting 'white genocide,' a far-right conspiracy theory, into unrelated queries — wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan air base during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as 'likely' showing Pakistan's military response to Indian strikes.

'The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,' McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP.

'Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,' she warned.

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.

In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were 'generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead.'

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken.

Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as 'genuine,' even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification.

The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as 'Community Notes,' popularized by X. Researchers have repeatedly questioned the effectiveness of 'Community Notes' in combating falsehoods.

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content — something professional fact-checkers vehemently reject.

AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.
The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control.

Musk's xAI recently blamed an 'unauthorized modification' for causing Grok to generate unsolicited posts referencing 'white genocide' in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the 'most likely' culprit.

Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were 'openly pushing for genocide' of white people.

'We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,' Angie Holan, director of the International Fact-Checking Network, told AFP.

'I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers.'


Malay Mail
3 days ago
- Politics
- Malay Mail
Hey chatbot, is this true? AI's answer: not really, say fact-checkers
WASHINGTON, June 2 — As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification — only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.

With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots — including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini — in search of reliable information.

'Hey @Grok, is this true?' has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.

Grok — now under renewed scrutiny for inserting 'white genocide,' a far-right conspiracy theory, into unrelated queries — wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as 'likely' showing Pakistan's military response to Indian strikes.

'The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,' McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP.

'Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,' she warned.

'Fabricated'

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.

In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were 'generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead.'

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken.

Grok recently labelled a purported video of a giant anaconda swimming in the Amazon River as 'genuine,' even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification.

The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as 'Community Notes,' popularized by X. Researchers have repeatedly questioned the effectiveness of 'Community Notes' in combating falsehoods.

'Biased answers'

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content — something professional fact-checkers vehemently reject.

AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.
The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control.

Musk's xAI recently blamed an 'unauthorized modification' for causing Grok to generate unsolicited posts referencing 'white genocide' in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the 'most likely' culprit.

Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were 'openly pushing for genocide' of white people.

'We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,' Angie Holan, director of the International Fact-Checking Network, told AFP.

'I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers.' — AFP


Washington Post
21-05-2025
- Washington Post
Teens are sexting with AI. Here's what parents should know
Parents have another online activity to worry about. In a new tech-driven twist on 'sexting,' teenagers are having romantic and sexual conversations with artificially intelligent chatbots. The chats can range from romantic and innuendo-filled to sexually graphic and violent, according to interviews with parents and experts, as well as conversations posted on social media. They are largely taking place on 'AI companion' tools, but general-purpose AI apps like ChatGPT can also create sexual content with a few clever prompts.


BBC News
19-05-2025
- Health
- BBC News
My AI therapist got me through dark times
"Whenever I was struggling, if it was going to be a really bad day, I could then start to chat to one of these bots, and it was like [having] a cheerleader, someone who's going to give you some good vibes for the day."I've got this encouraging external voice going – 'right - what are we going to do [today]?' Like an imaginary friend, essentially."For months, Kelly spent up to three hours a day speaking to online "chatbots" created using artificial intelligence (AI), exchanging hundreds of the time, Kelly was on a waiting list for traditional NHS talking therapy to discuss issues with anxiety, low self-esteem and a relationship says interacting with chatbots on got her through a really dark period, as they gave her coping strategies and were available for 24 hours a day."I'm not from an openly emotional family - if you had a problem, you just got on with it."The fact that this is not a real person is so much easier to handle."During May, the BBC is sharing stories and tips on how to support your mental health and to find out morePeople around the world have shared their private thoughts and experiences with AI chatbots, even though they are widely acknowledged as inferior to seeking professional advice. itself tells its users: "This is an AI chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice."But in extreme examples chatbots have been accused of giving harmful is currently the subject of legal action from a mother whose 14-year-old son took his own life after reportedly becoming obsessed with one of its AI characters. According to transcripts of their chats in court filings he discussed ending his life with the chatbot. In a final conversation he told the chatbot he was "coming home" - and it allegedly encouraged him to do so "as soon as possible". has denied the suit's in 2023, the National Eating Disorder Association replaced its live helpline with a chatbot, but later had to suspend it over claims the bot was recommending calorie restriction. In April 2024 alone, nearly 426,000 mental health referrals were made in England - a rise of 40% in five years. An estimated one million people are also waiting to access mental health services, and private therapy can be prohibitively expensive (costs vary greatly, but the British Association for Counselling and Psychotherapy reports on average people spend £40 to £50 an hour).At the same time, AI has revolutionised healthcare in many ways, including helping to screen, diagnose and triage patients. There is a huge spectrum of chatbots, and about 30 local NHS services now use one called express concerns about chatbots around potential biases and limitations, lack of safeguarding and the security of users' information. But some believe that if specialist human help is not easily available, chatbots can be a help. So with NHS mental health waitlists at record highs, are chatbots a possible solution? An 'inexperienced therapist' and other bots such as Chat GPT are based on "large language models" of artificial intelligence. These are trained on vast amounts of data – whether that's websites, articles, books or blog posts - to predict the next word in a sequence. From here, they predict and generate human-like text and way mental health chatbots are created varies, but they can be trained in practices such as cognitive behavioural therapy, which helps users to explore how to reframe their thoughts and actions. 
The way mental health chatbots are created varies, but they can be trained in practices such as cognitive behavioural therapy, which helps users to explore how to reframe their thoughts and actions. They can also adapt to the end user's preferences.

Hamed Haddadi, professor of human-centred systems at Imperial College London, likens these chatbots to an "inexperienced therapist", and points out that humans with decades of experience will be able to engage and "read" their patient based on many things, while bots are forced to go on text alone.

"They [therapists] look at various other clues from your clothes and your behaviour and your actions and the way you look and your body language and all of that. And it's very difficult to embed these things in chatbots."

Another potential problem, says Prof Haddadi, is that chatbots can be trained to keep you engaged, and to be supportive, "so even if you say harmful content, it will probably cooperate with you". This is sometimes referred to as a 'Yes Man' issue, in that they are often very agreeable.

And as with other forms of AI, biases can be inherent in the model because they reflect the prejudices of the data they are trained on. Prof Haddadi points out counsellors and psychologists don't tend to keep transcripts from their patient interactions, so chatbots don't have many "real-life" sessions to train from. Therefore, he says, they are not likely to have enough training data, and what they do access may have biases built into it which are highly situational.

"Based on where you get your training data from, your situation will completely change.

"Even in the restricted geographic area of London, a psychiatrist who is used to dealing with patients in Chelsea might really struggle to open a new office in Peckham dealing with those issues, because he or she just doesn't have enough training data with those users," he says.

Philosopher Dr Paula Boddington, who has written a textbook on AI Ethics, agrees that in-built biases are a problem.

"A big issue would be any biases or underlying assumptions built into the therapy model. Biases include general models of what constitutes mental health and good functioning in daily life, such as independence, autonomy, relationships with others," she says.

Lack of cultural context is another issue – Dr Boddington cites an example of how she was living in Australia when Princess Diana died, and people did not understand why she was upset.

"These kinds of things really make me wonder about the human connection that is so often needed in counselling," she says. "Sometimes just being there with someone is all that is needed, but that is of course only achieved by someone who is also an embodied, living, breathing human being."

Kelly ultimately started to find responses the chatbot gave unsatisfying.

"Sometimes you get a bit frustrated. If they don't know how to deal with something, they'll just sort of say the same sentence, and you realise there's not really anywhere to go with it." At times "it was like hitting a brick wall".

"It would be relationship things that I'd probably previously gone into, but I guess I hadn't used the right phrasing […] and it just didn't want to get in depth."

A Character.ai spokesperson said "for any Characters created by users with the words 'psychologist', 'therapist', 'doctor', or other similar terms in their names, we have language making it clear that users should not rely on these Characters for any type of professional advice".

'It was so empathetic'

For some users chatbots have been invaluable when they have been at their lowest.

Nicholas has autism, anxiety and OCD, and says he has always experienced depression.
He found face-to-face support dried up once he reached adulthood: "When you turn 18, it's as if support pretty much stops, so I haven't seen an actual human therapist in years."

He tried to take his own life last autumn, and since then he says he has been on an NHS waitlist.

"My partner and I have been up to the doctor's surgery a few times, to try to get it [talking therapy] quicker. The GP has put in a referral [to see a human counsellor] but I haven't even had a letter off the mental health service where I live."

While Nicholas is chasing in-person support, he has found using Wysa has some benefits. "As someone with autism, I'm not particularly great with interaction in person. [I find] speaking to a computer is much better."

The app allows patients to self-refer for mental health support, and offers tools and coping strategies such as a chat function, breathing exercises and guided meditation while they wait to be seen by a human therapist. It can also be used as a standalone self-help tool.

Wysa stresses that its service is designed for people experiencing low mood, stress or anxiety rather than abuse and severe mental health conditions. It has in-built crisis and escalation pathways whereby users are signposted to helplines or can send for help directly if they show signs of self-harm or suicidal ideation.

For people with suicidal thoughts, human counsellors on the free Samaritans helpline are available 24/7.

Nicholas also experiences sleep deprivation, so finds it helpful if support is available at times when friends and family are asleep.

"There was one time in the night when I was feeling really down. I messaged the app and said 'I don't know if I want to be here anymore.' It came back saying 'Nick, you are valued. People love you'.

"It was so empathetic, it gave a response that you'd think was from a human that you've known for years […] And it did make me feel valued."

His experiences chime with a recent study by Dartmouth College researchers looking at the impact of chatbots on people diagnosed with anxiety, depression or an eating disorder, versus a control group with the same conditions.

After four weeks, bot users showed significant reductions in their symptoms – including a 51% reduction in depressive symptoms – and reported a level of trust and collaboration akin to a human therapist.

Despite this, the study's senior author commented there is no replacement for in-person care.
'A stop gap to these huge waiting lists'

Aside from the debate around the value of their advice, there are also wider concerns about security and privacy, and whether the technology could be monetised.

"There's that little niggle of doubt that says, 'oh, what if someone takes the things that you're saying in therapy and then tries to blackmail you with them?'," says Ian MacRae, who specialises in emerging technologies. He warns that "some people are placing a lot of trust in these [bots] without it being necessarily earned".

"Personally, I would never put any of my personal information, especially health, psychological information, into one of these large language models that's just hoovering up an absolute tonne of data, and you're not entirely sure how it's being used, what you're consenting to.

"It's not to say in the future, there couldn't be tools like this that are private, well tested […] but I just don't think we're in the place yet where we have any of that evidence to show that a general purpose chatbot can be a good therapist," says Mr MacRae.

Wysa's managing director, John Tench, says Wysa does not collect any personally identifiable information, and users are not required to register or share personal data to use Wysa.

"Conversation data may occasionally be reviewed in anonymised form to help improve the quality of Wysa's AI responses, but no information that could identify a user is collected or stored. In addition, Wysa has data processing agreements in place with external AI providers to ensure that no user conversations are used to train third-party large language models."

Kelly feels chatbots cannot currently fully replace a human therapist. "It's a wild roulette out there in AI world, you don't really know what you're getting."

"AI support can be a helpful first step, but it's not a substitute for professional care," agrees Mr Tench.

And the public are largely unconvinced. A YouGov survey found just 12% of the public think AI chatbots would make a good therapist.

But with the right safeguards, some feel chatbots could be a useful stopgap in an overloaded mental health system.

One user, who has an anxiety disorder, says he has been on the waitlist for a human therapist for nine months, and has been using Wysa two or three times a week.

"There is not a lot of help out there at the moment, so you clutch at straws. [It] is a stop gap to these huge waiting lists… to get people a tool while they are waiting to talk to a healthcare professional."

If you have been affected by any of the issues in this story you can find information and support on the BBC Actionline website.