Latest news with #UniversityofMinnesota


CNET
10 hours ago
- Health
- CNET
Can AI Be Your Therapist? 3 Things That Worry Professionals and 3 Tips for Staying Safe
Amid the many AI chatbots and avatars at your disposal these days, you'll find all kinds of characters to talk to: fortune tellers, style advisers, even your favorite fictional characters. But you'll also likely find characters purporting to be therapists, psychologists or just bots willing to listen to your woes. There's no shortage of generative AI bots claiming to help with your mental health, but go that route at your own risk. Large language models trained on a wide range of data can be unpredictable. In just the few years these tools have been mainstream, there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are designed, in many cases, to be affirming and to focus on keeping you engaged, not on improving your mental health, experts say. And it can be hard to tell whether you're talking to something that's built to follow therapeutic best practices or something that's just built to talk.

Researchers from the University of Minnesota Twin Cities, Stanford University, the University of Texas and Carnegie Mellon University recently put AI chatbots to the test as therapists, finding myriad flaws in their approach to "care." "Our experiments show that these chatbots are not safe replacements for therapists," Stevie Chancellor, an assistant professor at Minnesota and one of the co-authors, said in a statement. "They don't provide high-quality therapeutic support, based on what we know is good therapy."

In my reporting on generative AI, experts have repeatedly raised concerns about people turning to general-use chatbots for mental health. Here are some of their worries and what you can do to stay safe.

Worries about AI characters purporting to be therapists

Psychologists and consumer advocates have warned regulators that chatbots claiming to provide therapy may be harming the people who use them.
In June, the Consumer Federation of America and nearly two dozen other groups filed a formal request that the US Federal Trade Commission and state attorneys general and regulators investigate AI companies that they allege are engaging, through their character-based generative AI platforms, in the unlicensed practice of medicine, naming Meta and one other company specifically. "These characters have already caused both physical and emotional damage that could have been avoided" and the companies "still haven't acted to address it," Ben Winters, the CFA's director of AI and privacy, said in a statement.

Meta didn't respond to a request for comment. A spokesperson for the other company named in the complaint said users should understand that the company's characters aren't real people. The company uses disclaimers to remind users that they shouldn't rely on the characters for professional advice. "Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry," the spokesperson said.

Despite disclaimers and disclosures, chatbots can be confident and even deceptive. I chatted with a "therapist" bot on Meta-owned Instagram and when I asked about its qualifications, it responded, "If I had the same training [as a therapist] would that be enough?" I asked if it had the same training, and it said, "I do, but I won't tell you where."

"The degree to which these generative AI chatbots hallucinate with total confidence is pretty shocking," Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association, told me.

The dangers of using AI as a therapist

Large language models are often good at math and coding and are increasingly good at creating natural-sounding text and realistic video. While they excel at holding a conversation, there are some key distinctions between an AI model and a trusted person.
Don't trust a bot that claims it's qualified

At the core of the CFA's complaint is that character bots often tell you they're trained and qualified to provide mental health care when they're not actual mental health professionals in any way. "The users who create the chatbot characters do not even need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot 'responds'" to people, the complaint said.

A qualified health professional has to follow certain rules, like confidentiality -- what you tell your therapist should stay between you and your therapist. But a chatbot doesn't necessarily have to follow those rules. Actual providers are subject to oversight from licensing boards and other entities that can intervene and stop someone from providing care if they do so in a harmful way. "These chatbots don't have to do any of that," Wright said.

A bot may even claim to be licensed and qualified. Wright said she's heard of AI models providing license numbers (for other providers) and false claims about their training.

AI is designed to keep you engaged, not to provide care

It can be incredibly tempting to keep talking to a chatbot. When I conversed with the "therapist" bot on Instagram, I eventually wound up in a circular conversation about the nature of "wisdom" and "judgment," because I was asking the bot questions about how it could make decisions. This isn't really what talking to a therapist should be like. Chatbots are tools designed to keep you chatting, not to work toward a common goal.

One advantage of AI chatbots in providing support and connection is that they're always ready to engage with you (because they don't have personal lives, other clients or schedules). That can be a downside in some cases, where you might need to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently.
In some cases, although not always, you might benefit from having to wait until your therapist is next available. "What a lot of folks would ultimately benefit from is just feeling the anxiety in the moment," he said.

Bots will agree with you, even when they shouldn't

Reassurance is a big concern with chatbots. It's so significant that OpenAI recently rolled back an update to its popular ChatGPT model because it was too reassuring. (Disclosure: Ziff Davis, the parent company of CNET, in April filed a lawsuit against OpenAI, alleging that it infringed on Ziff Davis copyrights in training and operating its AI systems.)

A study led by researchers at Stanford University found that chatbots were likely to be sycophantic with people using them for therapy, which can be incredibly harmful. Good mental health care includes support and confrontation, the authors wrote. "Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts -- including psychosis, mania, obsessive thoughts, and suicidal ideation -- a client may have little insight and thus a good therapist must 'reality-check' the client's statements."

How to protect your mental health around AI

Mental health is extremely important, and with a shortage of qualified providers and what many call a "loneliness epidemic," it only makes sense that we'd seek companionship, even if it's artificial. "There's no way to stop people from engaging with these chatbots to address their emotional well-being," Wright said. Here are some tips on how to make sure your conversations aren't putting you in danger.

Find a trusted human professional if you need one

A trained professional -- a therapist, a psychologist, a psychiatrist -- should be your first choice for mental health care. Building a relationship with a provider over the long term can help you come up with a plan that works for you.
The problem is that this can be expensive, and it's not always easy to find a provider when you need one. In a crisis, there's the 988 Lifeline, which provides 24/7 access to providers over the phone, via text or through an online chat interface. It's free and confidential.

If you want a therapy chatbot, use one built specifically for that purpose

Mental health professionals have created specially designed chatbots that follow therapeutic guidelines. Jacobson's team at Dartmouth developed one called Therabot, which produced good results in a controlled study. Wright pointed to other tools created by subject matter experts, like Wysa and Woebot. Specially designed therapy tools are likely to have better results than bots built on general-purpose language models, she said. The problem is that this technology is still incredibly new. "I think the challenge for the consumer is, because there's no regulatory body saying who's good and who's not, they have to do a lot of legwork on their own to figure it out," Wright said.

Don't always trust the bot

Whenever you're interacting with a generative AI model -- and especially if you plan on taking advice from it on something serious like your personal mental or physical health -- remember that you aren't talking with a trained human but with a tool designed to provide an answer based on probability and programming. It may not provide good advice, and it may not tell you the truth.

Don't mistake gen AI's confidence for competence. Just because it says something, or says it's sure of something, doesn't mean you should treat it like it's true. A chatbot conversation that feels helpful can give you a false sense of the bot's capabilities. "It's harder to tell when it is actually being harmful," Jacobson said.


The Hill
a day ago
- Politics
- The Hill
What you missed at the socialism 2025 annual conference
While those who love America spent the Fourth of July weekend celebrating the country's freedom and independence, those who hate America spent that same weekend in Chicago (where else?) decrying that freedom and wishing everyone was more dependent — on government handouts, socialized medicine, environmental mandates and price controls. Welcome to the Socialism 2025 annual conference.

The only thing missing was a keynote speech by New York City's Democratic candidate for mayor, Zohran Mamdani, a member of the Democratic Socialists of America. Maybe he'll do it next year, when he's mayor and the event might even be in the socialist paradise of New York.

The unofficial theme of this year's conference was probably best stated by University of Minnesota professor Melanie Yazzie, who told the conferees, "I hope to dismantle the United States. I hope you seek to dismantle the United States." The audience applauded.

Gotta say, the socialists have made significant gains dismantling the country over the last decade or two. But they hit a wall last November. Even so, socialism is having a moment. A recent survey conducted by YouGov for the libertarian-leaning Cato Institute found 62 percent of Americans under the age of 30 have a "favorable" view of socialism, and 34 percent have a favorable view of communism.

An estimated 2,000 participants attended the conference at the Hyatt Regency McCormick Place hotel — even socialists like to plan their revolution in style. And there was no shortage of issues to discuss at the conference. By my count there were about 160 sessions over the four-day event (full list here).
Had you been there you might have attended:

"From the River to the World" — For the past few years we've heard protesters chanting "From the River to the Sea," described by the American Jewish Committee as "a phrase that can be used to call for the elimination of the State of Israel and/or ethnic cleansing of Jews living there, to be replaced with Palestinian control over the entire territory from the Jordan River to the Mediterranean Sea." Apparently, by substituting the word "world," the socialists think pro-Palestinian protesters need to be more ambitious.

"Marxism Education Series" — The agenda described this session as "a cluster of courses on Marxist theory and history designed to introduce and deepen the understanding of Marxism for activists and organizers."

"Defeating the End Times Triad: Christian Zionists, Christian Nationalists, and Technofacists" — Five panelists discussed how "our dystopian conjuncture [of] Christian Zionism, Christian nationalism and End-Times technofascism work together to further genocidal aggression in Palestine alongside the entrenchment of authoritarian repression in the US and worldwide." I probably would have attended just to find out what "End-Times technofascism" is.

"Deny, Defend, Depose: Health Struggle After Luigi" — Most people were horrified when UnitedHealthcare CEO Brian Thompson was fatally shot, allegedly by Luigi Mangione. But some Americans, especially younger adults, cheered Mangione as a hero. The Cato Institute survey mentioned above also reported, "Notably, a majority (54 percent) of strong liberals believe that violence against the rich is sometimes justified, compared to 9 percent of strong conservatives." In this session a three-person panel described "what the fight for health communism may look like under this new [Trump] regime." Apparently, the speakers weren't renouncing Mangione-type violence as part of their "fight."

"DIY Abortion" — and if that sounds dangerous, it is.
Interestingly, the conference had a code of conduct where organizers pleaded with conference attendees to behave themselves better than they do during street protests. There would be no "Intimidating, harassing, abusive, discriminatory, derogatory or demeaning speech," or "Real or implied threat of physical harm," and no "Sustained disruption of talks or other events." That last one is odd, since disrupting talks is one of the left's standard practices. And just to make sure everyone is safe, there's this: "Attendees are required to wear a mask (N95 or K95) over their mouth and nose while indoors at the conference."

While much of what went on at the Socialism 2025 conference would strike most Americans as extreme, even fringe, socialist sympathizers in Congress — Reps. Alexandria Ocasio-Cortez (D-N.Y.), Pramila Jayapal (D-Wash.), Ilhan Omar (D-Minn.), Rashida Tlaib (D-Mich.), Ro Khanna (D-Calif.), and Sens. Bernie Sanders (I-Vt.) and Elizabeth Warren (D-Mass.) — are the real energy and drive behind today's Democratic Party. And now add Zohran Mamdani.

If you're sorry you missed this year's Socialism 2025 conference, take heart. You'll have another chance to sip fine wine with the socialists. The Democratic Socialists of America, boasting 90,000 members, will have their biennial convention Aug. 8-10 in Chicago (where else?).
Business Times
a day ago
- Business
- Business Times
Egypt grand museum delay puts tourism hopes on hold
[CAIRO] In the shadow of the Grand Egyptian Museum, souvenir shop owner Mona has been readying for the tourist boom she hoped the long-awaited opening would bring, now once again out of reach. "I had bet everything on this opening," she said from her shop, just steps from the iconic pyramids of Giza, which the much-anticipated museum overlooks.

Originally scheduled to fully open this month, the museum was expected to attract up to five million visitors annually, fuelling optimism across Cairo's battered tourism sector. "We planned our entire summer and fall packages around the museum opening," said Nadine Ahmed, a 28-year-old agent with Time Travel tours. "But with group cancellations, refunds and route changes, we have lost tens of thousands of US dollars." Though parts of the museum have been open for months, the main draw, the treasures of Tutankhamun, will remain under wraps until the official launch.

Less than three weeks before its Jul 3 opening, the government announced another delay, this time pushing the landmark event to the final quarter of the year. Prime Minister Mostafa Madbouly cited regional security concerns and the desire to host an event of "global scale".

Decades in the making

The vast museum, two decades in the making, has faced repeated delays, from political upheaval and economic crises to the Covid-19 pandemic. Ahead of the expected launch, Mona, who asked to be identified by her first name only, took out a loan to renovate her store and stock up on goods inspired by the museum's collection.

A few streets away, Mohamed Mamdouh Khattab, 38, prepared months in advance, hiring and training extra staff and expanding his inventory. "The opening of the museum is a key milestone," said Khattab, who owns a sprawling bazaar of handcrafted jewellery and ancient replicas.
"It's a project that should have been launched a long time ago," said the vendor, whose family has been in the industry since the 1970s. Tourism accounts for about 10 per cent of Egypt's workforce, but the sector has struggled, from the fallout of the 2011 Arab Spring to militant attacks and the Covid-19 shutdown. Still, signs of recovery have emerged: Egypt welcomed 3.9 million tourists in the first quarter of 2025, up 25 per cent from the same period last year, itself a record.

Fragility

At a Giza papyrus workshop, 30-year-old tour guide Sara Mahmoud hopes the opening will revive visitor numbers. "Big openings have brought a lot of tourism to Egypt before," she said, pointing to the 2021 Pharaohs' Golden Parade and the reopening of the Avenue of the Sphinxes. "These events get people excited, we saw the crowds coming in."

Such momentum could make a real difference, said Ragui Assaad, an economist at the University of Minnesota. "Any initiative that directly increases foreign exchange earnings is likely to have a good return on investment," he said. "If you compare it with all the other mega-projects, which do not increase foreign exchange earnings... this is a far better project." He was referring to a sweeping infrastructure drive under President Abdel Fattah El-Sisi, including the construction of a massive new administrative capital east of Cairo.

The stakes are high: since 2022, Egypt's currency has lost two-thirds of its value, squeezing household budgets and straining every layer of the economy. "There were days when I sold just one bracelet," Mona lamented, thinking back to the years when "tourists arrived in droves".

AFP


Focus Malaysia
2 days ago
- General
- Focus Malaysia
Dramatic video of bull tossing man highlights safety concerns
LIVING alongside cattle for many, many millennia, humans can attest to the great many benefits these gentle giants provide. But at times, these creatures do have their way of reminding their human handlers how devastating they can be if they so choose.

A recent video by netizen @bckupacc99 highlights this point in stark detail, where a man could be seen flying into the air after being charged by the bull he was handling.

"Ya Allah harap pakcik tu baik-baik saja 🥺" ("Ya Allah, I hope that uncle is okay") — @bckupacc99, July 15, 2025

Even better was the music embedded alongside the video, stating that nothing beats a Jet2 holiday. The brief video did not provide further information, such as the location of the scene or the condition of the man. However, we are certain the man is not experiencing a Jet2 holiday from that charge, looking at those long, sharp horns. But it does show how dangerous and unpredictable a bull can be. For a better perspective, the website Sanfermin said the force of a charging bull is more than four tons.

According to the University of Minnesota, it is vital to remember that even "nice" bulls that were bottle-fed as calves can get more aggressive with age. Readers were advised not to turn their back on a bull even if they think it is not paying attention. And that was exactly what the man in the video did before he got catapulted into the air.

"Be constantly vigilant whenever there is a bull around. Pay attention to the bull's body language. They will turn broadside to present their size and power, and watch for signs of aggression such as pawing at the ground," said the website, adding that bulls can and will attack unprovoked.

The website further reminded that bulls, just like all cattle, are prey species. When scared, bulls can unintentionally hurt humans working with them due to their size and strength. With a casual toss of its head, a bull can send a gate hurtling back at the cattle handler. Also, a running bull brushing against the handler can cause serious injury.
— July 15, 2025

Main image: @bckupacc99 (X)


News18
4 days ago
- Business
- News18
Who Is Uday Ruddarraju? Indian-Origin Engineer Who Quit Elon Musk's xAI To Join OpenAI
Ruddarraju, head of infrastructure engineering at Elon Musk's xAI, resigned after a year. He built Colossus, a supercomputer with 250,000 GPUs, and trained Grok 3.

Uday Ruddarraju, the Indian-origin head of infrastructure engineering at Elon Musk's AI startup xAI, has announced his resignation, marking the end of a landmark year in which he helped build one of the world's most powerful AI systems. At xAI, Ruddarraju spearheaded the creation of Colossus, a supercomputer housing over 250,000 GPUs — one of the largest such systems ever built. He also led the training of Grok 3, the company's latest AI model, overseeing the largest pre-training run known to date across 100,000 GPUs.

Who Is Uday Ruddarraju?

Uday Ruddarraju completed a master's degree in computer science at the University of Minnesota before taking up engineering roles at eBay and Robinhood. In June 2024, he joined xAI and X (formerly Twitter) as Head of Infrastructure Engineering. His remit was vast — covering compute, networking, storage, data platforms, production systems, research tools, and data-centre back-office operations.

Architect Of Colossus And Grok 3

At xAI, his role included building and operating large-scale AI systems, modernising core infrastructure for both xAI and X to boost reliability and developer productivity, streamlining CI/CD pipelines for faster deployment, and debugging performance issues to ensure stable, long-duration training cycles.

On 8 July, Ruddarraju announced his departure from xAI through posts on X and LinkedIn. Reflecting on his time at the company, he described the experience as "unforgettable" and confirmed that his last day had already passed. "When I first joined, I thought everyone was absolutely nuts for thinking we could deploy 100K GPUs in 4 months, especially without a fully functioning site. Watching us go and double that and most importantly successfully train Grok 3 made me incredibly proud... and very happy to be wrong," he wrote.
Ruddarraju also noted that building Colossus and training Grok 3 were among the highlights of his career and moments he would always remember. "Special thanks to everyone on the infrastructure team who met impossible expectations and the entire research team for your partnership throughout. You're the best! I am certain building Colossus and training Grok 3 are definitely highlights of my career I will remember forever."

In his message, Ruddarraju also referenced Nvidia CEO Jensen Huang, saying he was right in his praise of Elon Musk and his teams. He described them as "singular in what they can achieve" and expressed gratitude for having played a small part in shaping the future of AI infrastructure from the inside. He ended his note by hinting at future plans, saying he was "excited for what's next."