
Latest news with #ELIZA

Mark Zuckerberg Wants AI to Solve America's Loneliness Crisis. It Won't

Time Magazine

14-05-2025

In early May, Mark Zuckerberg made headlines for comments about how he sees the potential of AI to transform social isolation. In a conversation with podcaster Dwarkesh Patel, he pointed out that the 'average American has fewer than three friends' and argued that the answer may not be in traditional policy solutions like reinvesting in community centers or expanding mental health support. 'I think people are going to want a system that knows them well and understands them in the way that their feed algorithms do,' he continued, envisioning AI friends, companions, and therapists as the remedy for modern disconnection.

Zuckerberg's prescription might sound extreme. But it raises profound philosophical questions: What does it mean to have a friend? Can algorithms ever replicate the complex and messy process of forging meaningful bonds with another being? These are questions that we, as a society, need to answer. Quickly.

Zuckerberg is correct that there's a real problem. The loneliness epidemic is increasingly serious. Surveys show that Americans' in-person interactions have dropped by as much as 45% in recent years across certain groups. Beyond simply loneliness, the challenge can be described in terms of falling trust and social cohesion—a deficit of belonging. There's growing evidence that social media and the decline of in-person social connection have coincided with major increases in anxiety and depression, as well as political polarization and pessimism about the future. Today, the U.S. ranks last among G7 countries in terms of trust in public institutions.

The Meta founder is also right that AI can meet some of a person's immediate emotional needs. Since the 1960s, when MIT researchers developed ELIZA, a program designed to mimic a psychotherapist, we've known that even basic AI interactions can provide temporary comfort. Contemporary studies even show that ChatGPT responses are rated highly in therapeutic contexts, suggesting these AI systems may provide accessible support without the biases and limitations of human therapists. While they may have their own biases and hallucinations, AI companions offer consistency and immediate availability, and can tailor interactions precisely to an individual's preferences—something busy friends or family members can't always do.

Still, the case for preserving real human bonds isn't just a romantic ideal or techno-skepticism. Connection is what makes us human, and despite Zuckerberg's enthusiasm, there's clear evidence that real human interaction can't be replaced by machines. Researchers like Julianne Holt-Lunstad of Brigham Young University have demonstrated how face-to-face interactions reduce not only psychological distress but physical health problems, including cardiovascular disease. Neuroscientist Marco Iacoboni of UCLA highlights the role of 'mirror neurons,' specialized brain cells activated only through direct human interactions, crucial for empathy and emotional understanding—capacities AI interactions cannot stimulate.

Human relationships are intricate and inherently messy, providing intangible benefits like growth through discomfort and emotional depth through complexity. Psychological research stretching back over 50 years shows that even unpleasant social interactions—including misunderstandings and disagreements—can foster empathy, problem-solving skills, and resilience. Overcoming social friction is integral to community-building.

There's danger in normalizing the idea that human connection can be replaced by AI. It leads people to deprioritize investments in human connection. Rather than navigating a difficult but necessary conversation or making an effort to win someone's trust, there's the temptation to turn instead to the chatbot for companionship. In doing so, we run the risk of viewing ourselves as little more than machines. This is corrosive to human character as well as community.

There's a simple reason why we hear a lot about human-centered solutions to the social connection crisis—including investments in mental health, youth programs, community centers, volunteerism, libraries, parks, and quality public spaces. They work. Decades of research validate that well-funded, vibrant public spaces and service-oriented initiatives significantly reduce loneliness and increase trust and social cohesion. Yet our investments in these strategies remain insufficient.

The decline of religious institutions also plays a significant role in weakening social bonds. Religion has historically cultivated rituals, intergenerational connections, community building, and opportunities to explore profound existential questions—all essential aspects of belonging. Though understandable reasons lead many people to distance themselves from organized religion, we can't discard the wisdom these structures have built over centuries. As religious participation declines, one important answer is to create secular frameworks that similarly build common meaning and shared purpose among people—qualities that machines, by definition, cannot provide.

It's easy to laugh off Zuckerberg's comments as the musings of another out-of-touch tech mogul eager to automate humanity. But dismissing them isn't enough. This is a moment that demands clarity and resolve around the irreplaceable value of human presence. Zuckerberg accurately diagnoses our crisis of isolation. But the solution will not be automated.

The science of AI and the AI of science

The Hindu

01-05-2025

  • Science

The fundamental idea behind artificial intelligence (AI) stems from the British mathematician Alan Turing, who in the 1950s defined the idea of intelligence in a machine. During World War II, while attempting to break the encryption codes the Nazis used to transmit secret messages, he wondered whether machines would be able to find patterns in large amounts of data that humans couldn't. He speculated that machines could learn from experience instead of being taught to work from first principles.

Computer scientist John McCarthy coined the term 'artificial intelligence' in a summer workshop in 1955 that many leading computer scientists attended. While the idea enjoyed enough attention in the subsequent decade — enough for the first chatbot, ELIZA, to be created in 1966 — funding dipped in the 1970s before rebounding later. By 1997, IBM's Deep Blue had defeated chess champion Garry Kasparov, around the same time researchers were starting to build sophisticated artificial neural networks that supercharged machine learning. Soon, the idea emerged that these neural networks, which are computers that process information like networks of neurons in animal brains, could solve most scientific problems.

From ANNs to GPUs

Artificial neural networks, or ANNs, could learn to solve each problem by digesting large amounts of data, mapping the relationships between various problem statements and their respective solutions, and finally recreating these relationships for unsolved problems. This paradigm is called unsupervised learning. In supervised learning, humans label the data with tags the machine picks up on. For example, humans can create a database of images of cats and dogs and label them accordingly. The ANN that 'trains' with the database then 'learns' what 'cat' and 'dog' stand for. In another type of learning, called reinforcement learning, humans go through the results of what a machine has learnt and provide feedback for the machine to improve.

Every ANN consists of nodes, small computers that accept input signals and provide an output. The nodes are divided into groups called layers, and the layers are connected to each other like neurons in the brain: each node in one layer connects to nodes in the next layer. It's a sandwich: the layers are the two slices of bread and in between are all the connections between the nodes. Not all connections are equal; some are more important than others. These relationships are adjusted by giving each connection a weight. The greater the weight, the more important the signal passing along that connection. By adjusting the weights, the arrangement of nodes, and the number of layers, the ANN can be tuned to learn and process data in different ways.
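To make the node-layer-weight picture concrete, here is a minimal sketch of a forward pass through a tiny fully connected network in Python with NumPy. The layer sizes, random weights, and ReLU nonlinearity are illustrative choices, not taken from any model discussed in this article.

```python
import numpy as np

# A tiny fully connected network: 3 input nodes -> 4 hidden nodes -> 2 output nodes.
# Every connection between two nodes carries a weight; the larger the weight,
# the more important the signal passing along that connection.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))  # weights: input layer -> hidden layer
W2 = rng.normal(size=(4, 2))  # weights: hidden layer -> output layer

def forward(x):
    """One forward pass: each node sums its weighted inputs and applies
    a nonlinearity (ReLU) before passing the signal to the next layer."""
    hidden = np.maximum(0, x @ W1)  # hidden-layer activations
    return hidden @ W2              # output-layer signals

print(forward(np.array([1.0, 0.5, -0.2])))
```

'Training' amounts to adjusting the entries of W1 and W2 until the outputs match known answers, which is the data-hungry learning process described above.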
Machine-learning models that use such an ANN architecture are capable of processing in a few hours databases that might take humans several months — as long as they have the requisite computing power. This power comes from graphics processing units (GPUs), an advanced version of the central processing units (CPUs) that power home computers. GPUs are specialised to solve multiple mathematical problems simultaneously, speeding up the ANN's learning process.

Machine learning v. artificial intelligence

Recognising patterns in any form of data is the domain of machine learning (ML). It has applications in many fields. For example, ML models installed on self-driving cars are trained to check the condition of the cars' various components and, if possible, perform repairs. In the clinical realm, ML models can learn to find patterns in disorders that lead to new forms of treatment, or read test reports to identify the risk of specific diseases.

AI, on the other hand, is broader. It is based on more recent advances in ML that mimic human intelligence in problem-solving — like completing an unfinished sentence the way Arthur C. Clarke might or creating an image in the style of Vincent van Gogh. Such AI models are being rapidly adapted for various applications. For example, researchers can build ML algorithms that digest the average behaviour of a user's financial accounts, like transaction frequency, spending limits, login times, and device use, according to Jia Zhai, senior associate professor in the Department of Finance at Xi'an Jiaotong-Liverpool University in Suzhou, China. 'If a fraudster gains valid credentials but logs in via an unrecognised device at 3 am and initiates rapid microtransactions, clustering algorithms detect this as an outlier compared to the user's historical behaviour,' she said. Then, more specialised networks called convolutional neural networks look for complex patterns in transactions; recurrent neural networks identify deviations from average spending behaviour; and graph neural networks examine the connections between accounts, merchants, and IP addresses to uncover hidden money-laundering networks, said Shimeng Shi, assistant professor in the same department.
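As a toy illustration of the outlier detection Zhai describes, and not the researchers' actual system, a density-based clustering algorithm such as DBSCAN can flag a 3 am login with a burst of microtransactions as fitting no cluster of the user's historical behaviour. The features and parameter values below are invented for the example.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical per-login features: [hour of login, transactions in first 10 minutes].
history = np.array([[9, 2], [10, 1], [11, 3], [9, 2], [10, 2],
                    [14, 1], [15, 2], [13, 3], [9, 1], [10, 3]])
suspicious = np.array([[3, 40]])  # 3 am login followed by rapid microtransactions

X = np.vstack([history, suspicious])
labels = DBSCAN(eps=0.9, min_samples=3).fit_predict(StandardScaler().fit_transform(X))

# DBSCAN labels any point that fits no dense cluster as -1, i.e. an outlier.
print("suspicious login flagged as outlier:", labels[-1] == -1)  # True
```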
The capabilities of AI surged from around 2017, when researchers began using ML to process large amounts of data simultaneously using multiple GPUs. A major advance that resulted was the large language model. As private-sector enterprises figured out how to apply this and other models to solve different but specific problems, manufacturers and vendors rushed to meet the demand for the underlying hardware. This in turn led to more computing power and faster chips entering the market. Another equally important and roughly simultaneous development was the availability of large datasets on which the new batch of AI/ML models could be trained. All together, the next major advance took shape: generative AI, where an AI model didn't just analyse what was in front of it but also put existing information together in new ways, e.g. creating an image based on a user's text instructions. Perhaps the best-known products that make such capabilities available to users are ChatGPT and DALL-E, both made by the US-based company OpenAI. Shimeng Shi also said financial firms have been trying to 'help their clients to generate real-time trade ideas' using 'AI-empowered tools' that are out of view.

The technology isn't a silver bullet, of course. Completely autonomous AI agents are not yet a reality because of their tendency to 'hallucinate', i.e. invent information that doesn't exist in the real world. This happens when an AI model is confronted with a kind of data it hasn't been trained on, causing it to mix it up with data it is familiar with.

Precision, speed, structure

'Your model is as good as your data,' said Aditi Shanmugam, a research associate for analytics and databases at Bengaluru-based startup Ultrahuman who uses AI models to draw inferences from health data. 'For any good model, you need lots of data with good diversity,' added Debnath Pal, professor in the Department of Computational and Data Sciences at the Indian Institute of Science (IISc), Bengaluru. The next thing a good model needs after training data is hardware resources.

'Each data centre — especially a large one with AI GPUs — can consume as much power as a whole nuclear power plant will produce,' said Akash Pradhan, a member of the technical staff at chip-maker AMD. The machines also generate a large amount of heat of their own, which means they need to be cooled, which requires even more power. If the machines are performing a particularly complex task, the data they are manipulating needs to be stored on high-speed hard drives.

Given all these requirements, most of the better AI research today — especially of the cutting-edge variety — is led by big corporations with deep pockets. But it may not always be this way. Many computer scientists are working on techniques to lower the power and hardware requirements of specific models without compromising their problem-solving ability. For example, Rakesh Sengupta, director of the Centre for Creative Cognition at S.R. University in Warangal, is working on a technique called pruning. In a recent paper, he proposed a method in which some connections in a neural network are cut while the most important ones are preserved, after which the model is retrained to work with the smaller set of connections. He believes we can 'trim' existing models without sacrificing their reliability, and added that he feels 'customising small language models for specialised tasks in healthcare or robotics' will improve the most. The faster and more precise AI models become, the more precise applications they will find — 'whether it's neural prosthetics or brain-computer interfaces or some [other] technologies that can interface seamlessly with the brain,' Sengupta said.

Most AI researchers use the most accessible models and data to achieve specific goals. In their absence, researchers draw up datasets from first principles and mix them with available ones to create more complete, and more reliable, datasets. For example, Pal said, materials science researchers integrate experimental data on the properties of materials with synthetic data on the presence of other materials, creating datasets that are more complete and contain more information for the models to search through. 'After doing all these experiments, you may be able to figure out that, "oh, if I dope with this material, then I would get that property". Such experiments are being done and then it is kind of reducing the time to realise those compositions,' Pal said.

But defining the problems and arriving at solutions is not always straightforward, and often depends on factors that require researchers to dig deep into the specific peculiarities of the data and the models. For example, Adway Mitra, an assistant professor in the Centre of Excellence in Artificial Intelligence at IIT-Kharagpur, believes there is considerable scope to use AI models to improve weather and seasonal predictions, especially of the Indian monsoons. This is what he does. Often, weather data exists as a combination of textual, visual, and numerical data. 'We first condense the space of all weather patterns to a small number (about 10) of "typical" patterns, and our claim is that every day's weather pattern is an approximate or noisy version of any one of these "typical" patterns,' Mitra explained. Generative AI models train on these datasets and create new data from them that are easier to analyse and represent as mathematical structures.
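A minimal sketch of that condensing step, assuming k-means clustering on synthetic data; the actual pipeline is far more sophisticated, but the idea of reducing every day's weather to the nearest of roughly ten 'typical' patterns is the same.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for weather data: one row per day, each row a flattened
# grid of (say) rainfall readings. Shapes and values are invented.
rng = np.random.default_rng(1)
daily_patterns = rng.random((365, 50))  # 365 days, 50 grid points per day

# Condense the space of daily patterns into 10 "typical" patterns (centroids).
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(daily_patterns)

# Each day is then an approximate, noisy version of its assigned typical pattern.
print("day 0 is a noisy version of typical pattern", kmeans.labels_[0])
```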
But real-world weather data is often noisy and difficult to interpret, and weather is a complex system with many parameters across various locations and times. 'The key technical challenge is the availability of weather data,' Mitra said. Weather data has structures that an ML model must be able to work with, and Mitra's research focuses on what kind of algorithms or models scientists can use to best utilise that structure.

Thus, researchers like Mitra are turning the idea of AI back to where it started: while machines are good at understanding patterns, at the end of the day the patterns must be supported by physics, because weather patterns are created by physical processes. The question researchers are thus asking is: 'How can we constrain machine learning so that it provides us values which are consistent with the different laws of physics?' This exercise, Mitra said, will bring down the number of computations the AI models will need to perform to make accurate weather predictions, and thus demand less power and data storage infrastructure.
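One common way to impose such a constraint, offered here as a general illustration rather than the method Mitra's group uses, is to add a penalty to the training loss whenever the model's outputs violate a known physical law. The toy 'law' below is a conservation condition: predicted inflow into a region must equal predicted outflow.

```python
import numpy as np

def total_loss(pred_in, pred_out, target_in, target_out, lam=10.0):
    """Data-fitting loss plus a physics penalty for violating conservation."""
    data_loss = np.mean((pred_in - target_in) ** 2 + (pred_out - target_out) ** 2)
    physics_penalty = np.mean((pred_in - pred_out) ** 2)  # inflow != outflow
    return data_loss + lam * physics_penalty

# The weight lam trades off fitting the data against obeying the constraint;
# a model trained to minimise this loss is pushed toward physical consistency.
print(total_loss(np.array([1.2]), np.array([0.7]), np.array([1.0]), np.array([1.0])))
```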
Towards AI agents

Sandeep Juneja, a professor of computer science and director of the Safexpress Centre for Data, Learning and Decision Sciences at Ashoka University, said corporations like Google have large data-driven AI models that are already doing this at scale — but that they may be running out of data to train with. On the other hand, he added, academics in India and even worldwide don't have the computational capacity to develop such large models for nuanced weather predictions. He said models like DeepSeek provide hope, as they have been able to use 'clever' tricks to train models efficiently on small amounts of data. But Chiranjib Bhattacharyya, a professor in the Department of Computer Science and Automation at IISc, said that even DeepSeek's model is large compared to what academics can presently access.

Lixian Qian, an associate dean for research and professor in the Department of Intelligent Operations and Marketing at Xi'an Jiaotong-Liverpool University, works on autonomous vehicles that use AI algorithms to model their complex environment, predict the movement of objects on the road, and decide how the vehicle moves to avoid accidents. While there has been significant integration of AI into autonomous vehicles, he said practical challenges remain — and AI has the ability to address them. 'AI algorithms can increase the number of tests on autonomous driving systems in diverse driving environments, so that the potential problems could be uncovered and diagnosed in advance.'

In a sense, then, we are slowly transitioning from a world of generative AI to one of agentic AI. AI agents are more powerful than the present versions of AI, which still specialise in particular tasks. They integrate the power of different functionalities into an ecosystem that can be empowered to make particular decisions. For example, AI assistants may one day be able to parse data about a person's life, including their hobbies, expenses, health conditions, work, and life priorities, and help them with tasks like booking appointments or filling out forms. However, how much of such a technology will be accessible and usable to people at large will depend on data privacy protections and technological literacy. Bhattacharyya said social scientists and law scholars will play an important role in shaping how such systems fit into our lives.

Sohini Majumdar, a software engineering manager at Salesforce, agreed the time for agentic AI was near. Many business platforms are increasingly using agentic AI instead of simple chatbots to integrate their businesses and increase their impact. However, she added, fundamental challenges remain in using generative AI models too. The principal challenge is understanding why an AI model outputs one specific business decision rather than another — especially if the output deviates from a human understanding of the business. So she and her colleagues use yet other AI models to validate the decisions suggested by generative AI. Their aim is to understand what a model is doing and how to tweak various inputs so that the model does what they want it to. In this way, her team will be able to make automated decisions and trust them as well.

According to Bhattacharyya, the fundamental problem boils down to AI models currently lacking the ability to reason. Pal agreed: 'What is the path that [a model] follows? Is it following the same path that as a human I would want it to follow to do this inference? That we don't know.' Mathematicians, computer scientists, and physicists are currently trying to untangle this Gordian knot.

Pradhan of AMD said these challenges are fundamental: despite neural networks being modelled on the human brain, the way machines learn and the way the human brain functions are different. One fundamental difference is that the computational blocks of an AI model — the GPUs — are physically separate from where the model's parameters are stored, whereas in the brain computation and memory happen in the same place. Second, chemical reactions run the brain whereas electricity runs digital machines. These challenges, Pradhan said, can be mitigated by neuromorphic computing, where the hardware more closely mimics how the neural networks in our brains operate. 'Instead of you writing code to emulate a neural network, your hardware is the neural network,' he said. Functional neuromorphic computers of the future are expected to require less power and to update their models automatically when they encounter new data — just like our brains. But there are multiple hardware and software challenges to be surmounted before they can be realised, Pradhan said.

Sengupta is sceptical of how much AI will truly mimic us. While each generation of humans has been more comfortable with the increasing presence of smarter gadgets and software, and the tools have changed us too, there might be a natural barrier to how much AI can affect us. But it has also made us think deeply about our technologies. Just as we constantly grapple with understanding our own existence, we might have to do the same when we invite AI into every aspect of our lives.

Debdutta Paul is a freelance science journalist.

Therabot Humanizes AI Help, Recasts Tech Strategy

Forbes

10-04-2025

  • Health

Dartmouth researchers successfully piloted AI-powered therapy. The groundbreaking Dartmouth research could reshape mental health care with an AI-powered therapy chatbot that wins patient trust and delivers measurable clinical gains. The implications reach far beyond the clinical couch and deep into corporate C-suites.

Therabot's trial treated over 100 participants diagnosed with depression, anxiety, or eating disorders. After eight weeks, the symptom-reduction results published in the New England Journal of Medicine were striking. "Our results are comparable to what we would see for people with access to gold-standard cognitive therapy with outpatient providers," Dartmouth Geisel School of Medicine professor Nick Jacobson highlighted. For businesses struggling with employee mental health concerns and skyrocketing healthcare costs, AI solutions like Therabot could represent a scalable intervention that meets high standards. It also recasts workplace debates about how widely AI can help.

What makes Therabot particularly notable is its success in a field long considered "AI-proof" due to the presumed necessity of personal empathy and connection. If AI can forge therapeutic relationships comparable to those with human providers, few professional domains can confidently claim immunity from similar disruption. Participants reported genuine, trusted connections with Therabot. Users frequently initiated conversations with the AI beyond prompted interactions, with usage spikes seen during vulnerable times such as the middle of the night. This unexpected development suggests AI systems might fill social and emotional support roles that extend beyond therapy into legacy business functions such as sales and marketing, customer service, hiring, and training.

Unlike location-bound counseling, AI therapy can intervene at critical moments. "It was available around the clock for challenges that arose in daily life and could walk users through strategies to handle them in real time," says co-author and Dartmouth postdoctoral fellow Michael Heinz. For employers, this access could reduce absenteeism. That's an elusive combination: process efficiency that simultaneously delivers heightened effectiveness.

Therabot shows AI's capability to spur innovation and humans' capacity to thwart it. Since the pioneering mid-1960s release of Joseph Weizenbaum's ELIZA, the risks of tech-based therapy have been well documented and exhaustively debated. Therabot models responsible AI development in high-stakes domains. "There are a lot of folks rushing into this space since the release of ChatGPT and it's easy to put out a proof of concept that looks great at first glance, but the safety and efficacy is not well established," Jacobson notes. "This is one of those cases where diligent oversight is needed and providing that really sets us apart." Therabot's extensive input from mental health leaders shows that slow, methodical development yields more trusted products.

AI's success in "uniquely human" realms signals more disruption risk for legacy jobs. Many employers may not even sense the boundless potential or the looming jeopardy. To date, AI has conquered time by speeding up many highly structured work tasks. Now, leaders must ask how it can tackle high-touch, ill-structured activities. Those tech strategy solutions start with credible leadership, a curious culture, and capable talent. In turn, ten questions assess AI attitudes, awareness, ambition, aspiration – and odds; the (non)answers tell all.

The question isn't whether AI will transform business, but how quickly, and who will be the change architects or the casualties. Bot therapy, anyone?

Does ELIZA, the first chatbot created 60 years ago, hold lessons for modern AI?

CBC

18-02-2025

  • Entertainment

Scientists recently resurrected the original computer code for ELIZA, considered the world's first chatbot and thought by many to be the first use of artificial intelligence (AI). Its original creator, however, was adamant the program was not intelligent at all.

ELIZA was a simple program created in the 1960s by Joseph Weizenbaum, a professor at the Massachusetts Institute of Technology who was interested in human-computer interfaces. He designed ELIZA as a generalized conversation program, to make a natural language conversation with a computer possible. The program could run scripts with different language rules to represent individual conversational personas.

Weizenbaum named this early natural language processing program ELIZA after Eliza Doolittle in the 1912 George Bernard Shaw play Pygmalion, in which a commoner learns to speak with an upper-class English diction despite her working-class roots. "Like the Eliza of Pygmalion fame, it can be made to appear even more civilized," Weizenbaum wrote in his 1966 paper in the journal Communications of the ACM.

He wrote the first and most well-known script to run on ELIZA, called "DOCTOR," to mimic a first interview between a patient and a psychotherapist. It responded to inputs from the user by selecting key words and spinning them back in a friendly way. For example, here is part of the most famous transcript that Weizenbaum described in his 1966 paper: an exchange between the program and a young woman, after it initially prompted her with "PLEASE TELL ME YOUR PROBLEM."

User: "Men are all alike."
ELIZA: "IN WHAT WAY"
User: "They're always bugging us about something or other."
ELIZA: "CAN YOU THINK OF A SPECIFIC EXAMPLE"
User: "Well, my boyfriend made me come here."
ELIZA: "YOUR BOYFRIEND MADE YOU COME HERE"
User: "He says I'm depressed much of the time."
ELIZA: "I'M SORRY TO HEAR YOU ARE DEPRESSED"

The world's first 'chatbot' was a hit

Unlike chatbots today, which use large language models and train on massive language databases to generate responses, ELIZA — developed three decades before the term "chatbot" even existed — had little data to rely on. The DOCTOR script would search for keywords in the person's reply and mirror them back in its response. If no keyword was detected, the script would simply offer a generic response. ELIZA was programmed to prompt the user to continue the conversation. That made it seem like a good listener, which is important in psychotherapy, and is what made the program a huge sensation.
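A minimal sketch of that keyword-and-mirror mechanism in Python; the handful of patterns and canned replies below are invented for illustration and are far simpler than Weizenbaum's actual DOCTOR script.

```python
import random
import re

# Keyword rules: regex pattern -> response template that mirrors captured text.
RULES = [
    (re.compile(r"\bmy (\w+) made me (.+)", re.I), "YOUR {0} MADE YOU {1}"),
    (re.compile(r"\bi'?m (\w+)", re.I), "I'M SORRY TO HEAR YOU ARE {0}"),
    (re.compile(r"\balways\b", re.I), "CAN YOU THINK OF A SPECIFIC EXAMPLE"),
]
GENERIC = ["IN WHAT WAY", "PLEASE GO ON", "TELL ME MORE"]  # no keyword found

def reply(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups()).upper()
    return random.choice(GENERIC)  # generic prompt keeps the conversation going

print(reply("Well, my boyfriend made me come here"))
# -> YOUR BOYFRIEND MADE YOU COME HERE
```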
During the late 1960s and early '70s, before computers shrank for individual use, they were large and quite costly mainframe systems. So ELIZA seemed miraculous, with students believing the machine could think like a human and understand their problems. At the time, Weizenbaum described the response to ELIZA as "a striking form of Turing's test," where a user cannot tell whether responses are coming from a machine or a real person.

I had the privilege of meeting Joseph Weizenbaum in the early '80s. He told me, "The program totally backfired. People thought ELIZA was intelligent, they were confiding in the machine, revealing personal issues they would not tell anyone else. Even my secretary asked me to leave the room so she could be alone with the computer. They called me a genius for creating it, but I kept telling them that the computer was not thinking at all."

Later, Weizenbaum wrote a book called Computer Power and Human Reason: From Judgment to Calculation, in which he emphasized that computers, as clever and capable as they may become, do not think like humans and should never replace humans in roles such as doctors, teachers or scientists. He disliked the term "artificial intelligence," believing that humans are always necessary and computers should never be allowed to make important decisions.

Reanimating defunct code

For nearly 60 years, AI historians thought the original 420-line computer code for ELIZA and the famous DOCTOR script were lost. But in 2021, two software sleuths found the original printouts of the code in a dusty box in Weizenbaum's archives at MIT. In a paper that has yet to be peer-reviewed, those software scientists and others wrote that the only way to see whether the code worked was to try it — a task made all the more difficult because the defunct code was written for a computer and operating system that no longer existed.

Back in the 1960s, MIT had an IBM 7094, an early transistorized computer loaded with 32 kilobytes of user memory. At the time, it was one of the biggest and fastest computers available. The operating system developed for it, the Compatible Time-Sharing System (CTSS), was the world's first time-sharing system, meaning it could support around 30 users at once. To resurrect the original ELIZA program with its DOCTOR script, the researchers ran a restored CTSS operating system on hardware and software designed to emulate the original IBM 7094. On Dec. 31, 2024, they brought ELIZA back to life and tested it by recreating the "Men are all alike" conversation. The revived version, adapted to work on modern systems, is now available online for anyone to try out.

Weizenbaum's legacy lives on in Germany at the Weizenbaum Institute, dedicated to the critical exploration and constructive shaping of digitization for the benefit of society.

Today, AI is a powerful new tool that is having a profound influence on science, medicine, academia and culture, and it's growing at an astounding rate. This growth comes with a very real fear factor, helped along by Hollywood with the likes of the Terminator film series and WarGames, a 1983 film in which a computer nearly triggers a nuclear war — and, more recently, by ominous warnings from AI industry insiders. This past week, government leaders, executives, and experts from over 100 countries met in Paris for the Artificial Intelligence Action Summit to discuss the future of AI, with a focus on how to keep it both accessible and safe as the technology continues to develop at breakneck speed.

In a showdown of psychotherapists vs. ChatGPT, the latter wins, new study finds

Yahoo

13-02-2025

  • Health

Can couples in distress find as much support from artificial intelligence as they would from a therapist? Yes, a new study suggests.

For the experiment, researchers prepared couples-therapy scenarios—one in which a partner was dismissing the other's depression, for example—and then turned to two support systems for responses: a group of experienced therapists and the ChatGPT AI chatbot. They then had 830 study participants—about half men and half women, averaging 45 years old, most of whom were in relationships—randomly receive either a therapist or AI response to see how easily they could discern the source of each.

As the researchers—including lead author Gabe Hatch and colleagues from psychology and psychiatry programs at universities including the Ohio State University—suspected from the outset, participants had difficulty identifying whether responses were written by ChatGPT or by therapists; they correctly guessed that the therapist was the author 56.1% of the time and correctly guessed it was ChatGPT 51.2% of the time.

Further, participants in most cases preferred ChatGPT's take on the matter at hand. That was based on five factors: whether the response understood the speaker, showed empathy, was appropriate for the therapy setting, was relevant for various cultural backgrounds, and was something a good therapist would say. ChatGPT came out ahead of human therapists particularly around understanding the speaker, showing empathy, and showing cultural competence.

'This may be an early indication that ChatGPT has the potential to improve psychotherapeutic processes,' the authors of the study, published in PLOS Mental Health, wrote. Specifically, it could lead to the development of different ways to test and create psychotherapeutic interventions—something the authors urge mental health experts to pay attention to, given the mounting evidence that generative AI could be integrated into therapeutic settings sooner rather than later.

For years, experts have noted that psychology practice could do well with some AI innovations—including therapeutic chatbots, tools that automate note-taking and other administrative tasks, and more intelligent trainings—as long as clinicians had tools they could understand and trust, the American Psychological Association wrote back in 2023. 'The bottom line is we don't have enough providers,' Jessica Jackson, a licensed psychologist and equitable technology advocate based in Texas, told the APA for that story. 'While therapy should be for everyone, not everyone needs it. The chatbots can fill a need.' Training from a chatbot, the piece noted, could fill in gaps for some mental health concerns (such as sleep problems), make mental health support more affordable and accessible, and be a great tool for those with social anxiety who may find human therapists off-putting.

And since the 1966 invention of ELIZA—a chatbot programmed to respond as a Rogerian psychotherapist—researchers have debated whether AI could play the role of a therapist. Regarding that, the new study authors write, 'Although there are still many important lingering questions, our findings indicate the answer may be "Yes." We hope our work galvanizes both the public and Mental Practitioners to ask important questions about the ethics, feasibility, and utility of integrating AI and mental health treatment, before the AI train leaves the station.'
