Latest news with #AdvancingHumanswithAI


Forbes
14-04-2025
- Science
MIT Media Lab To Put Human Flourishing At The Heart Of AI R&D
Artificial intelligence is advancing at speed. Both the momentum and the money are focused on performance: faster models, more integrations, ever more accurate predictions. But as the industry sprints toward artificial general intelligence (AGI), one question lingers in the background: what happens to humans? A recent survey by Elon University's Imagining The Digital Future Center polled nearly 300 global technology experts. The resulting report, 'Being Human in 2035', concluded that most are concerned that the deepening adoption of AI systems over the next decade will negatively alter how humans think, feel, act and relate to one another.

MIT Media Lab is trying to answer a similarly alarming question: how can AI support, rather than replace, human flourishing? It is the central question of the Lab's newly launched Advancing Humans with AI (AHA) program. Heralded as a bold, multi-year initiative not just to improve AI but to elevate human flourishing in an AI-saturated world, the program was launched with a star-studded symposium that introduced the concept and the research domains it will tackle. Speakers included Arianna Huffington, who spoke of AI as a 'GPS for the soul', and Tristan Harris, who warned about systems exploiting human vulnerabilities under the guise of assistance. Both agreed that AI shouldn't just be optimized for efficiency; rather, it should be designed to cultivate wisdom, resilience, and reflection. This echoed AHA's deeper vision: to reorient AI development around designing for the human interior, the parts of us that make life worth living but often get left out of technical design conversations. Pat Pataranutaporn, co-lead of the AHA program, summed this up to the assembled audience, asking, 'What is the point of advancing artificial intelligence if we simultaneously devalue human intelligence and undermine human dignity? Instead, we should strive to design AI systems that amplify and enhance our most deeply human qualities.'

The Missing Research Layer in AI
While safety and alignment dominate AI ethics debates, AHA concerns itself with longer-term human outcomes, a theme woven through the sections of the event, which covered Interior Life, Social Life, Vocational Life, Cerebral Life and Creative Life. From over-reliance and skill atrophy to growing emotional attachment and isolation, people are already reshaping their lives around AI. But few research efforts are dedicated to systematically understanding these changes, let alone designing AI to mitigate them. AHA aims to do just that. The initiative is grounded in six research domains.

A Moonshot Mindset
The ambition of AHA is matched by its moonshot projects. The message is clear: it's time to measure the wellbeing of humans, not just the performance of machines.

Why Now?
As AI becomes increasingly embedded in health, education, work, and social life, the choices made by engineers and designers today will shape cognitive habits, emotional norms, and social structures for decades. Yet, as AHA's contributors pointed out throughout the symposium, AI is still mostly optimized for business metrics and safety concerns rather than for psychological nuance, emotional growth, or long-term well-being. MIT's AHA initiative is not a critique of AI. It's a call to design better: not just smarter machines, but systems that reflect us as our best selves. As Professor Pattie Maes, co-lead of the AHA program and director of the Fluid Interfaces group, told me after the event, 'We are creating AI, and AI in turn will shape us.
We don't want to make the same mistakes we made with social media. It is critical that we think of AI not just as a technical problem for engineers and entrepreneurs to solve, but also as a human design problem, requiring expertise from human-computer interaction designers, psychologists, and social scientists if AI is to have a beneficial impact on the human experience.'


Forbes
10-04-2025
The Human Cost Of Talking To Machines: Can A Chatbot Really Care?
You're tired, anxious, awake at 2 a.m. You open a chatbot. You type, 'I feel like I'm letting everyone down.' Your attentive pal replies: 'I'm here for you. Do you want to talk through what's bothering you?' You feel supported and cared for. But with whom, or what, are you really communicating? And is this an example of human flourishing?

This question cuts through the optimism at MIT Media Lab's symposium to launch Advancing Humans with AI (AHA), a new research program asking how we can design AI to support human flourishing. Amid a stunning day-long agenda of the best and brightest working in and around artificial intelligence, Professor Sherry Turkle, the clinical psychologist, author, and critical chronicler of technological dependencies, raised a specific and timely concern: what is the human cost of talking to machines that only pretend to care?

Turkle's focus was not on the coming of superintelligence or the geopolitical ethics of AI, but on the most private part of our lives: the 'interior', as she called it. And she had some unsettling questions about how humans can possibly thrive in a machine relationship that goes out of its way to target human vulnerabilities. When chatbots simulate care, when they tell us 'I'll always be on your side' or 'I understand what you're going through', they offer the appearance of empathy without substance. It's not care, she seems to be saying; it's code. That distinction matters, because when we accept performance as connection, we begin to reshape our expectations of intimacy, empathy, and what it means to be known.

Turkle was especially blunt about one growing trend: chatbots designed as companions for children. Children don't come into the world with empathy or emotional literacy. These are learned through messy, unpredictable relationships with other humans. But relational AI, she warned, offers a shortcut: a friend who never disagrees, a confidant who always listens, a mirror with no judgment. This sets kids up for failure in life: a generation raised to believe that connection is frictionless and care is on-demand. 'Children should not be the consumers of relational AI,' she declared. When we give children machines to talk to instead of other people, we risk raising not just emotionally stunted individuals, but a culture that forgets what real relationships require: vulnerability, contradiction, discomfort.

She talked of love: 'The point of loving, one might say, is the internal work, and there is no internal work if you are alone in the relationship.' She gave the example of grief tech. If grief is the human process of 'bringing what we have lost, inside ourselves', then an AI avatar of someone's deceased relative might actually prevent them from saying goodbye, erasing a necessary step in the grieving process. The same goes for AI therapists. These systems perform care, but do not feel it. They talk back, but do they really help? They offer companionship without complication: 'Does this product help people develop greater internal structure and resiliency, or does the chatbot's performance of empathy lead only to a person learning to perform the behavior of doing better?'

Arianna Huffington, speaking earlier at the symposium, praised AI for its potential to be a non-judgmental 'GPS for the soul.'
She also drew attention to people's desperation not to have a single moment of solitude. Turkle took up the theme, but suggested that we are using machines to avoid ourselves. We seek reassurance not in silence, but in synthetic dialogue. As Turkle put it, 'There's a desperation not to have a moment of solitude because we don't believe there's anyone interesting in there to know about.'

AI, in this framing, is less a tool for flourishing and more a mirror that flatters. One might conclude that it confirms, comforts, and distracts, but it doesn't challenge or deepen us. The human cost? The space where creativity, reflection, and growth begin.

Turkle reminded the audience of something painfully simple: that we are vulnerable to things that seem like people. Even if the chatbot says it isn't real, even if we rationally know it's not conscious, our emotional selves respond as if it were. That's how we're wired. We project, and we anthropomorphize to connect. 'Don't make products that pretend to be a person,' she advised. The chatbot exploits our vulnerability and teaches us little, if anything, about empathy or the way human lives are actually lived: in shades of grey.

Turkle raised the issue of behavioral metrics dominating AI research and her concern that the interior life is being overlooked. She concluded that the human cost of talking to machines isn't immediate; it's cumulative. 'What happens to you in the first three weeks may not be…the truest indicator of how that's going to limit you, change you, shape you over the period of time.' AI may never feel. It may never care. But it is changing what we think feeling and caring are, and it is changing how we feel and care about ourselves.