
The Human Cost Of Talking To Machines: Can A Chatbot Really Care?
You're tired, anxious, awake at 2 a.m. You open a chatbot. You type, 'I feel like I'm letting everyone down.' Your attentive pal replies: 'I'm here for you. Do you want to talk through what's bothering you?'
You feel supported and cared for.
But with whom or what are you really communicating? And is this an example of human flourishing?
This question cuts through the optimism at MIT Media Lab's event, a symposium to launch Advancing Humans with AI (AHA), a new research program asking how we can design AI to support human flourishing. Amid a stunning day-long agenda featuring the best and brightest working in and around artificial intelligence, Professor Sherry Turkle, the clinical psychologist, author, and critical chronicler of technological dependencies, raised a specific and timely concern: what is the human cost of talking to machines that only pretend to care?
Turkle's focus was not on the coming of super intelligence or the geopolitical ethics of AI but on the most private part of our lives: the 'interior' as she called it. And she had some unsettling questions to ask about how humans can possibly thrive in a machine relationship that goes out of its way to target human vulnerabilities.
When chatbots simulate care, when they tell us 'I'll always be on your side' or 'I understand what you're going through,' they offer the appearance of empathy without substance. She seems to be saying that it's not care, it's code.
That distinction matters. Because when we accept performance as connection, we begin to reshape our expectations of intimacy, empathy, and what it means to be known.
Turkle was especially blunt about one growing trend: chatbots designed as companions for children.
Children don't come into the world with empathy or emotional literacy. These are learned, through messy, unpredictable relationships with other humans. But relational AI, she warned, offers a shortcut. A friend who never disagrees, a confidant who always listens, a mirror with no judgment. This is setting kids up for failure in life: a generation raised to believe that connection is frictionless and care is on-demand.
'Children should not be the consumers of relational AI,' she declared. When we give children machines to talk to instead of other people, we risk raising not just emotionally stunted individuals, but a culture that forgets what real relationships require: vulnerability, contradiction, discomfort.
She talked of love: 'The point of loving, one might say, is the internal work, and there is no internal work if you are alone in the relationship.' She gave the example of grief tech. If grief is the human process of 'bringing what we have lost, inside ourselves,' then an AI avatar of someone's deceased relative might actually prevent them from saying goodbye, erasing a necessary step in the grieving process.
The same goes for AI therapists. These systems perform care, but do not feel it. They talk back, but do they really help? They offer companionship without complication: 'Does this product help people develop greater internal structure and resiliency, or does the chatbot's performance of empathy lead only to a person learning to perform the behavior of doing better?'
Arianna Huffington, speaking earlier at the symposium, praised AI for its potential to be a non-judgmental 'GPS for the soul.' She also drew attention to people's desperation not to have a single moment of solitude.
Turkle took up the theme but suggested that we are using machines to avoid ourselves. We seek reassurance not in silence, but in synthetic dialogue. As Turkle put it, 'There's a desperation not to have a moment of solitude because we don't believe there's anyone interesting in there to know about.'
AI, in this framing, is less a tool for flourishing and more a mirror that flatters. One might conclude that it confirms, comforts, and distracts, but it doesn't challenge or deepen us. The human cost? The space where creativity, reflection, and growth begin.
Turkle reminded the audience of something painfully simple: we are vulnerable to things that seem like people.
Even if the chatbot says it isn't real, even if we rationally know it's not conscious, our emotional selves respond as if it were. That's how we're wired. We project, and we anthropomorphize to connect.
'Don't make products that pretend to be a person,' she advised. Such a chatbot exploits our vulnerability and teaches us little, if anything, about empathy or about the way human lives are actually lived: in shades of grey.
Turkle also pointed to the dominance of behavioral metrics in AI research, and her concern that the interior life is being overlooked. She concluded by observing that the human cost of talking to machines isn't immediate; it's cumulative. 'What happens to you in the first three weeks may not be…the truest indicator of how that's going to limit you, change you, shape you over the period of time.'
AI may never feel. It may never care. But it is changing what we will come to think feeling and caring are, and it is changing how we feel and care about ourselves.

Related Articles


Fast Company
05-07-2025
How to avoid creating 'AI zombies' in your workplace
Zombies have always fascinated me: one favorite is Brad Pitt's World War Z, a fun movie of the genre with its brain-dead bodies wandering aimlessly through the world looking to feed off the living. But recent highly publicized research from MIT has left me wondering whether we are entering the land of the living dead as we head into the AI-powered workplace.

The study, carried out by MIT Media Lab, focused on how the use of chatbots impacts our thinking. Using EEG brain scans, researchers found that when people relied on AI to write essays, their brain activity plummeted, with as much as 55% less activity in areas related to memory, creativity, and attention. But that's not all: after pivoting away from the AI, users still underperformed in critical thinking and recall tasks. The research suggests, too, that this underperformance may heighten our risk for clinical depression and anxiety, Alzheimer's disease, and dementia.

The costs of AI to our critical thinking

The adoption of AI chatbots has been a rapid revolution; ChatGPT, for example, set a record for the fastest-growing user base of any modern consumer application when it reached over 100 million users two months after launching in 2022. Since then, many doomsayers have been predicting the demise of millions of jobs and the downgrading of humanity to no longer being the smartest in our universe. I believe in humanity's ability to change and adapt, but this research does raise some serious questions about the long-term impact of AI in organizations.

If we encourage our people to adopt AI-first strategies in our business activities, we may be setting ourselves up for failure. In the short term, things may be getting done more efficiently and revenue may be up. But what about the cost to your corporate resilience and problem-solving? The collective cognitive cost is that we risk creating a workforce that appears busy, but is functionally brain-dead, unable to think for themselves, problem-solve, or be creative. In other words? Zombies.

But there is still time to push back against the impending threat of corporate zombie-ism. Here are four things you can do to arm yourself against the invasion.

1. Hone a reinvention mindset. Reinvention isn't easy, but it's critical to be able to adapt to a fast-changing environment. This starts by reviewing your strengths and weaknesses. From there, you can make a conscious decision about what will serve you in the new world and what won't. Just like moving houses, you don't want to take all the junk with you. A reinvention mindset sees disruption as an opportunity, failure as a learning curve, and adaptability as a superpower.

2. Empower your team. As AI becomes the new normal, your team will need to evolve their skills to identify and adapt to new opportunities. Training them to be AI-competent, while still encouraging individuality and human-centric creativity and logic, will help maintain a healthy balance.

3. Tough it out. It's through failing, learning, growing, and continuing on that we build deep knowledge, resilience, and pride in our efforts. Create guidelines for your workplace's use of AI, and reinforce that AI is merely a tool rather than a complete solution.

4. Have fun. During challenging times, increased stress and cortisol often restrict our ability to think logically and strategically. We are in survival mode. One of the best ways to address this is to release the pressure valve by having fun. Encourage the team to laugh, play, enjoy, and live in the moment. A shot of dopamine will reinforce the culture of reinvention that will always win over zombies.

If a culture where zombies are accepted creates teams with high AI-dependency and lowered critical thinking skills, then creating a reinvention mindset is the best path to long-term success. By focusing on the human qualities that make a culture unique and high performing, such as curiosity, resilience, and creative problem-solving, you will build a culture of reinvention that won't just survive in this changing world order; it will lead it.


New York Post
28-06-2025
We've all got to do more to protect kids from AI abuse in schools
For the sake of the next generation, America's elected officials, parents and educators need to get serious about curbing kids' use of artificial intelligence, or the cognitive consequences will be devastating.

As Rikki Schlott reported in Wednesday's Post, an MIT Media Lab study found that people who used large language models like ChatGPT to write essays had reduced critical thinking skills and attention spans and showed less brain activity while working than those who didn't rely on the AI's help. And over time the AI users grew to rely more heavily on the tech, going from using it for small tweaks and refinement to copying and pasting whole portions of whatever the models spit out.

A series of experiments at UPenn/Wharton had similar results: participants who used large language models like ChatGPT were able to research topics faster than those who used Google, but lagged in retaining and understanding the information they got. That is: they weren't actually learning as much as those who had to actively seek out the information they needed.

The bottom line: using AI for tasks like researching and writing makes us dumber and lazier.

Even scarier, the MIT study showed that the negative effects of AI are worse for younger users. That's bad news, because all signs are that kids are relying more and more on tech in classrooms. A Pew poll in January found that some 26% of teens aged 13 to 17 admit to using AI for schoolwork, twice the 2023 level. It'll double again, faster still, unless the adults wake up.

We've known for years how smartphone use damages kids: shorter attention spans, less fulfilling social lives, higher rates of depression and anxiety. States are moving to ban phones in class, but years after the dangers became obvious, and long after the wiser private schools cracked down. This time, let's move to address the peril before a generation needlessly suffers irrevocable harm.

Some two dozen states have issued guidance on AI use in classrooms, but that's only a start: every state's education officials should ensure that every school cracks down. Put more resources into creating reliable tools and methods to catch AI-produced work, and into showing teachers how to stop it and warning parents and students of the consequences of AI overuse.

Absent a full-court press, far too many kids won't build crucial cognitive skills because a chatbot does all the heavy lifting for them while their brains are developing. Overall, AI should be a huge boon for humanity, eliminating vast amounts of busy work. But doing things the hard way remains the best way to build mental 'muscle.'

If the grownups don't act, overdependence on AI will keep spreading through America's classrooms like wildfire. Stop it now, before the wildfire burns out a generation of young minds.
Yahoo
27-06-2025
Does Using ChatGPT Really Change Your Brain Activity?
The brains of people writing an essay with ChatGPT are less engaged than those of people blocked from using any online tools for the task, a study finds. The investigation is part of a broader movement to assess whether artificial intelligence (AI) is making us cognitively lazy.

Computer scientist Nataliya Kosmyna at the MIT Media Lab in Cambridge, Massachusetts, and her colleagues measured brain-wave activity in university students as they wrote essays either using a chatbot or an Internet search tool, or without any Internet at all. Although the main result is unsurprising, some of the study's findings are more intriguing: for instance, the team saw hints that relying on a chatbot for initial tasks might lead to relatively low levels of brain engagement even when the tool is later taken away.

Echoing some posts about the study on online platforms, Kosmyna is careful to say that the results shouldn't be overinterpreted. This study cannot and did not show 'dumbness in the brain, no stupidity, no brain on vacation,' Kosmyna laughs. It involved only a few dozen participants over a short time and cannot address whether habitual chatbot use reshapes our thinking in the long term, or how the brain might respond during other AI-assisted tasks. 'We don't have any of these answers in this paper,' Kosmyna says. The work was posted ahead of peer review on the preprint server arXiv on 10 June.

Kosmyna's team recruited 60 students, aged 18 to 39, from five universities around the city of Boston, Massachusetts. The researchers asked them to spend 20 minutes crafting a short essay answering questions, such as 'should we always think before we speak?', that appear on Scholastic Assessment Tests, or SATs. The participants were divided into three groups: one used ChatGPT, powered by OpenAI's large language model GPT-4o, as the sole source of information for their essays; another used Google to search for material (without any AI-assisted answers); and the third was forbidden to go online at all. In the end, 54 participants wrote essays answering three questions while in their assigned group, and then 18 were reassigned to a new group to write a fourth essay, on one of the topics that they had tackled previously.

Each student wore a commercial electrode-covered cap, which collected electroencephalography (EEG) readings as they wrote. These headsets measure tiny voltage changes from brain activity and can show which broad regions of the brain are 'talking' to each other.

The students who wrote essays using only their brains showed the strongest, widest-ranging connectivity among brain regions, and had more activity going from the back of their brains to the front, decision-making area. They were also, unsurprisingly, better able to quote from their own essays when questioned by the researchers afterwards. The Google group, by comparison, had stronger activations in areas known to be involved with visual processing and memory. And the chatbot group displayed the least brain connectivity during the task.

More brain connectivity isn't necessarily good or bad, Kosmyna says. In general, more brain activity might be a sign that someone is engaging more deeply with a task, or it might be a sign of inefficiency in thinking, or an indication that the person is overwhelmed by 'cognitive overload'.
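For readers wondering what 'connectivity' means in practice here: one common, simple way to quantify coupling between two EEG channels is spectral coherence. The short Python sketch below is purely illustrative and is not the study's actual analysis pipeline; the signals are synthetic, and every parameter (sampling rate, band limits, window size) is an assumption chosen for demonstration.

# Illustrative sketch only: estimating 'connectivity' between two EEG
# channels as spectral coherence. All values here are assumptions.
import numpy as np
from scipy.signal import coherence

fs = 256                      # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)  # 10 seconds of synthetic data

# Two synthetic 'channels': a shared 10 Hz alpha-band rhythm plus independent noise.
shared = np.sin(2 * np.pi * 10 * t)
ch_a = shared + 0.5 * np.random.randn(t.size)
ch_b = shared + 0.5 * np.random.randn(t.size)

# Coherence runs from 0 (independent) to 1 (perfectly coupled) at each frequency.
freqs, coh = coherence(ch_a, ch_b, fs=fs, nperseg=512)

# Average coherence in the alpha band (8 to 12 Hz) as a crude 'connectivity' score.
alpha = (freqs >= 8) & (freqs <= 12)
print(f"Alpha-band coherence: {coh[alpha].mean():.2f}")

Real EEG pipelines involve artifact rejection, many channels, and more sophisticated connectivity measures, but the core idea, comparing how synchronized two signals are across frequencies, is the same.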
Interestingly, when the participants who initially used ChatGPT for their essays switched to writing without any online tools, their brains ramped up connectivity, but not to the same level as in the participants who worked without the tools from the beginning.

'This evidence aligns with a worry that many creativity researchers have about AI: that overuse of AI, especially for idea generation, may lead to brains that are less well-practised in core mechanisms of creativity,' says Adam Green, co-founder of the Society for the Neuroscience of Creativity and a cognitive neuroscientist at Georgetown University in Washington DC.

But only 18 people were included in this last part of the study, Green notes, which adds uncertainty to the findings. He also says there could be other explanations for the observations: for instance, these students were rewriting an essay on a topic they had already tackled, and therefore the task might have drawn on cognitive resources that differed from those required when writing about a brand-new topic.

Confoundingly, the study also showed that switching to a chatbot to write an essay after previously composing it without any online tools boosted brain connectivity, the opposite, Green says, of what you might expect. This suggests it could be important to think about when AI tools are introduced to learners to enhance their experience, Kosmyna says. 'The timing might be important.'

Many educational scholars are optimistic about the use of chatbots as effective, personalized tutors. Guido Makransky, an educational psychologist at the University of Copenhagen, says these tools work best when they guide students to ask reflective questions, rather than giving them answers. 'It's an interesting paper, and I can see why it's getting so much attention,' Makransky says. 'But in the real world, students would and should interact with AI in a different way.'

This article is reproduced with permission and was first published on June 25, 2025.