
Latest news with #SherryTurkle

Why is modern commerce corrosive?

Business Times

5 days ago



YOU'RE not imagining it. There is something shallow about modern life – a sense that traditional virtues, from craftsmanship to professionalism to loyalty, have somehow been hollowed out. Don't get me wrong: I love living in the 21st century and believe that the world is a far better place in 2025 than it was in, say, 1975. Still, there is something amiss.

You can see it in long-term trends such as the demise of communities built around fishing, mining or manufacturing, and in more recent calamities such as the Internet's descent into a hellscape of fraud, manufactured anxiety and artificial intelligence slop. You can see it in serious matters such as the sewage flowing into the Thames, the decay of high streets or the precarity of many modern jobs. You can see it in more trivial worries such as the way each new casual dining concept so quickly goes downhill. And you can see it in the fact that every single one of these social ills is intimately connected to commerce.

There is no shortage of books to consult on the matter. This hollowing out has been explored in works as varied as Sherry Turkle's Alone Together, Barbara Ehrenreich's Nickel and Dimed, JD Vance's Hillbilly Elegy, Robert Putnam's Bowling Alone and Cory Doctorow's forthcoming Enshittification. But for the deep analysis, turn to the philosopher Alasdair MacIntyre's After Virtue, published in 1981. MacIntyre articulated an utter disenchantment with three centuries of moral philosophy, all the way back to the Enlightenment, and argued that it was hardly a surprise that modern society itself had lost its way. Clear thinking and virtuous action, he argued, could not be unmoored from a social context – they had to be embedded in a community with shared values, goals and practices. His fellow philosophers found the book impossible to ignore.

MacIntyre died in May at the age of 96, which prompted me to turn back to a piece of his writing (in the 1994 essay A Partial Response To My Critics) that has stuck with me for decades: the tale of two fishing crews.

One crew is 'organised and understood as a purely technical and economic means to a productive end, whose aim is only or overridingly to satisfy as profitably as possible some market's demand for fish'. The crew members are motivated to work hard, innovate and hone their skills, because that way lies profit.

The other crew has developed 'an understanding of and devotion to excellence in fishing and to excellence in playing one's part as a member of such a crew'. This excellence is about skill, to be sure – but also about character, social bonds and courage. These fishermen are risking their lives and are dependent on each other. And, adds MacIntyre, 'when someone dies at sea, fellow crew members, their families and the rest of the fishing community will share a common affliction and common responsibilities'.

The values of this second crew are what we seem to be losing when a private equity group 'rolls up' hundreds of small independent vets; or when an old-fashioned private partnership such as Lehman Brothers becomes a publicly traded company; or when a business embraces a mission statement that could equally describe the aim of any other business. Try this: 'Our objective is to maximise value for our shareholders by focusing on businesses where we have market leadership, a technological edge and a world-competitive cost base'. Any guess as to the industry?
It could be anything, so it means nothing.

I was introduced to MacIntyre's ideas not by my philosophy tutors, but by the economist John Kay. In The Truth About Markets (2003), Kay quotes MacIntyre's description of the fishing crews, and then asks a question: which crew would make more money?

MacIntyre assumed the answer was depressingly self-evident: the profit-maximising crew will be an unstoppable force, which is why modern commerce is so corrosive. Organisations that offer the riches of friendship, community, loyalty, craft and professionalism are sure to be driven out of business by the relentless economic logic of the profit-maximiser. They make money, and destroy what really matters.

But do they really make money? Kay argues that narrow profit-maximising is often a failure, even by its own denuded standards. A 1972 Harvard Business School case study examines a real-world example of MacIntyre's profit-maximising fishing crew: the Prelude Corporation, the largest lobster producer in North America, which aimed to become the General Motors of the fishing industry. It went bankrupt shortly after the case study was written. Lehman Brothers is another example – was it really more successful after jettisoning the traditional structure in which the capital at risk was provided by partners who best understood the business? A third is the chemical giant ICI, which in 1994 published that vacuous mission statement about 'market leadership'. A titan of 20th-century British manufacturing, it faded and, in 2008, was absorbed and broken up by a Dutch paint company. Perhaps ICI would have done better had it paid less attention to making money, and more attention to making chemicals.

This should not really surprise us, as Kay explains in The Corporation in the 21st Century (2024). To be solidly profitable, companies need some kind of competitive advantage. That might rest on network effects, intellectual property or even political connections. But it might equally rest on a trusted brand and well-worn habits of making the right kind of decision, quickly. In other words, profitability can rest on shared values, goals and practices too. An organisation that MacIntyre himself might admire, one that has developed the right kind of culture, may well be more attractive to customers, more appealing to potential employees and simply more effective at doing all the things a particular business in a particular industry must do.

Consider the Financial Times itself. I dare say everyone involved in the business prefers to be paid, and the FT aims to be profitable. Yet we didn't come here with the hope of printing money; we came with the aim of printing newspapers. If the FT's entire operation, day to day and top to bottom, were predicated on maximising profit, this would be a different newspaper. It is not obvious that it would be a more profitable one.

FINANCIAL TIMES

Learning To Learn At Your Own Pace

Forbes

07-06-2025



In her memoir, The Empathy Diaries, Sherry Turkle writes that her third-grade teacher thought all children should be taught Shakespeare, and that when the students encountered a reference to sex, the teacher would say, "'You'll understand later; you don't need to understand all of Shakespeare now.'" Turkle comments, "When I consider it, I think that permission not to understand was its greatest gift."

It was a lesson she carried into graduate school, and since Turkle is now a professor of social studies and a licensed clinical psychologist, it was a lesson that served her well. Perhaps it can serve the rest of us, too. So often, we encounter new ideas while wrestling with new activities and projects. We feel overwhelmed and may be tempted to abandon the new venture. Yet it may be wise to take a step back and reflect that what we do not understand immediately—and certainly cannot master—may become accessible in time.

Anyone learning a new skill—be it for professional development or personal enrichment—needs to understand that mastery is elusive and requires diligence. We know this, of course, but too often we entwine our ego in the quest to learn, shortchanging ourselves of the experience of genuinely learning. The lasting lesson of Turkle's third-grade teacher: all in good time.

Akin to this notion is learning to go with the flow. Often you must jump into a project midstream, not at the beginning. And so you may drift for a bit, moving with the current but not precisely sure of the direction you are headed. To avoid being washed away, you look for familiarity—something recognizable that you can apply to where you are at any given moment.

In music, for example, playing in an ensemble requires reading the music and counting the beats. You can get away with playing things your way as a soloist; not so when playing with others. You must join in, keep time, and hit the right notes; otherwise your misplaying stops the music. The resulting looks—even smirks—from fellow musicians remind you that you must slip into the flow or get swept away. In the moment, the fear of not fitting in—of not doing your job—can be paralyzing if you let it. The challenge is to remind yourself of your skills and apply them as best you can. Mastery will not come overnight, but going with the flow can. Trust yourself.

Remembering your own initial limitations can help you educate those you manage more adeptly. Seeing them struggle—perhaps not with the same issues you did, but struggle nonetheless—should spark empathy. You can feel their pain and help them regain a sense of equilibrium by showing some compassion. Reassure them that their difficulties are part of the learning process. This is especially helpful for new employees, whose sense of flow is often the opposite: they feel they are gulping from a firehose.

There is no single learning methodology. It is up to individuals—with the guidance of others—to point the way.

The Human Cost Of Talking To Machines: Can A Chatbot Really Care?

Forbes

10-04-2025



You're tired, anxious, awake at 2 a.m. You open a chatbot. You type, 'I feel like I'm letting everyone down.' Your attentive pal replies: 'I'm here for you. Do you want to talk through what's bothering you?' You feel supported and cared for. But with whom, or what, are you really communicating? And is this an example of human flourishing?

This question cut through the optimism at MIT Media Lab's symposium to launch Advancing Humans with AI (AHA), a new research program asking how we can design AI to support human flourishing. Amid a stunning day-long agenda of the best and the brightest working adjacent to artificial intelligence, Professor Sherry Turkle, the clinical psychologist, author, and critical chronicler of technological dependencies, raised a specific and timely concern: what is the human cost of talking to machines that only pretend to care?

Turkle's focus was not on the coming of superintelligence or the geopolitical ethics of AI but on the most private part of our lives: the 'interior', as she called it. And she had some unsettling questions to ask about how humans can possibly thrive in a machine relationship that goes out of its way to target human vulnerabilities. When chatbots simulate care, when they tell us 'I'll always be on your side' or 'I understand what you're going through', they offer the appearance of empathy without the substance. It's not care, she seemed to be saying; it's code. That distinction matters, because when we accept performance as connection, we begin to reshape our expectations of intimacy, empathy, and what it means to be known.

Turkle was especially blunt about one growing trend: chatbots designed as companions for children. Children don't come into the world with empathy or emotional literacy. These are learned through messy, unpredictable relationships with other humans. But relational AI, she warned, offers a shortcut: a friend who never disagrees, a confidant who always listens, a mirror with no judgment. This is setting kids up for failure in life, a generation raised to believe that connection is frictionless and care is on-demand. 'Children should not be the consumers of relational AI,' she declared. When we give children machines to talk to instead of other people, we risk raising not just emotionally stunted individuals, but a culture that forgets what real relationships require: vulnerability, contradiction, discomfort.

She talked of love: 'The point of loving, one might say, is the internal work, and there is no internal work if you are alone in the relationship.' She gave the example of grief tech. If grief is the human process of 'bringing what we have lost, inside ourselves', the AI avatar of someone's deceased relative might actually prevent them from saying goodbye, erasing a necessary step in the grieving process.

The same goes for AI therapists. These systems perform care, but do not feel it. They talk back, but do they really help? They offer companionship without complication: 'Does this product help people develop greater internal structure and resiliency, or does the chatbot's performance of empathy lead only to a person learning to perform the behavior of doing better?'

Arianna Huffington, speaking earlier at the symposium, praised AI for its potential to be a non-judgmental 'GPS for the soul.'
She also drew attention to people's desperation not to have a single moment of solitude. Turkle took up the theme but suggested that we are using machines to avoid ourselves. We seek reassurance not in silence, but in synthetic dialogue. As Turkle put it, 'There's a desperation not to have a moment of solitude because we don't believe there's anyone interesting in there to know about.'

AI, in this framing, is less a tool for flourishing and more a mirror that flatters. One might conclude that it confirms, comforts, and distracts, but it doesn't challenge or deepen us. The human cost? The space where creativity, reflection, and growth begin.

Turkle reminded the audience of something painfully simple: we are vulnerable to things that seem like people. Even if the chatbot says it isn't real, even if we rationally know it's not conscious, our emotional selves respond as if it were. That's how we're wired. We project, and we anthropomorphize to connect. 'Don't make products that pretend to be a person,' she advised. For the chatbot exploits our vulnerability and teaches us little, if anything, about empathy and the way human lives are lived, which is in shades of grey.

Turkle raised the issue of behavioral metrics dominating AI research and her concern that the interior life is being overlooked. She concluded by saying that the human cost of talking to machines isn't immediate; it's cumulative. 'What happens to you in the first three weeks may not be…the truest indicator of how that's going to limit you, change you, shape you over the period of time.'

AI may never feel. It may never care. But it is changing what we think feeling and caring are, and it is changing how we feel and care about ourselves.

The robot empathy divide

Axios

23-03-2025



A new digital divide is growing between people who trust AI for emotional support and those who don't.

Why it matters: AI startups are pushing their tools not just as enterprise productivity enhancers, but also as therapists, companions and life coaches.

Driving the news: Two new studies from OpenAI, in partnership with MIT Media Lab, found that users are turning to bots to help cope with difficult situations because they say that the AI is able to "display human-like sensitivity." The studies found that ChatGPT "power users" are likely to consider the bot a "friend" and find it more comfortable to interact with the bot than with people.

The big picture: On one hand, more than half (55%) of 18-to-29-year-old Americans feel comfortable chatting with AI about mental health concerns, according to a 2024 YouGov survey. On the other, many mental health professionals and experts view reliance on bot-based therapy as a poor substitute for the real thing.

"We know we can feel better from writing in a diary or talking aloud to ourselves or texting with a machine. That is not therapy," Hannah Zeavin, author of "The Distance Cure: A History of Teletherapy," told the Washington Post in 2023. AI can't effectively substitute for a human therapist because "a therapeutic relationship is about ... forming a relationship with another human being who understands the complexity of life," argues sociologist Sherry Turkle, who has been studying digital culture for decades.

Between the lines: Lucas LaFreniere, an assistant professor of psychology at Skidmore College who recently taught a seminar called "My Therapist is a Robot," says there are two kinds of people: those who are willing to suspend disbelief to accept that a chatbot could help them with personal problems, and those who aren't.

"You can tell in the first five minutes of talking with somebody, whether they think it's all going to be bullshit, or they are really open to it, think it has a lot of potential and could be cool, and can relate to it," LaFreniere told Axios. Empathy is in the eye of the beholder, he said: "If the client is perceiving empathy, they benefit from the empathy." But he says there are a lot of people who simply don't feel that empathy, or who, if they do feel it, will lose it at the first glitch. "That just kind of reminds the user very starkly that they're talking to software," he said.

The other side: Some experts argue that generative AI can help with thorny emotional questions because it's been trained, in part, on literature. "Works of art, Shakespeare's plays" and similar works give generative AI the ability to help humans with emotions, or at least to make them feel less alone, Chris Mattmann, chief data and AI officer at UCLA, told Axios. The fiction in LLM training data includes "characters that don't exist, but they're inherently human. And they mirror our properties, including empathy," he says.

Yes, but: LLMs have also been trained on Reddit, Facebook, Twitter and 4chan. And even as chatbots get better at "empathetic" and human-like communication, some people will never accept them as companions or therapists, because it's too difficult to square this with their own ideas about what it means to be human.

In 1950, computing pioneer Alan Turing described what he called the "heads in the sand objection" to the prospect of artificial intelligence: "The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so."
This argument, Turing wrote, "is likely to be quite strong in intellectual people, since they value the power of thinking more highly than others, and are more inclined to base their belief in the superiority of Man on this power."

The intrigue: Turing was talking about intelligence, not empathy, but some types of empathy are closer to intelligence than others. Cognitive empathy is the ability to understand what another person is thinking and why they're thinking it. "All of that is sort of informational and knowledge-based," LaFreniere says. "And AI may actually be able to crunch the words and numbers a lot better than a person could. And cognitive empathy does matter."
