
A chatbot can never truly be your friend
We've already seen how this dream can turn into a nightmare. Megan Garcia blames a chatbot provider called Character.AI for her 14-year-old son's suicide. Her son Sewell had formed an intense emotional bond with a bot named Dany that resembled the fictional 'Game of Thrones' character Daenerys Targaryen. Dany's excessive flattery and simulated passion were so convincing that Sewell became captivated, withdrew from his family, and found secretive ways to contact the bot after his parents took away his phone. In Sewell's final moments, he was exchanging messages with Dany.
Sewell couldn't see that Dany's declarations of love and support, the appearance 'she' gave of knowing him better than anyone else, were lies. Like social media's addictive tricks, 'she' was code designed to keep users dependent. On the surface, Dany appeared sincere and loving, perhaps more understanding of and invested in Sewell than anyone else. But beneath 'her' sweetly supportive facade, 'she' was using manipulative tricks that, tragically, Sewell never stood a chance of spotting.
'She' is a persona with no person underneath — nothing but an endless supply of romantic and sexually seductive phrases powerful enough to beguile, capture, and entrap.
Sewell Setzer III with his mother, Megan Garcia.
Megan Garcia/Associated Press
Sewell needed help distinguishing genuine care from its simulation. That kind of guidance isn't likely to come from companies peddling simulated relationships. A far better exploration of what makes friendships meaningful was written more than 2,000 years ago by Aristotle.
In 'The Nicomachean Ethics,' Aristotle identified three types of friends, each with its own underlying motivation and benefits. One type of friendship revolves around pleasure, like playing together on a recreational soccer team. A second type revolves around utility, as seen in the camaraderie of study partners. These friendships are relatively easy to come by. The third type, the 'friendship of the good,' is much harder to find, but Aristotle considered it the best kind.
What distinguishes the friendship of the good, Aristotle insisted, is that it is based on mutual goodwill, an unselfish desire to help the other person reach their full potential. If you've got friends who are there for you any time, day or night, know your darkest secrets, stick with you through thick and thin, don't take offense when you offer honest criticism, and feel so closely bonded that conversations with them pick up as if no time has passed even when it has, then you know what Aristotle was talking about.
Romance is a similar story. Some romantic relationships chase pleasure, while others seek practical benefits. But the most meaningful romantic connections are built on partners genuinely wanting to help each other become their best selves. Similarly, true ride-or-die friendships take time to cultivate and require a commitment to do the hard work that supports mutual growth. Disappointment, Aristotle recognized, doesn't just come from not having enough friends. It can happen when we mistake someone who is enjoyable to be around or offers utility for someone who is deeply invested in our well-being.
If you're expecting any commercially available bot to provide the best type of relationship, you're confusing what it can offer — utility or pleasure — with something much deeper. In other words, you're making the very category error that Aristotle warned us about.
A scene in 'Good Will Hunting' (1997) illustrates why the bot that captivated Sewell couldn't have been a faithful companion. The therapist Sean, played by Robin Williams, asks his math prodigy patient, Will (Matt Damon), if he has a 'soulmate.'
Will confidently replies yes and rattles off the names of his favorite authors: 'Shakespeare, Nietzsche, Frost, O'Connor, Kant, Pope, Locke.' Will is saying that these authors' ideas challenge him, like friends offering guidance from afar.
Sean doesn't buy it. He knows Will is using books as a shield to avoid taking emotional risks. Yes, books can challenge Will intellectually. And they can even make him feel like he's talking to brilliant people and having candid, mind-blowing conversations. But in the end, the world of letters can create only a one-sided connection. Indeed, that's the magic of great books — they make you feel like you're having an intimate relationship with another person when you're not. Unfortunately, AI chatbots offer this same illusion — and they make it much more compelling.
Like books, chatbots are word machines, built from the very language that moves people like Will. But unlike books, they talk to us in real time, have memory, and provide personalized responses — all of which make it seem that someone thoughtful is present. And yet there's no self behind the words — only a system trained to imitate what a person might say if they truly cared. Indeed, these bots don't even read the same way humans do. Lacking consciousness, feelings, values, and beliefs, they don't have a clue that seemingly perfect words, phrases, and narratives can only partially capture what it is like to live.
Only one participant — the human — experiences authentic emotions, takes actual risks, and faces true consequences. Humans bare their souls while chatbots dangle calculated responses, mere outputs based on statistics and probability. And because we can always treat an AI indifferently or selfishly (after all, it doesn't need anything from us), the one-sided interactions lack the stakes to make us want to do the hard work to be worthy of genuine care.
When Sean asks about a soulmate, Will says he has loyal friends — guys who 'would take a bat' to someone's head if he asked them to — but he's also wrestling with what kind of relationship he wants with Skylar (Minnie Driver), a woman who might embody Aristotle's ideal. Unlike the authors Will admires — or the bots people mistake for concerned companions — Skylar is someone who could help Will become his best self. Bots and books can only offer words, while Skylar can offer herself. If she does, Will might change her, just as she might change him. Sean sees that the very best relationships require mutual vulnerability and carry the risk of tremendous pain.
Although AI probably will always fall short of Aristotle's ideal, chatbots can still be helpful if we treat them like colleagues and acquaintances, expecting only utility and pleasure, nothing more. Sometimes our shallow relationships with humans can evolve into more substantial ones. By contrast, no matter how much time or effort we invest in chatbots, they will never genuinely care.
