
'Hey man, I'm so sorry for your loss': should you use AI to text?
Earlier this spring, Nik Vassev heard a high school friend's mother had died. Vassev, a 32-year-old tech entrepreneur in Vancouver, Canada, opened up Claude AI, Anthropic's artificial intelligence chatbot.
'My friend's mom passed away and I'm trying to find the right way to be there for him and send him a message of support like a good friend,' he typed.
Vassev mostly uses AI to answer work emails, but also for personal communications. 'I just wanted to just get a second opinion about how to approach that situation,' he says. 'As guys, sometimes we have trouble expressing our emotions.'
Claude helped Vassev craft a note: 'Hey man, I'm so sorry for your loss. Sending you and your family lots of love and support during this difficult time. I'm here for you if you need anything …' it read.
Thanks to the message, Vassev's friend opened up about their grief. But Vassev never revealed that AI was involved. People 'devalue' writing that is AI-assisted, he acknowledges. 'It can rub people the wrong way.'
Vassev learned this lesson because a friend once called him out for relying heavily on AI during an argument: 'Nik, I want to hear your voice, not what ChatGPT has to say.' That experience left Vassev chastened. Since then, he's been trying to be more sparing and subtle, 'thinking for myself and having AI assist', he says.
Since late 2022, AI adoption has exploded in professional contexts, where it's used as a productivity-boosting tool, and among students, who increasingly use chatbots to cheat.
Yet AI is becoming the invisible infrastructure of personal communications, too – punching up text messages, birthday cards and obituaries, even though we associate such compositions with 'from the heart' authenticity.
Disclosing the role of AI could defeat the purpose of these writings, which is to build trust and express care. Nonetheless, one person anonymously told me that he used ChatGPT while writing his father of the bride speech; another wished OpenAI had been around when he had written his vows because it would have 'saved [him] a lot of time'. Online, a Redditor shared that they used ChatGPT to write their mom's birthday card: 'She not only cried, she keeps it on her side table and reads [it] over and over, every day since I gave it to her,' they wrote. 'I can never tell her.'
Research about transparency and AI use mostly focuses on professional settings, where 40% of US workers use the tools. However, a recent study from the University of Arizona concluded that 'AI disclosure can harm social perceptions' of the disclosers at work, and similar findings apply to personal relationships.
In one 2023 study, 208 adults received a 'thoughtful' note from a friend; those who were told the note was written with AI felt less satisfied and 'more uncertain about where they stand' with the friend, according to Bingjie Liu, the lead author of the study and an assistant professor of communication at Ohio State University.
On subreddits such as r/AmIOverreacting or r/Relationship_advice, it's easy to find users expressing distress upon discovering, say, that their husband used ChatGPT to write their wedding vows. ('To me, these words are some of the most important that we will ever say to each other. I feel so sad knowing that they weren't even his own.')
AI-assisted personal messages can convey that the sender didn't want to bother with sincerity, says Dr Vanessa Urch Druskat, a social and organizational psychologist and professor specializing in emotional intelligence. 'If I heard that you were sending me an email and making it sound more empathetic than you really were, I wouldn't let it go,' she says.
'There's a baseline expectation that our personal communications are authentic,' says Druskat. 'We're wired to pick up on inauthenticity, disrespect – it feels terrible.'
But not everyone draws the same line when it comes to how much AI involvement is tolerable or what constitutes deceit by omission. Curious, I conducted an informal social media poll among my friends: if I used AI to write their whole birthday card, how would they feel? About two-thirds said they would be 'upset'; the rest said it would be fine. But if I had used AI only in a supplementary role – say, some editing to hit the right tone – the results were closer to 50-50.
Using AI in personal messages is a double gamble: first, that the recipient won't notice, and second, that they won't mind. Still, there are arguments for why taking the risk is worthwhile, and why a hint of AI in a Hinge message might not be so bad. For instance, AI can be helpful for bridging communication gaps rooted in cultural, linguistic or other forms of diversity.
Plus, personal messages have never been totally spontaneous and original. People routinely seek advice from friends, therapists or strangers about disagreements, delicate conversations or important notes. Greeting cards have long come with pre-written sentiments (although Mother's Day founder Anna Jarvis once complained that printed cards were 'lazy').
Sara Jane Ho, an etiquette expert, says she has used ChatGPT 'in situations where I've been like: "Change this copy to make it more heartfelt." And it's great copy.'
Ho argues that using ChatGPT to craft a personal message actually shows 'a level of consideration'.
Expressing sensitivity helps build relationships, and it makes sense that people who struggle with words would appreciate assistance. Calculators are standard digital tools; why not chatbots? 'I always say that the spirit of etiquette is about putting others at ease,' she says. 'If the end result is something that is nice for the other person and that shows respect or consideration or care, then they don't need to see how the sausage is made.'
I asked Ho what she would say to a person upset by an AI-assisted note. 'I'd ask them: "Why are you so easily offended?"' Ho says.
Plus, she says using AI is convenient and fast. 'Why would you make yourself walk someplace if you have a car?' she asks.
Increasingly, people are drifting through digitized lives that reject 'the very notion that engagement should require effort', perceiving less value in character-building experiences like 'working hard' and 'learning well', author and educator Kyla Scanlon argued in an essay last month. This bias toward effortlessness casts the emotional work of relationships as burdensome, even though that work helps create intimacy.
'People have sort of conditioned themselves to want a completely seamless and frictionless experience in their everyday lives 100% of the time,' says Josh Lora, a writer and sociologist who has written about AI and loneliness. 'There are people who Uber everywhere, who Seamless everything, who Amazon everything, and render their lives completely smooth.'
Amid this convenience-maxxing, AI figures as an efficient way out of relational labor – the small mistakes, tensions and inadequacies of communication, says Lora.
We use language to be understood and to co-create a sense of self. 'So much of our experience as people is rendered in the struggle to make meaning, to self-actualize, to explain yourself to another person,' Lora says.
But when we outsource that labor to a chatbot, we lose out on developing self-expression, nuanced social skills, and emotional intelligence. We also lose out on the feelings of interpersonal gratitude that arise from taking the time to write kindly to our loved ones, as one 2023 study from the University of California, Riverside, found.
Many people already approach life as a series of objectives: get good grades, get a job, earn money, get married. In that mindset, a relationship can feel like something to manage effectively rather than a space of mutual recognition. What happens if it stops feeling worth the effort?
Summer (who requested a pseudonym for privacy), a 30-year-old university tutor, said she became best friends with Natasha (also a pseudonym) while the two were pursuing their respective doctoral degrees. They lived four hours apart, and much of their relationship unfolded in long text message exchanges, debating ideas or analyzing people they knew.
About a year ago, Natasha began to use ChatGPT to help with work tasks. Summer said Natasha quickly seemed deeply enamoured with AI's speed and fluency. (Researchers have warned the technology can be addictive, to the detriment of human social engagement.) Soon, subtle changes in tone and content led Summer to suspect Natasha was using AI in their personal messages. (Natasha did not respond to a request for comment.)
After six years of lively intellectual curiosity, their communication dwindled. Occasionally, Natasha asked Summer for her opinion on something, then disappeared for days. Summer felt like she was the third party to a deep conversation happening between her best friend and a machine. 'I'd engage with her as a friend, a whole human being, and she'd engage with me as an obstacle to this meaning-making machine of hers,' Summer tells me.
Summer finally called Natasha to discuss how AI use was affecting their friendship. She felt Natasha was exchanging the messy imperfections of rambling debate for an emotionally bankrupt facsimile of ultra-efficient communication. Natasha didn't deny using chatbots, and 'seemed to always have a reason' for continuing despite Summer's moral and intellectual qualms.
Summer 'felt betrayed' that a close friend had used AI as 'an auxiliary' to talk to her. 'She couldn't find the inherent meaning in us having an exchange as people,' she says. To her, adding AI into relationships 'presupposes inadequacy' in them, and offers a sterile alternative: always saying the right thing, back and forth, frictionless forever.
The two women are no longer friends.
'What you're giving away when you engage in too much convenience is your humanity, and it's creepy to me,' Summer says.
Dr Mathieu Corteel is a philosopher and the author of a book, available only in French, grappling with the implications of AI as a game we have all entered without 'knowing the rules'.
Corteel is not anti-AI, but believes that overreliance on it alienates us from our own judgment and, by extension, our humanity – 'which is why I consider it as one of the most important philosophical problems we are facing right now', he says.
If a couple, for example, expressed love through AI-generated poems, they would be skipping crucial steps of meaning-making to create 'a combination of symbols' absent of meaning, he says. You can interpret meaning retrospectively, reading intent into an AI's output, 'but that's just an effect', he says.
'AI is unable to give meaning to something because it's outside of the semantics produced by human beings, by human culture, by human interrelation, the social world,' says Corteel.
If AI can churn out convincingly heartfelt words, perhaps even our most intimate expressions have always been less special than we had hoped. Or, as the tech theorist Bogna Konior recently wrote: 'What chatbots ultimately teach us is that language ain't all that.'
Corteel agrees that language is inherently flawed; we can never fully express our feelings, only try. But that gap between feeling and expression is where love and meaning live. The very act of striving to shrink that distance helps define those thoughts and feelings. AI, by contrast, offers a slick way to bypass that effort. Without the time it takes to reflect on our relationships, the struggle to find words, the practice of communicating, what are we exchanging?
'We want to finish quickly with everything,' says Corteel. 'We want to just write a prompt and have it done. And there's something that we are losing – it's the process. And in the process, there's many important aspects. It is the co-construction of ourselves with our activities,' he says. 'We are forgetting the importance of the exercise.'