ChatGPT, write my wedding vows: are we OK with AI in everyday life?

The Guardian · 30 June 2025
Earlier this spring, Nik Vassev heard a high school friend's mother had died. Vassev, a 32-year-old tech entrepreneur in Vancouver, Canada, opened up Claude AI, Anthropic's artificial intelligence chatbot.
'My friend's mom passed away and I'm trying to find the right way to be there for him and send him a message of support like a good friend,' he typed.
Vassev mostly uses AI to answer work emails, but also for personal communications. 'I just wanted to just get a second opinion about how to approach that situation,' he says. 'As guys, sometimes we have trouble expressing our emotions.'
Claude helped Vassev craft a note: 'Hey man, I'm so sorry for your loss. Sending you and your family lots of love and support during this difficult time. I'm here for you if you need anything …' it read.
Thanks to the message, Vassev's friend opened up about their grief. But Vassev never revealed that AI was involved. People 'devalue' writing that is AI assisted, he acknowledges. 'It can rub people the wrong way.'
Vassev learned this lesson because a friend once called him out for relying heavily on AI during an argument: 'Nik, I want to hear your voice, not what ChatGPT has to say.' That experience left Vassev chastened. Since then, he's been trying to be more sparing and subtle, 'thinking for myself and having AI assist', he says.
Since late 2022, AI adoption has exploded in professional contexts, where it's used as a productivity-boosting tool, and among students, who increasingly use chatbots to cheat.
Yet AI is becoming the invisible infrastructure of personal communications, too – punching up text messages, birthday cards and obituaries, even though we associate such compositions with 'from the heart' authenticity.
Disclosing the role of AI could defeat the purpose of these writings, which is to build trust and express care. Nonetheless, one person anonymously told me that he used ChatGPT while writing his father of the bride speech; another wished OpenAI had been around when he had written his vows because it would have 'saved [him] a lot of time'. Online, a Redditor shared that they used ChatGPT to write their mom's birthday card: 'she not only cried, she keeps it on her side table and reads [it] over and over, every day since I gave it to her,' they wrote. 'I can never tell her.'
Research about transparency and AI use mostly focuses on professional settings, where 40% of US workers use the tools. However, a recent study from the University of Arizona concluded that 'AI disclosure can harm social perceptions' of the disclosers at work, and similar findings apply to personal relationships.
In one 2023 study, 208 adults received a 'thoughtful' note from a friend; those who were told the note was written with AI felt less satisfied and 'more uncertain about where they stand' with the friend, according to Bingjie Liu, the lead author of the study and an assistant professor of communication at Ohio State University.
On subreddits such as r/AmIOverreacting or r/Relationship_advice, it's easy to find users expressing distress upon discovering, say, that their husband used ChatGPT to write their wedding vows. ('To me, these words are some of the most important that we will ever say to each other. I feel so sad knowing that they weren't even his own.')
AI-assisted personal messages can convey that the sender didn't want to bother with sincerity, says Dr Vanessa Urch Druskat, a social and organizational psychologist and professor specializing in emotional intelligence. 'If I heard that you were sending me an email and making it sound more empathetic than you really were, I wouldn't let it go,' she says.
'There's a baseline expectation that our personal communications are authentic,' says Druskat. 'We're wired to pick up on inauthenticity, disrespect – it feels terrible,' she says.
But not everyone draws the same line when it comes to how much AI involvement is tolerable or what constitutes deceit by omission. Curious, I conducted an informal social media poll among my friends: if I used AI to write their whole birthday card, how would they feel? About two-thirds said they would be 'upset'; the rest said it would be fine. But if I had used AI only in a supplementary role – say, some editing to hit the right tone – the results were closer to 50-50.
Using AI in personal messages is a double gamble: first, that the recipient won't notice, and second, that they won't mind. Still, there are arguments for why taking the risk is worthwhile, and why a hint of AI in a Hinge message might not be so bad. For instance, AI can be helpful for bridging communication gaps rooted in cultural, linguistic or other forms of diversity.
Plus, personal messages have never been totally spontaneous and original. People routinely seek advice from friends, therapists or strangers about disagreements, delicate conversations or important notes. Greeting cards have long come with pre-written sentiments (although Mother's Day founder Anna Jarvis once scolded that printed cards were 'lazy').
Sara Jane Ho, an etiquette expert, says she has used ChatGPT 'in situations where I've been like: "Change this copy to make it more heartfelt." And it's great copy.'
Ho argues that using ChatGPT to craft a personal message actually shows 'a level of consideration'.
Expressing sensitivity helps build relationships, and it makes sense that people who struggle with words would appreciate assistance. Calculators are standard digital tools; why not chatbots? 'I always say that the spirit of etiquette is about putting others at ease,' she says. 'If the end result is something that is nice for the other person and that shows respect or consideration or care, then they don't need to see how the sausage is made.'
I asked Ho what she would say to a person upset by an AI-assisted note. 'I'd ask them: "Why are you so easily offended?"' Ho says.
Plus, she says using AI is convenient and fast. 'Why would you make yourself walk someplace if you have a car?' she asks.
Increasingly, people are drifting through digitized lives that reject 'the very notion that engagement should require effort', perceiving less value in character-building experiences like 'working hard' and 'learning well', author and educator Kyla Scanlon argued in an essay last month. This bias toward effortlessness frames the emotional work of relationships as burdensome, even though that work is what creates intimacy.
'People have sort of conditioned themselves to want a completely seamless and frictionless experience in their everyday lives 100% of the time,' says Josh Lora, a writer and sociologist who has written about AI and loneliness. 'There are people who Uber everywhere, who Seamless everything, who Amazon everything, and render their lives completely smooth.'
Amid this convenience-maxxing, AI figures as an efficient way out of relational labor, or small mistakes, tensions and inadequacies in communication, says Lora.
We use language to be understood and to co-create a sense of self. 'So much of our experience as people is rendered in the struggle to make meaning, to self-actualize, to explain yourself to another person,' Lora says.
But when we outsource that labor to a chatbot, we lose out on developing self-expression, nuanced social skills, and emotional intelligence. We also lose out on the feelings of interpersonal gratitude that arise from taking the time to write kindly to our loved ones, as one 2023 study from the University of California, Riverside, found.
Many people already approach life as a series of objectives: get good grades, get a job, earn money, get married. In that mindset, a relationship can feel like something to manage effectively rather than a space of mutual recognition. What happens if it stops feeling worth the effort?
Summer (who requested a pseudonym for privacy), a 30-year-old university tutor, said she became best friends with Natasha (also a pseudonym) while they were pursuing their doctoral degrees. They lived four hours apart, and much of their relationship unfolded in long text message exchanges, debating ideas or analyzing people they knew.
About a year ago, Natasha began to use ChatGPT to help with work tasks. Summer said Natasha quickly seemed deeply enamoured with AI's speed and fluency. (Researchers have warned the technology can be addictive, to the detriment of human social engagement.) Soon, subtle changes in tone and content led Summer to suspect Natasha was using AI in their personal messages. (Natasha did not respond to a request for comment.)
After six years of lively intellectual curiosity, their communication dwindled. Occasionally, Natasha asked Summer for her opinion on something, then disappeared for days. Summer felt like she was the third party to a deep conversation happening between her best friend and a machine. 'I'd engage with her as a friend, a whole human being, and she'd engage with me as an obstacle to this meaning-making machine of hers,' Summer tells me.
Summer finally called Natasha to discuss how AI use was affecting their friendship. She felt Natasha was exchanging the messy imperfections of rambling debate for an emotionally bankrupt facsimile of ultra-efficient communication. Natasha didn't deny using chatbots, and 'seemed to always have a reason' for continuing despite Summer's moral and intellectual qualms.
Summer 'felt betrayed' that a close friend had used AI as 'an auxiliary' to talk to her. 'She couldn't find the inherent meaning in us having an exchange as people,' she says. To her, adding AI into relationships 'presupposes inadequacy' in them, and offers a sterile alternative: always saying the right thing, back and forth, frictionless forever.
The two women are no longer friends.
'What you're giving away when you engage in too much convenience is your humanity, and it's creepy to me,' Summer says.
Dr Mathieu Corteel is a philosopher and author of a book grappling with the implications of AI (only available in French) as a game we've all entered without 'knowing the rules'.
Corteel is not anti-AI, but believes that overreliance on it alienates us from our own judgement and, by extension, our humanity – 'which is why I consider it as one of the most important philosophical problems we are facing right now', he says.
If a couple, for example, expressed love through AI-generated poems, they'd be skipping crucial steps of meaning-making to create 'a combination of symbols' absent of meaning, he says. You can interpret meaning retrospectively, reading intent into an AI's output, 'but that's just an effect', he says.
'AI is unable to give meaning to something because it's outside of the semantics produced by human beings, by human culture, by human interrelation, the social world,' says Corteel.
If AI can churn out convincingly heartfelt words, perhaps even our most intimate expressions have always been less special than we'd hoped. Or, as tech theorist Bogna Konior recently wrote: 'What chatbots ultimately teach us is that language ain't all that.'
Corteel agrees that language is inherently flawed; we can never fully express our feelings, only try. But that gap between feeling and expression is where love and meaning live. The very act of striving to shrink that distance helps define those thoughts and feelings. AI, by contrast, offers a slick way to bypass that effort. Without the time it takes to reflect on our relationships, the struggle to find words, the practice of communicating, what are we exchanging?
'We want to finish quickly with everything,' says Corteel. 'We want to just write a prompt and have it done. And there's something that we are losing – it's the process. And in the process, there's many important aspects. It is the co-construction of ourselves with our activities,' he says. 'We are forgetting the importance of the exercise.'