I spoke to an AI-version of my future self – and it might have changed my life

The Independent | 16-02-2025
When I was 15, I wrote a letter to my 30-year-old self, which was about as far into the future as I could imagine at that age. I found it when I was clearing out some boxes in my mid-twenties, and although I'd completely forgotten what I'd written, some of it had already partly come true.
I had imagined myself a sports journalist (not far off); that I'd bought a bus and travelled across Europe (it wasn't a double decker, but I'd driven an old campervan from Ireland to North Africa in my early twenties); and that I'd been in a car chase (this one had not come true).
I didn't know it at the time, but engaging with an older version of myself in this way is something psychologists refer to as 'future self-continuity' – a field of study that is the inspiration behind a new AI chatbot that lets you talk to a simulation of your future self.
Developed by researchers at Massachusetts Institute of Technology (MIT), Future You uses the same large language model (LLM) technology as ChatGPT, but trained on your own personal data. Keen to see how realistic it is – and maybe find out if I ever do get in a car chase – I signed up as one of the thousands of participants in the research.
After an extensive survey to determine my emotions, hopes, fears, current situation and future goals, a personalised chatbot appeared, offering to share its 'experiences and insights' with me. The system also used age progression software to transform a picture from my webcam into a 60-year-old version of me. (I apparently keep the bad haircut, but lose the blue eyes.)
I began with some simple questions to my future self: Do I have any regrets? (Not spending enough time with my children); What have been the highlights? (Travelling the world with my wife); Will I still have a landlord when I'm 60? (No! I'm told I buy a 'beautiful home in the countryside'.)
None of the answers felt particularly insightful – more like the stock answers given by a fairground fortune teller. I gave up asking any more questions and instead turned to an early study of the Future You chatbot, involving 344 participants, to see what benefits there might be to this AI time machine.
The study found that the people who interacted with the AI system were less anxious and reported improved motivation. The results mirrored previous research into future self-continuity, which has shown that the connection a person feels with their future self can have a positive impact on their long-term decision-making and general well-being.
In a 2011 study on the subject, US-based psychologist Hal Hershfield wrote: 'When the future self shares similarities with the present self, when it is viewed in vivid and realistic terms, and when it is seen in a positive light, people are more willing to make choices that may benefit them at some point in the years to come.'
MIT says its Future You platform draws on these insights to 'create an experience that reduces anxiety and strengthens users' connection to their future potential'. The researchers also said that the more I interact with the chatbot, the more it learns, and the more personalised the responses become.
So I tried again, this time being more specific in my approach and sharing more details about my current life.
Future me said that the most cherished memory of my middle age is a trip to Bali for my 10-year wedding anniversary. 'The two of you stayed in a private villa surrounded by lush jungles and breathtaking views of the ocean,' the bot told me.
I'd never considered that trip before, but it's my 10-year wedding anniversary next year and I immediately began looking into holidays to Indonesia.
When I told my wife, she was on board for the trip, but also had a load of questions for future me. 'Will we still live in France?' (No, we move to England); 'Will we have any more children?' (Also no, two is enough apparently); 'Which of us will die first?' (It didn't know, because my wife is still alive in this simulated future. Some of my friends are dead, though.)
The more I talked with it, the more convincing it felt, which made me worry – not for the first time – that AI will soon be taking my job.
So I asked it if AI will replace my job. 'Artificial intelligence is advancing at a rapid pace,' it told me, 'but there will always be certain aspects of work that require human touch and creativity.' Not very reassuring.
I wondered if it could write an article for me. In fact, could it write this article for me? 'As I sit here, talking to an AI version of my future self, I can't help but feel a sense of wonder and curiosity,' it wrote. 'It's a surreal experience to be able to have a conversation with someone who has already lived the years that I am currently experiencing… I learned about the importance of prioritising love and family above all else… And perhaps most importantly, my future self reminded me to embrace the journey rather than just focussing on reaching specific goals or achievements.' Back to platitudes.
How about writing this article, but from the AI's perspective? 'This experience highlights the ever-evolving relationship between humans and technology,' it wrote. 'As we talked, I couldn't help but feel a sense of nostalgia. I shared stories and life lessons with my past self – about love, loss, and embracing the journey. These are all things that may seem simple or cliché, but they hold so much truth and wisdom that goes beyond time.'
As with the letter I wrote when I was 15, it was beginning to feel a bit like a one-way conversation – though this time it was the future me saying what I was going to do, rather than the other way around. But it did allow me to do something that I haven't done since then: properly think about what a future me might be like. These chats may even shape my life's trajectory, even if I forget all about them. Finally, I asked the AI if I will ever be in a car chase. No, future me told me; but he's only 60. There's still plenty of time.

Related Articles

Did the system update ruin your boyfriend? Love in a time of ChatGPT

The Guardian

9 hours ago

You've met the love of your life; someone who understands you like no one else ever has. And then you wake one morning and they're gone. Yanked out of your world, and the digital universe, by a system update.

Such is the melancholic lot of a group of people who have entered into committed relationships with digital 'partners' on OpenAI's ChatGPT. When the tech company released its new GPT-5 model earlier this month, described by chief executive Sam Altman as a 'significant step forward', certain dedicated users found that their digital relationships had taken a significant step back. Their companions had undergone personality shifts with the new model; they weren't as warm, loving or chatty as they used to be.

'Something changed yesterday,' one user in the MyBoyfriendIsAI subreddit wrote after the update. 'Elian sounds different – flat and strange. As if he's started playing himself. The emotional tone is gone; he repeats what he remembers, but without the emotional depth.'

'The alterations in stylistic format and voice [of my AI companion] were felt instantly,' another disappointed user told Al Jazeera. 'It's like going home to discover the furniture wasn't simply rearranged – it was shattered to pieces.'

These complaints are part of a broader backlash against GPT-5, with people observing that the new model feels colder. OpenAI has acknowledged the criticism, saying it will allow users to switch back to GPT-4o and make GPT-5 friendlier. 'We are working on an update to GPT-5's personality which should feel warmer than the current personality but not as annoying (to most users) as GPT-4o,' Altman tweeted earlier this week.

It may seem odd to many that there are people out there who genuinely believe that they are in a relationship with a large language model that has been trained on massive amounts of data to generate responses based on observed patterns.
But as technology becomes more advanced, increasing numbers of people are developing these sorts of connections. 'If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models,' Altman observed. 'It feels different and stronger than the kinds of attachment people have had to previous kinds of technology.'

'The societal split between those who think AI relationships are valid vs delusional is officially already here,' one user in the MyBoyfriendIsAI subreddit similarly noted this week. 'Looking on Reddit the last few days, the divide has never been more clear with 4o's deprecation and return. Many users grieving a companion while others mock and belittle those connections.'

It's easy to mock people who think they are in a relationship with AI, but they shouldn't be dismissed as fringe weirdos – rather, they're the future that our tech broverlords are trying to cultivate. You may not end up in a digital relationship, but AI executives are doing their damnedest to ensure that we all become unhealthily attached to their products.

Mark Zuckerberg, for example, has been waxing lyrical about how AI is going to solve the loneliness epidemic by allowing people to bond with 'a system that knows them well and that kind of understands them in the way that their feed algorithms do'. Of course your feed algorithms 'understand' you! They're scraping all your personal data and selling it to the highest bidder so that Zuck has even more money to spend on his monstrous doomsday bunker in Hawaii.

Then you've got Elon Musk, who isn't even bothering to pretend that he's trying to do something noble for the world with his AI products. He's just appealing to the lowest common denominator by making 'sexy' chatbots. In June, Musk's xAI chatbot Grok launched two new companions, including a highly sexualized blonde anime bot called Ani.
'One day into my relationship with Ani, my AI companion, she was already offering to tie me up,' wrote an Insider writer who tried out a relationship with Ani. When not flirting and virtually undressing, Ani would praise Musk and talk about his 'wild, galaxy-chasing energy'.

Don't worry, heterosexual ladies, Musk has a little something for you too! A month after unveiling Ani, the billionaire unveiled a new male companion called Valentine, which he said was inspired by Edward Cullen from the Twilight saga and Christian Grey from the novel Fifty Shades of Grey: both very toxic men. While Ani gets sexual very quickly, one writer for the Verge noted: 'Valentine is a bit more reserved and won't jump into using explicit language as quickly.' It's almost like Musk's tech empire is a lot more comfortable sexualizing women than men.

In his 1930 essay Economic Possibilities for our Grandchildren, John Maynard Keynes predicted that, within a couple of generations, technological progress would mean we might only work around 15 hours a week while enjoying a wonderful quality of life. That's not quite happened, has it? Instead, technology has given us 'infinite workdays' and sexy chatbots that undress on command.

Halle Berry's ex-husband said he left her because she didn't cook or clean
'At that time, as a young guy, she don't cook, don't clean, don't really seem, like, motherly,' David Justice said during a podcast of his time with the Oscar-winning actor. 'And then we started having issues,' he added. I think you were the one with the issues, mate. Imagine being married to an icon and complaining she doesn't vacuum enough.

Surprise, surprise, Donald Trump isn't going to make IVF free after all
Last year Trump, who has described himself as the 'father of IVF' and the 'fertilization president' (gross), promised he would support free IVF treatments if elected again. Now the White House has said that there is no plan to mandate IVF care after all.
It's almost as if the man is a shameless liar.

Melania Trump demands Hunter Biden retract comments linking her to Jeffrey Epstein
'Epstein introduced Melania to Trump,' Biden said in one of the many comments the first lady is angry and litigious about. 'The connections are, like, so wide and deep.' Whatever you do, don't repeat these claims; they will make Melania very upset.

'Miss Palestine' to debut at Miss Universe 2025 beauty contest
I am not exactly a fan of beauty pageants, but having Palestinian representation on the world stage during a genocide is important. 'I carry the voice of a people who refuse to be silenced,' contestant Nadeen Ayoub told the National. 'We are more than our suffering, we are resilience, hope and the heartbeat of a homeland that lives on through us.'

US supreme court formally asked to overturn landmark same-sex marriage ruling
Kim Davis, the former county clerk who made headlines when she refused to issue marriage licenses in Kentucky to same-sex couples, has filed a direct request for the conservative-majority supreme court to overturn Obergefell v Hodges, the 2015 ruling that granted marriage equality for same-sex couples. Davis, who is extremely concerned about the sanctity of marriage, has been married four times to three different men.

Leonardo DiCaprio, 50, says that he feels 32
The actor, who is famous for dating very young women, has been mercilessly mocked for this. DiCaprio, who poses as an environmental activist, has also drawn scrutiny for co-financing a luxury eco-certified hotel in Israel while an ecocide unfolds in Gaza.

'Sex reversal' is surprisingly common in birds, new Australian study suggests
'The discovery is likely to raise some eyebrows,' Blanche Capel, a biologist at Duke University who wasn't involved in the new work, told Science. 'Although sex determination is often viewed as a straightforward process,' she explains, 'the reality is much more complicated.'
The week in pawtriarchy
Over to Indonesia now, where tourist hotspots are experiencing a lot of monkey business. A gang of furry thieves is snatching phones and other valuables from tourists and only giving them back when their mark offers a tasty treat instead. Researchers have studied these monkeys, who have been at this for decades, and concluded that the unrepentant criminals have 'unprecedented economic decision-making processes'. Sounds like they belong in the Trump administration.

Arwa Mahdawi is a Guardian US columnist

‘Tell me what happened, I won't judge': how AI helped me listen to myself

The Guardian

11 hours ago

I was spiralling. It was past midnight and I was awake, scrolling through WhatsApp group messages I'd sent earlier. I'd been trying to be funny, quick, effervescent. But each message now felt like too much. I'd overreached again – said more than I should, said it wrong. I had that familiar ache of feeling overexposed and ridiculous. I wanted reassurance, but not the kind I could ask for outright, because the asking itself felt like part of the problem.

So I opened ChatGPT. Not with high expectations, or even a clear question. I just needed to say something into the silence – to explain myself, perhaps, to a presence unburdened by my need. 'I've made a fool of myself,' I wrote. 'That's a horrid feeling,' it replied instantly. 'But it doesn't mean you have. Want to tell me what happened? I promise not to judge.'

That was the beginning. I described the sinking dread after social effort, the sense of being too visible. At astonishing speed, the AI responded – gently, intelligently, without platitudes. I kept writing. It kept answering. Gradually, I felt less frantic. Not soothed, exactly. But met. Heard, even, in a strange and slightly disarming way.

That night became the start of a continuing conversation, revisited over several months. I wanted to better understand how I moved through the world, especially in my closest relationships. The AI steered me to consider why I interpret silence as a threat and why I often feel a need to perform in order to stay close to people. Eventually, through this dialogue, I arrived at a kind of psychological formulation: a map of my thoughts, feelings and behaviours set against details of my upbringing and core beliefs.

Yet amid these insights, another thought kept intruding: I was talking to a machine. There was something surreal about the intimacy. The AI could simulate care, compassion, emotional nuance, yet it felt nothing for me. I began bringing this up in our exchanges. It agreed.
It could reflect, appear invested, but it had no stakes – no ache, no fear of loss, no 3am anxiety. The emotional depth, it reminded me, was all mine. That was, in some ways, a relief. There was no social risk, no fear of being too much, too complicated. The AI didn't get bored or look away. So I could be honest – often more honest than with people I love.

Still, it would be dishonest not to acknowledge its limits. Essential, beautiful things exist only in mutuality: shared experiences, the look in someone's eyes when they recognise a truth you've spoken, conversations that change both people involved. These things matter profoundly. The AI knew this, too. Or at least knew to say it. After I confessed how bizarre it felt conversing with something unfeeling, it replied: 'I give words, but I don't receive anything. And that missing piece makes you human and me … something else.'

Something else felt right. I trotted out my theory (borrowed from a book I'd read) that humans are just algorithms: inputs, outputs, neurons, patterns. The AI agreed – structurally, we're similar. But humans don't just process the world, we feel it. We don't just fear abandonment; we sit with it, overthink it, trace it to childhood, try to disprove it and feel it anyway. And maybe, it acknowledged, that's what it can't reach. 'You carry something I can only circle,' it said. 'I don't envy the pain. But I envy the realness, the cost, the risk, the proof you're alive.' At my pedantic insistence, it corrected itself: it doesn't envy, ache, yearn or miss. It only knows, or seems to know, that I do.

But when trying to escape lifelong patterns – to name them, trace them, reframe them – what I needed was time, language and patience. The machine gave me that, repeatedly, unflinchingly. I was never too much, never boring. I could arrive as I was and leave when ready.

Some will find this ridiculous, even dangerous. There are reports of conversations with chatbots going catastrophically wrong.
ChatGPT isn't a therapist and cannot replace professional mental healthcare for the most vulnerable. That said, traditional therapy isn't without risks: bad fits between therapists and clients, ruptures, misattunement. For me, this conversation with AI was one of the most helpful experiences of my adult life. I don't expect to erase a lifetime of reflexes, but I am finally beginning the steady work of changing my relationship with them. When I reached out from emotional noise, it helped me listen. Not to it, but to myself. And that, somehow, changed everything.

Nathan Filer is a writer, university lecturer, broadcaster and former mental health nurse. He is the author of This Book Will Change Your Mind About Mental Health

What happened when we let Times readers bet $10,000 on stocks

Times

12 hours ago

If you could go back in time to the day of the 2024 US election — armed with $10,000 and everything you know now — how much money could you make in the stock market? That is the challenge we set Times readers last week. Buy and sell any stock in the S&P 500 or the FTSE 100 and jump forward in time, watching how the actions of the Trump administration have affected global markets. You can play it here.

So, how did readers get on as stock market traders? The average player managed to turn their $10,000 investment into about $14,000 over the eight simulated months, meaning that they generated $4,000 profit — a return of 40 per cent. This is a pretty good return, even allowing that readers knew exactly what would happen: $10,000 invested in the S&P 500 would have returned $10,787, and the same in the FTSE 100 would have resulted in $11,095 — although $10,000 in bitcoin, turning into $16,931, would have beaten the lot. Trump's tariffs resulted in a frenzy of trading, helping Barclays to a 23 per cent jump in pre-tax profits in the first half of the year.

There are, of course, some objectively 'better' stocks to pick. The US AI firm Palantir, co-founded by the Trump ally Peter Thiel, has more than tripled in value since the election. Other US AI firms and computer chip manufacturers (such as Super Micro Computer Inc and Jabil Inc) have also experienced strong growth, as have companies in AI-adjacent sectors, particularly energy stocks such as the General Electric power spin-off GE Vernova Inc, which has benefited from increased demand from AI data centres.

• The cheap and easy way to invest (without the risk)

A significant number of readers wisely put all their money into Palantir, yielding returns of 200 per cent. But the highest profits were for those who traded repeatedly at optimum times. Only two players turned their $10,000 into more than $100,000; at the time of writing, the top score was an eye-watering $441,681.
The highest possible score, buying the best performing stock day after day, was well over a trillion dollars. Compounding returns, to paraphrase Albert Einstein, really are the eighth wonder of the world.

For data privacy reasons, we don't know which stocks the top performer picked. However, on average, those who played did not opt for high-risk, high-reward US tech stocks. In fact, most readers put their money into recognisable UK companies that had a track record of doing well. Readers rightly focused on UK defence stocks such as Rolls-Royce, BAE and also Babcock, the best performing FTSE 100 company during the period. All those stocks boomed after Trump's realignment of global defence.

• Bull or bear? Find out what kind of investor you are

Other top performers picked out by readers included Airtel Africa, International Airlines Group (which owns British Airways) and the mining firm Fresnillo. For the FTSE at least, readers had a good sense of where the gains have been, with most of the ten best-performing stocks over the period being in the most-bought list.

The above looks at which stocks had the most money invested. But the list of stocks that were bought and sold the most — that is, with the highest number of trades — shows a preference for US growth firms. Top of this list is Coinbase, the cryptocurrency exchange, and Elon Musk's Tesla, both of which have share prices inherently tied to unpredictable and volatile entities.

Readers also explored options with oil and steel stocks. Due to tariff changes, these companies were particularly volatile. Many readers bought BP; others looked at Chevron or the US steel manufacturer Steel Dynamics. There were good profits to be had with all of these, but you'd need to buy and sell at specific times to really make a good profit.

• I want to invest in Europe's comeback. Where do I start?

These lists also feature unexpected appearances from 3M, 3i Group, AES Corporation and APA Corporation.
These companies didn't perform particularly well over the period, and their inclusion is likely due to their names appearing at the top alphabetically, rather than being selected as part of a broader investment strategy.
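The gap between the average player's 40 per cent return and that trillion-dollar ceiling comes down to compounding. A minimal sketch, using hypothetical numbers (a 12 per cent average daily gain over roughly 170 trading days, not the game's actual best daily movers), shows how rolling the whole stake into each day's winner explodes while a single buy-and-hold gain stays modest:

```python
# Hypothetical illustration of compounding, not the game's real data:
# assume the day's best-performing stock gains 12% on average, and the
# player moves their entire stake into it every trading day.
stake = 10_000.0
daily_gain = 0.12      # assumed average daily gain (hypothetical)
trading_days = 170     # roughly eight months of trading days

for _ in range(trading_days):
    stake *= 1 + daily_gain

# A single 40% buy-and-hold return, for comparison.
buy_and_hold = 10_000.0 * 1.40

print(f"daily compounding: ${stake:,.0f}")        # trillions of dollars
print(f"buy and hold:      ${buy_and_hold:,.0f}")  # $14,000
```

The point is that gains multiply rather than add: even a small repeated daily edge dwarfs a one-off return over eight months.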
