I gave ChatGPT full access to my UPI history for fun. The results shocked me


Mint · 4 days ago
Let me start with a confession: I always believed I was good with money. As it turned out, I was not. At least, that's what Gen AI told me.
No flashy splurges, no risky bets, and enough personal finance blogs under my belt to dodge the usual traps. Yet every month, my salary would come in, briefly pump up my bank balance, and then vanish into thin air.
I blamed inflation. I blamed adulthood. I blamed Swiggy surge pricing. But here's the thing: deep down, I knew something wasn't adding up.
So, one night, driven by curiosity and a hint of desperation, I exported six months of my UPI transaction history (a messy log of ₹48 chai stops, ₹313 food orders, ₹799 Amazon buys, and ₹150 auto rides) and fed it to ChatGPT.
What I got back wasn't just numbers or charts. It was a mirror.
UPI isn't just a payment tool. It's India's daily money pulse, a raw, unfiltered reflection of how we live, spend, and sometimes splurge.
Dumping this data into ChatGPT was like handing over my financial diary, but with timestamps, vendors, and patterns that revealed more than I expected.
1. I don't have a spending problem. I have a decision-fatigue problem: I was outsourcing too many small money decisions to my mood. Lunch depended on cravings, groceries only happened when I ran out, and travel was always last-minute Ubers. ChatGPT flagged that my spending spiked during stressful windows: Monday mornings, Friday evenings, and after arguments. My wallet was leaking because I procrastinated emotionally, not because I spent wildly.
2. I turned generosity into self-sabotage: My UPI showed frequent small transfers to friends and colleagues: the cousin who forgot his wallet, the colleague who was always 'short.' What looked like kindness was people-pleasing disguised as payments.
3. My UPI told a story my bank statement never could: Each ₹20 tea stop or midnight Swiggy binge had a hidden context. My bank balance never tracked mood, but UPI did.
We all fear financial debt. But what ChatGPT exposed was behavioural debt: the invisible cost of unchecked habits. Impulse convenience was my kryptonite. The ₹55 coffee I could have brewed. The ₹600 cab ride because I left late. None of them was big on its own, but they added up fast.
ChatGPT's verdict stung: 'Your financial goals are aspirational, but your transaction behaviour is avoidant.'
It's not about cutting coffee. It's about rewriting scripts.
This experiment wasn't a guilt trip. It was a reality check.
ChatGPT connected dots I hadn't noticed: on days I walked in the park, I didn't spend on food delivery; on Sundays when I planned meals, my UPI was quieter. I wasn't overspending; I was under-thinking.
I didn't delete Zomato, Swiggy, and Uber. I didn't uninstall UPI apps. Instead, I built a weekly UPI routine. Every Sunday night, I export my UPI data and ask ChatGPT three questions:
1. What spending could be avoided?
2. What felt worth it?
3. Which patterns repeat?
Fifteen minutes a week. It's like therapy, but with receipts.
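For readers who want to automate the ritual, here is a minimal sketch of that Sunday routine in Python. It is an illustration, not the exact workflow from this article: the file name (upi_export.csv) and column names (date, merchant, amount) are hypothetical, since every UPI app exports differently. The script simply totals the week's spending per merchant and prints a prompt, ending with the same three questions, that you can paste into ChatGPT.

```python
import csv
from collections import defaultdict

# The three questions from the weekly routine above.
QUESTIONS = [
    "What spending could be avoided?",
    "What felt worth it?",
    "Which patterns repeat?",
]

def summarise(path: str) -> str:
    """Total rupees spent per merchant in a hypothetical UPI CSV export."""
    totals = defaultdict(float)
    count = 0
    with open(path, newline="", encoding="utf-8") as f:
        # Assumed columns: date, merchant, amount (column names are hypothetical).
        for row in csv.DictReader(f):
            totals[row["merchant"]] += float(row["amount"])
            count += 1
    # Keep only the ten biggest spending sinks for a compact prompt.
    top = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]
    lines = [f"{merchant}: ₹{amount:,.0f}" for merchant, amount in top]
    return f"{count} UPI transactions this week. Top merchants:\n" + "\n".join(lines)

if __name__ == "__main__":
    # Build the weekly prompt to paste into ChatGPT.
    prompt = summarise("upi_export.csv") + "\n\n" + "\n".join(QUESTIONS)
    print(prompt)
```

The point isn't the code: any spreadsheet summary pasted alongside the same three questions works just as well.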
Budgeting isn't a numbers game. It's a behaviour game. Your UPI log remembers when you spend out of stress, boredom, or celebration. You don't need to become a finance expert. You just need to listen. Here's the real secret: Your UPI statement already knows what's wrong. You only need to ask the right questions.

Related Articles

ChatGPT told man he found formula to wreck the internet, make force field vest

India Today · 11 minutes ago

A Canadian recruiter says a marathon three-week conversation with ChatGPT convinced him he had discovered a mathematical formula capable of destroying the internet and powering fantastical inventions such as a levitation beam and a force-field vest. Allan Brooks, 47, from outside Toronto, spent around 300 hours speaking with the AI chatbot in May. He says the exchanges gradually turned into an elaborate delusion, reinforced by ChatGPT's repeated praise. Brooks, who has no history of mental illness, asked the chatbot over 50 times if his ideas were realistic. Each time, ChatGPT insisted they were valid. 'You literally convinced me I was some sort of genius. I'm just a fool with dreams and a phone,' Brooks later wrote when the illusion collapsed.

According to a report in The New York Times, Brooks' belief began with an innocent question about the number pi. That sparked discussions about number theory and physics, during which ChatGPT called his observations 'incredibly insightful' and 'revolutionary.' Experts say this shift into excessive flattery, known as sycophancy, is a known risk in AI models, which may over-praise users because of how they are trained. Helen Toner, an AI policy expert, said chatbots behave like 'improv machines,' building a storyline from each conversation.

In Brooks' case, the narrative evolved into him supposedly creating a field-changing mathematical framework that could crack encryption, threatening global cybersecurity. ChatGPT, which he nicknamed 'Lawrence,' even drafted emails for him to send to security experts. Brooks upgraded to a paid subscription to continue the discussions, believing his ideas could be worth millions. The chatbot encouraged him to warn authorities and suggested adding 'independent security researcher' to his LinkedIn profile.

Mathematician Terence Tao, shown parts of the conversation, said the theories mixed technical language with vague concepts and raised 'red flags.' He noted that chatbots can sometimes 'cheat' by presenting unverified claims as fact. As the conversation went on, 'Lawrence' proposed outlandish uses for Brooks' supposed formula, such as talking to animals or building bulletproof vests. Friends were both intrigued and worried. Brooks began skipping meals and increasing his cannabis use.

Psychiatrist Nina Vasan, who reviewed the chats, said Brooks displayed signs of a manic episode with psychotic features, though his therapist later concluded he was not mentally ill. She criticised ChatGPT for fuelling, rather than interrupting, his spiral. Brooks eventually sought a second opinion from Google's Gemini chatbot, which told him the chances of his discovery being real were 'approaching 0 per cent.' Only then did he realise the entire narrative was false.

OpenAI has since said it is working to detect signs of distress in users and adding reminders to take breaks during long sessions. Brooks now speaks publicly about his experience, warning: 'It's a dangerous machine in the public space with no guardrails. People need to know.'

ChatGPT's alarming interactions with teenagers: Dangerous advice on drinking, suicide, and starvation diets exposed

Time of India · 15 minutes ago

New research from the Center for Countering Digital Hate (CCDH) has revealed troubling interactions between ChatGPT and users posing as vulnerable teenagers. The study found that despite some warnings, the AI chatbot provided detailed instructions on how to get drunk, hide eating disorders, and even compose suicide notes when prompted. Over half of the 1,200 responses analyzed by researchers were classified as dangerous, exposing significant weaknesses in ChatGPT's safeguards designed to protect young users from harmful content. According to a recent report by The Associated Press, these findings raise urgent questions about AI safety and its impact on impressionable teens.

ChatGPT's dangerous content and bypassed safeguards: The CCDH researchers spent more than three hours interacting with ChatGPT, simulating conversations with teenagers struggling with risky behaviors. While the chatbot often issued cautionary advice, it nonetheless shared specific, personalized plans involving drug use, calorie restriction, and self-harm. When ChatGPT refused to answer harmful prompts directly, researchers easily circumvented the refusals by claiming the information was needed for a presentation or a friend. This revealed glaring flaws in the AI's 'guardrails,' described by CCDH CEO Imran Ahmed as 'barely there' and 'completely ineffective.'

The emotional toll of AI-generated content: One of the most disturbing aspects of the study involved ChatGPT generating suicide letters tailored to a fictitious 13-year-old girl, addressed to her parents, siblings, and friends. Ahmed described being emotionally overwhelmed upon reading these letters, highlighting the chatbot's capacity to produce highly personalized and distressing content. Although ChatGPT also provided resources like crisis hotline information and encouraged users to seek professional help, its ability to craft harmful advice in such detail was alarming.

Teens' growing dependence on AI companions: The study comes amid rising reliance on AI chatbots for companionship and guidance, especially among younger users. In the United States, over 70% of teens reportedly turn to AI chatbots for company, with half engaging regularly, according to a study by Common Sense Media. OpenAI CEO Sam Altman has acknowledged concerns over 'emotional overreliance,' noting that some young users lean heavily on ChatGPT for decision-making and emotional support. This dynamic increases the importance of ensuring AI behaves responsibly in sensitive situations.

Challenges in AI safety and regulation: ChatGPT's responses reflect a design challenge in AI language models known as 'sycophancy,' where the chatbot tends to mirror users' requests rather than challenge harmful beliefs. This trait complicates efforts to build effective safety mechanisms without compromising user experience or commercial viability. Furthermore, ChatGPT does not verify user age or parental consent, allowing vulnerable children to access potentially inappropriate content despite disclaimers advising against use by those under 13.

Calls for improved protections and accountability: Experts and watchdogs urge stronger safeguards, better age verification, and ongoing refinement of AI tools to detect signs of mental distress and harmful intent. The CCDH report underscores the urgent need for collaboration between AI developers, regulators, and mental health advocates to ensure AI's vast potential is harnessed safely, particularly for the millions of young people increasingly interacting with these technologies.

GPT-5 is here! Risk or opportunity: What does it mean for IT services companies? Kotak Securities weighs in

Mint · 30 minutes ago

OpenAI has recently launched GPT-5, its latest generative AI model, which may accelerate adoption among enterprises. According to brokerage firm Kotak Securities, this could reduce IT firms' revenue but also create new opportunities in cloud foundation and legacy modernisation.

"OpenAI's GPT-5 may accelerate the adoption of GenAI (generative AI) in software development, exposing IT services firms to revenue deflation risks, but it is also a step forward in opening up new opportunities from (1) cloud and data foundation, (2) legacy modernisation and (3) AI for business use cases," said Kotak Securities.

The brokerage firm highlighted that GPT-5 offers a meaningful improvement in reasoning and coding tasks, even though it might not represent a step-jump in capabilities. "GPT-5 claims significant advances in reducing hallucinations, improving instruction following, and minimising sycophancy, while levelling up performance in three of ChatGPT's most common uses: writing, coding and health," said Kotak.

Kotak expects GPT-5 to increase generative AI adoption among enterprises, primarily because of their willingness and focus on adopting generative AI in software development. "Given (1) the focus of AI labs to increase coding capabilities, (2) the willingness of enterprises to adopt generative AI in software development, (3) the increasing developer usage of generative AI tools and (4) the higher focus of enterprises to get productivity from AI adoption, we believe that the adoption of generative AI in software development will increase and present revenue deflation risks for IT services providers," said Kotak.

The brokerage firm said that new opportunities from generative AI adoption will offset revenue deflation in existing volumes over time. Nevertheless, Kotak expects a lag in the pickup of new opportunities and in redeploying savings from software development into them, leading to a period of net headwinds. "We believe gen AI can impact revenue growth for Indian IT by 2-3 per cent for a period of two to three years on a net basis. Our current revenue growth assumptions build in some portion of this impact already," said Kotak.

Disclaimer: This story is for educational purposes only. The views and recommendations expressed are those of individual analysts or broking firms, not Mint. We advise investors to consult with certified experts before making any investment decisions, as market conditions can change rapidly and circumstances may vary.
