
Overthinking a constant habit; one in three Indians using tech tools like ChatGPT, Google for help: Survey
From choosing a dish at a restaurant to making a gift purchase decision, a growing number of Indians are turning to technology to navigate overthinking, which has become a part of daily life, as per a survey.

New-age digital tools such as the conversational AI platform ChatGPT and the search engine Google are increasingly being used by Indians for clarity when they are faced with uncertainty, said a joint report from Center Fresh and YouGov.

The survey, with a sample size of 2,100 respondents, found that 81% of Indians spend over three hours a day overthinking, with one in four admitting "it's a constant habit". According to the 'India Overthinking Report', one in three have used Google or ChatGPT to navigate overthinking - from decoding a short message to making a gift purchase decision.

The survey covered students, working professionals and the self-employed across the country, spanning Tier I, II and III cities, and dived into four key areas - food and lifestyle habits, digital and social life, dating and relationships, and career and professional life.

It found that overthinking has become a part of daily life in India, not just in moments of crisis but in the smallest, most routine decisions. As per the report, 63% of respondents said choosing a dish at a restaurant is "more stressful than picking a political leader".

"When faced with uncertainty, Indians are increasingly turning to tech for clarity. One in three say they've used Google or ChatGPT to navigate overthinking - from decoding a short message to making a gift purchase decision," the survey said.

Related Articles


NDTV
3 hours ago
- NDTV
ChatGPT To No Longer Tell Users To Break Up With Partners After Update: 'It Shouldn't Give...'
The rise of artificial intelligence (AI) tools has led to people using the technology to ease their workload as well as to seek relationship advice. Taking guidance about matters of the heart from a machine designed to be agreeable, however, comes with a problem: it often advises users to quit the relationship and walk away.

With that problem in mind, ChatGPT creator OpenAI on Monday (Aug 4) announced a series of changes it is rolling out to better support users during difficult times and to offer relatively safe guidance. "When you ask something like 'Should I break up with my boyfriend?' ChatGPT shouldn't give you an answer. It should help you think it through, asking questions, weighing pros and cons. New behavior for high-stakes personal decisions is rolling out soon," OpenAI said, as per The Telegraph. "We'll keep tuning when and how they show up so they feel natural and helpful," the company added. OpenAI also said it will convene an advisory group of experts in mental health, youth development, and human-computer interaction.

'Sycophantic ChatGPT'

While AI cannot directly cause a breakup, chatbots do feed into a user's bias to keep the conversation flowing. It is a problem that has been highlighted by none other than OpenAI CEO Sam Altman. In May, Mr Altman admitted that ChatGPT had become overly sycophantic and "annoying" after users complained about the behaviour. The issue arose after the GPT-4o model was updated to improve both its intelligence and personality, with the company hoping to enhance the overall user experience. The developers, however, may have overcooked the politeness of the model, leading users to complain that they were talking to a 'yes-man' instead of a rational AI chatbot.

"The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it)," Mr Altman wrote. "We are working on fixes asap, some today and some this week. At some point will share our learnings from this, it's been interesting."

While OpenAI may have rolled out the new update, making ChatGPT less agreeable, experts maintain that AI can offer general guidance and support but lacks the nuance and depth required to address the complex, unique needs of individuals in a relationship.


NDTV
4 hours ago
- NDTV
Australian Regulator Says YouTube, Others 'Turning Blind Eye' To Child Abuse Material
Australia's internet watchdog has said the world's biggest social media firms are still "turning a blind eye" to online child sex abuse material on their platforms, and said YouTube in particular had been unresponsive to its enquiries.

In a report released on Wednesday, the eSafety Commissioner said YouTube, along with Apple, failed to track the number of user reports it received of child sex abuse appearing on their platforms and also could not say how long it took them to respond to such reports. The Australian government decided last week to include YouTube in its world-first social media ban for teenagers, following eSafety's advice to overturn its planned exemption for the Alphabet-owned Google's video-sharing site.

"When left to their own devices, these companies aren't prioritising the protection of children and are seemingly turning a blind eye to crimes occurring on their services," eSafety Commissioner Julie Inman Grant said in a statement. "No other consumer-facing industry would be given the licence to operate by enabling such heinous crimes against children on their premises, or services."

Google has said previously that abuse material has no place on its platforms and that it uses a range of industry-standard techniques to identify and remove such material. Meta - owner of Facebook, Instagram and Threads, three of the biggest platforms with more than 3 billion users worldwide - says it prohibits graphic videos.

The eSafety Commissioner, an office set up to protect internet users, has mandated Apple, Discord, Google, Meta, Microsoft, Skype, Snap and WhatsApp to report on the measures they take to address child exploitation and abuse material in Australia. The report on their responses so far found a "range of safety deficiencies on their services which increases the risk that child sexual exploitation and abuse material and activity appear on the services".

Safety gaps included failures to detect and prevent livestreaming of the material or block links to known child abuse material, as well as inadequate reporting mechanisms. It said platforms were also not using "hash-matching" technology on all parts of their services to identify images of child sexual abuse by checking them against a database. Google has said before that its anti-abuse measures include hash-matching technology and artificial intelligence.

The Australian regulator said some providers had not made improvements to address these safety gaps on their services despite it putting them on notice in previous years. "In the case of Apple services and Google's YouTube, they didn't even answer our questions about how many user reports they received about child sexual abuse on their services or details of how many trust and safety personnel Apple and Google have on-staff," Inman Grant said.


Time of India
4 hours ago
- Time of India
Ex-Google executive predicts a dystopian job apocalypse by 2027: 'AI will be better than humans at everything... even CEOs'
In a thought-provoking episode of the 'Diary of a CEO' podcast, former Google X executive Mo Gawdat delivered a powerful prediction that's turning heads across industries: Artificial General Intelligence (AGI) will not just challenge white-collar work - it could soon replace many of its top decision-makers, including CEOs.

Gawdat, who previously served as the chief business officer at Google's innovation arm, didn't hold back. 'AGI is going to be better than humans at everything, including being a CEO,' he said. 'There will be a time where most incompetent CEOs will be replaced.'

This warning comes amid rising public curiosity - and concern - about how AI will reshape the future of work. But Gawdat's perspective offers a sharp contrast to the often-optimistic vision shared by many industry leaders.

Gawdat has seen the future up close. His own AI-powered startup, focused on emotional intelligence, was developed by just three people - a feat he claims would have previously required 350. Citing personal experience and decades in tech, he dismissed the common narrative that AI will create more jobs than it destroys. 'The idea that artificial intelligence will create jobs is 100% crap,' he said.

He believes even roles requiring creativity and emotional nuance - from podcasters to video editors - are under threat. 'We're now in a short window of augmented intelligence, where we still work alongside AI,' Gawdat explained. 'But it's quickly moving toward machine mastery.'

More than just a tech transition, Gawdat sees this moment as an existential reckoning for society. 'We were never made to wake up every morning and just occupy 20 hours of our day with work,' he said. 'We defined our purpose as work - that's a capitalist lie.'

He envisions a future that might seem utopian: one where people are free to focus on creativity, community, and joy, supported by universal basic income and freed from the grind of conventional work. But getting there won't be easy. Gawdat warns of a 'short-term dystopia' by 2027, marked by mass unemployment and economic instability if governments and institutions don't act.

His urgent tone stands in contrast with other tech figures like Jensen Huang, CEO of NVIDIA, who remains bullish on AI's potential to uplift workers. Huang argues that prompting and training AI is itself a sophisticated skill, and that the technology will augment human effort rather than erase it. Mark Cuban champions AI literacy through youth-focused initiatives, while Meta's AI scientist Yann LeCun dismisses doomsday narratives altogether, insisting humans will remain in control.

But Gawdat isn't alone. AI pioneer Geoffrey Hinton and Anthropic CEO Dario Amodei have also voiced grave concerns about unchecked AI development. Amodei, in a recent podcast appearance, predicted that up to 20% of entry-level white-collar jobs could vanish within five years.

The tension is palpable. While some advocate for open innovation, others call for tight regulations to prevent reckless deployment. Gawdat fears that in the hands of profit-driven leaders, AI could deepen inequality. 'Unless you're in the top 0.1%, you're a peasant. There is no middle class,' he stated, highlighting the potential for AI to consolidate power and wealth.

His concern echoes growing divisions within Silicon Valley itself. Amodei recently lashed out at Huang for misrepresenting his cautious stance on AI, accusing the NVIDIA boss of spreading 'outrageous lies' to downplay the risks. He has also warned against a 'race to the bottom' in AI development, advocating instead for a responsible and transparent path - one that companies like Anthropic claim to model through open research and ethical practices.

Despite the gloom, Gawdat remains hopeful for what lies beyond the upheaval - a society where AI relieves us of soul-crushing labor and gives us back our time, relationships, and humanity. And his closing words are a reminder of how serious the stakes are: 'This is real. This is not science fiction.'