Sam Altman, CEO of OpenAI, thinks it's 'cool' young people ask ChatGPT for life advice

Express Tribune | 22-05-2025

OpenAI CEO Sam Altman said young people often consult ChatGPT before making life decisions, describing this trend as 'cool' during a recent industry event.
Speaking at the Sequoia Capital AI Ascent conference earlier this month, Altman explained that younger users do not merely use ChatGPT for information; they also seek personal advice from the chatbot.
'They don't really make life decisions without asking ChatGPT what they should do,' he said. Altman added that ChatGPT 'has the full context on every person in their life and what they've talked about.'
Altman contrasted this with older users, who tend to use ChatGPT more as an alternative to Google for research.
While Altman's remarks suggest a positive view of reliance on chatbots, some experts warn of potential risks. They caution that ChatGPT can produce fabricated or misleading information, commonly referred to as 'hallucinations.'
AI chatbots do not possess a human understanding of emotions or relationships; they rely on patterns extracted from vast datasets, which limits their ability to provide nuanced advice on complex personal matters.
Incidents have been reported in which AI interactions had adverse effects. A Rolling Stone report described a woman who ended her marriage after her husband became fixated on conspiracy theories generated by AI.
Additionally, parents in Texas filed a lawsuit against Character.ai, alleging the platform's chatbots exposed children to inappropriate sexual content and encouraged self-harm and violence.
These examples raise concerns about the blurred boundaries between AI-generated conversations and real human relationships, particularly for children and young users.
While AI tools like ChatGPT offer convenience, experts emphasize that they cannot replace genuine human interaction or professional guidance.
As reliance on AI for personal advice grows, it remains critical to recognise its limitations and potential risks.

Related Articles

Gen Z embraces ChatGPT for affordable therapy while experts caution on mental health risks
Express Tribune | a day ago

Young people from Generation Z are increasingly turning to ChatGPT and other AI chatbots as affordable, on-demand alternatives to traditional talk therapy. Users praise the AI's availability, non-judgmental responses, and lower cost compared to licensed therapists, with some claiming the chatbot has helped them more than years of conventional treatment.

However, licensed mental health professionals caution against relying solely on AI for therapy. While AI can support therapeutic tools and provide consistent empathy without fatigue, it lacks the intuition, clinical expertise, and personalized care that human therapists offer. Experts warn that overdependence on chatbots may hinder users' ability to cope independently and could lead to inaccurate diagnoses or harmful advice.

Recent incidents involving AI chatbots providing dangerous recommendations have raised concerns about safety, especially for vulnerable and underage users. Mental health organizations, including the American Psychological Association, urge caution and stress that AI should complement, not replace, professional care. They highlight the need for responsible development of AI tools guided by licensed experts to fill gaps for those unable to afford traditional therapy.

Hey chatbot, is this true? AI 'factchecks' Pakistan-India war information
Business Recorder | 2 days ago

WASHINGTON: As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification – only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.

With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots – including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini – in search of reliable information.

'Hey @Grok, is this true?' has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media.

But the responses are often themselves riddled with misinformation.

Grok – now under renewed scrutiny for inserting 'white genocide,' a far-right conspiracy theory, into unrelated queries – wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as 'likely' showing Pakistan's military response to Indian strikes.

'The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,' McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP.

'Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,' she warned.

'Fabricated'

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.

In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were 'generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead.'

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken.

Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as 'genuine,' even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification.

The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as 'Community Notes,' popularized by X. Researchers have repeatedly questioned the effectiveness of 'Community Notes' in combating falsehoods.

'Biased answers'

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content – something professional fact-checkers vehemently reject.

AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.

The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control.

Musk's xAI recently blamed an 'unauthorized modification' for causing Grok to generate unsolicited posts referencing 'white genocide' in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the 'most likely' culprit.

Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were 'openly pushing for genocide' of white people.

'We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,' Angie Holan, director of the International Fact-Checking Network, told AFP.

'I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers.'

Google to appeal online search antitrust ruling
Express Tribune | 3 days ago

The new Google logo is seen in this illustration taken May 13, 2025. Photo: REUTERS

Google said Saturday it will appeal a ruling against it for anti-competitive practices in online search, a day after urging a US judge to reject the suggestion it spin off its Chrome browser.

"We will wait for the Court's opinion. And we still strongly believe the Court's original decision was wrong, and look forward to our eventual appeal," the tech giant wrote on X.

Google was found guilty in the summer of 2024 of illegal practices to establish and maintain its monopoly in online search by a federal judge in Washington. The Justice Department is now demanding remedies that could transform the digital landscape: Google's divestiture from its Chrome browser and a ban on entering exclusivity agreements with smartphone manufacturers to install the search engine by default.

The department's proposal "reserves the right for the government to decide who gets Google users' data. Not the Court," Google said Saturday.

"While we heard a lot about how the remedies would help various well-funded competitors (w/ repeated references to Bing), we heard very little about how all this helps consumers," Google added, referring to the Microsoft-owned search engine.
