What is it really like to go off the rails with ChatGPT?

The Verge | Richard Lawler | Posted Aug 9, 2025 at 3:09 PM UTC

From the series: From ChatGPT to Gemini: how AI is rewriting the internet

Topics: AI, News, OpenAI


Related Articles

Readers told us how they use AI in their job search — or why they avoid it

Business Insider | 25 minutes ago

Business Insider recently asked readers about their experiences with the job search process. You told us what you find off-putting in a job posting, and what sticks out to you as red flags in a job interview. We also wanted to know: How do you use AI in your job search, if at all?

The readers who answered our poll and reported using AI in the process said they do so primarily for résumés and cover letters. Many readers said they'd ask AI to check their résumés against a job post and tell them what changes they should make to best align with the description, such as organizing bullets in the order in which items appear in the job post or emphasizing the keywords from the listing.

Grant Maxfield said he asks AI to review a job listing and summarize the most important qualifications and duties that he should focus on in his application materials. He'll also feed AI a job description alongside his prior résumés and cover letters, and ask it to draft new ones tailored to the specific position.

Many respondents stressed that they double-check what AI gives them, and some mentioned experiences where AI hallucinated a fictitious job experience on their résumé or mixed up their responsibilities between different roles. "I never used the wording ChatGPT spit out, but I definitely used it to figure out where I was lacking in useful information and where I had too much," said Sarah Oglesby.

Nicole McCormick said she uses AI to write thank-you emails that draw on the company's culture, values, and mission statement as taken from its website. Clare Apps also uses AI to personalize follow-up messages and to do mock interviews to prepare. "The AI conundrum is painful on both sides," she said. "A part of me is annoyed that recruiters are annoyed by job seekers using AI to assist with cover letters, résumés, and profiles. It's a cake-and-eat-it scenario."

Not everyone finds it useful for résumés and cover letters, though. One reader said she only asks it to suggest possible jobs she should apply for because "it hasn't been very useful yet; I write a better résumé and cover letter than AI does at this point."

And other readers avoid using AI in their job search. Kelley Murray has heard it can disqualify you if you're caught having used AI on an application. Liz Stout said she's never tried it before but that a coworker mentioned using ChatGPT to write a cover letter, and "that sounded like cheating." "I don't use AI," said Brian Bissonnette. "I put the effort in myself."

Toxic relationship with AI chatbot? ChatGPT now has a fix.

Yahoo | 32 minutes ago

ChatGPT is getting a health upgrade, this time for users themselves. In a new blog post ahead of the company's reported GPT-5 announcement, OpenAI said it would refresh its generative AI chatbot with new features designed to foster healthier, more stable relationships between user and bot.

Users who have spent prolonged periods of time in a single conversation, for example, will now be prompted with a gentle nudge to log off. The company is also doubling down on fixes to the bot's sycophancy problem, and building out its models to recognize mental and emotional distress.

ChatGPT will respond differently to more "high stakes" personal questions, the company explains, guiding users through careful decision-making, weighing pros and cons, and responding to feedback rather than providing answers to potentially life-changing queries. This mirrors OpenAI's recently announced Study Mode for ChatGPT, which scraps the AI assistant's direct, lengthy responses in favor of guided Socratic lessons intended to encourage greater critical thinking.

"We don't always get it right. Earlier this year, an update made the model too agreeable, sometimes saying what sounded nice instead of what was actually helpful. We rolled it back, changed how we use feedback, and are improving how we measure real-world usefulness over the long term, not just whether you liked the answer in the moment," OpenAI wrote in the announcement. "We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress."

Broadly, OpenAI has been updating its models in response to claims that its generative AI products, specifically ChatGPT, are exacerbating unhealthy social relationships and worsening mental illnesses, especially among teenagers. Earlier this year, reports surfaced that many users were forming delusional relationships with the AI assistant, worsening existing psychiatric disorders, including paranoia and derealization.

Lawmakers, in response, have shifted their focus toward more intensely regulating chatbots, as well as their advertisement as emotional partners or replacements for therapy. OpenAI has acknowledged this criticism, conceding that its previous 4o model "fell short" in addressing concerning behavior from users. The company hopes these new features and system prompts will succeed where its previous versions failed.

"Our goal isn't to hold your attention, but to help you use it well," the company writes. "We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal 'yes' is our work."
