ChatGPT down today? Here's why you're facing issues and what you can do


Economic Times | 16-07-2025
ChatGPT, OpenAI's widely used AI chatbot, faced a major disruption in the United States on Tuesday afternoon, leaving thousands of users unable to access chats or load past conversations. According to DownDetector, over 3,400 users reported issues with the platform.

A large number of users reported seeing an 'unusual error' message while trying to access their chats, and many were also unable to view their chat history. DownDetector's breakdown showed 82% of the complaints were about ChatGPT not working, 12% related to the website, and 6% to the mobile app.
OpenAI acknowledged the issue and updated its service status page, confirming that the outage was affecting ChatGPT as well as other products such as Sora and Codex. 'We have identified that users are experiencing elevated levels of errors...,' the company posted on its service page.
The team said it was actively investigating the issue and that a fix would be implemented soon. OpenAI is working on restoring full functionality; users can follow OpenAI's status page or DownDetector for real-time updates.

While OpenAI works on restoring services, users who rely on AI for tasks can consider alternatives. Several other AI chatbots and assistants offer similar capabilities:
Claude (Anthropic): A conversational AI designed for longer, thoughtful responses.
Gemini (Google): Previously Bard, this tool is integrated with Google services and supports reasoning, summarisation, and coding help.
Microsoft Copilot: Available inside Microsoft products like Word, Excel, and Edge browser.
Perplexity AI: A fast, search-oriented AI assistant that provides cited answers to user queries.
YouChat (You.com): A chatbot that combines conversation with up-to-date web search results.

These platforms can serve as temporary alternatives until ChatGPT resumes full service.

Related Articles

OpenAI bans ChatGPT from answering breakup questions; Sam Altman calls the new update 'annoying'

Time of India


OpenAI is adjusting ChatGPT's approach to sensitive emotional queries, shifting from direct advice to facilitating user self-reflection. This change addresses concerns about AI's impact on mental well-being and aims to provide more thoughtful support. OpenAI is consulting experts and implementing safeguards like screen-time reminders and distress detection to ensure responsible AI interaction.

Nowadays, many of us have turned AI platforms into a quick source of guidance for everything from code to personal advice. But as artificial intelligence becomes a greater part of our emotional lives, companies are becoming aware of the risks of over-reliance on it. Can a chatbot truly understand matters of the heart? With growing concerns about how AI might affect mental well-being, OpenAI is making a thoughtful shift in how ChatGPT handles sensitive personal topics. Rather than giving direct solutions to tough emotional questions, the AI will now help users reflect on their feelings and come to their own conclusions.

OpenAI has come up with significant changes

OpenAI has announced a significant change to how ChatGPT handles relationship questions. Instead of offering direct answers like 'Yes, break up,' the AI will now help users think through their dilemmas by encouraging self-reflection and weighing pros and cons, particularly for high-stakes personal issues. The move follows several complaints about AI getting too direct in emotionally sensitive areas.

According to reports from The Guardian, OpenAI stated, 'When you ask something like: 'Should I break up with my boyfriend?' ChatGPT shouldn't give you an answer. It should help you think it through—asking questions, weighing pros and cons.' The company also said that 'new behaviour for high-stakes personal decisions is rolling out soon. We'll keep tuning when and how they show up so they feel natural and helpful,' according to OpenAI's statement via The Guardian.
To ensure this isn't just window dressing, OpenAI is gathering an advisory group of experts in human-computer interaction, youth development, and mental health. The company said in a blog post, 'We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal 'yes' is our work.'

OpenAI CEO says that the new update is....

This change follows user complaints about ChatGPT's earlier personality tweaks. According to the Guardian, CEO Sam Altman admitted that recent updates made the bot 'too sycophant-y and annoying.' He said, 'The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week.' Altman also teased future options for users to choose different personality modes.

OpenAI is also implementing mental health safeguards. Updates will include screen-time reminders during long sessions, better detection of emotional distress, and links to trusted support when needed.

Google, schmoogle: When to ditch web search for deep research

Mint


Searching for the perfect electric car could have taken hours. Instead, I opened ChatGPT, clicked the deep research button and walked away from my computer. By the time I'd made coffee, ChatGPT delivered an impressive 6,800-word report.

This year, ChatGPT and other popular AI chatbots introduced advanced research modes. When activated, the AI goes beyond basic chat, taking more time, examining more sources and composing a more thorough response. In short: it's just more. Free users can now access this feature, with limits, and recent upgrades, such as OpenAI's latest GPT-5 model, have made research even more powerful.

For the past few months, I've experimented with deep research for complicated questions involving big purchases and international trip planning. Could a robot-generated report help me make tough decisions? Or would I end up with 6,000-plus words of AI nonsense? The bots answered questions I didn't think to ask. Though they occasionally led me astray, I realized my days of long Google quests were likely over. This is what I learned about what to deep research, which bots work best and how to avoid common pitfalls.

Deep research is best for queries with multiple factors to weigh. (If you're just getting started, hop to my AI beginner's guide first, then come back.) For my EV journey, I first sought advice from my colleagues Joanna and Dan. But I needed to dig deeper for my specific criteria, including a roomy back row for a car seat, a length shorter than my current SUV and a battery range that covers a round trip to visit my parents.

I fed my many needs into several chatbots. When I hit enter, the AI showed me its 'thinking.' First, it made a plan. Then, it launched searches. Lots of searches. In deep research mode, AI repeats this cycle, search then synthesize, multiple times until satisfied. Occasionally, though, the bot can get stuck in its own rabbit hole and you need to start over. Results varied.
Perplexity delivered the quickest results, but hallucinated an all-wheel-drive model that doesn't exist. Copilot and Gemini provided helpful tables. ChatGPT took more time because it asked clarifying questions first, a clever way to narrow the scope and personalize the report. Claude analyzed the most sources: 386.

Deep research can take 30 minutes to complete, so turn on notifications and the app can let you know when your research is ready. My go-to bot is typically Claude for its strong privacy defaults, but for research, comparing results across multiple services proved most useful. Models that appeared on every list became our top contenders. Now I'm about to test drive a Kia Niro, and potentially spend tens of thousands based on a robot's recommendation. Basic chat missed the mark, proposing two models that are too big for parallel parking on city streets.

Other successful deep research queries included a family-friendly San Francisco trip itinerary, a comparison of popular 529 savings plans, a detailed summary of scientific consensus on intermittent fasting and a guide to improving my distance swimming. On ChatGPT and Claude, you can add your Google Calendar and other accounts as sources, and ask the AI to, for example, plan activities around your schedule. Deep research isn't always a final answer, but it can help you get there.

Ready for AI to do your research? Switch on the 'deep research' or 'research' toggle next to the AI chat box. ChatGPT offers five deep research queries a month to free users, while Perplexity's free plan includes five daily. Copilot, Gemini and Grok limit free access, but don't share specifics. Paid plans increase limits and offer access to more advanced models. Claude's research mode requires a subscription.

Here are tips for the best results:

Be specific. Give the AI context (your situation and your goal), requirements (must-haves) and your desired output (a report, bullets or a timeline). Chatbots can't read your mind…yet.

Enable notifications. Deep research takes time. Turn on notifications so the app can ping you when your response is ready.

Verify citations. AI can still make mistakes, so don't copy its work. Before making big decisions, click on citations to check source credibility and attribution.

Summarize the output. Reports can be long. Ask for a scannable summary or table, then dive into the full text for details.

Understand limitations. The information is only as good as its sources. These chatbots largely use publicly available web content; they can't access paywalled material, so think of deep research as a launchpad for further investigation.

Whatever the imperfections of deep research, it easily beats hours and days stuck in a Google-search black hole. I have a new research partner, and it never needs a coffee break.

News Corp, owner of Dow Jones Newswires and The Wall Street Journal, has a content-licensing partnership with OpenAI. Last year, the Journal's parent company, Dow Jones, sued Perplexity for copyright infringement.

Do You Know Any AI Vegan? What Is It? Is It Even Possible? The Concept Explained

News18


AI veganism is abstaining from using AI systems due to ethical, environmental, or wellness concerns, or avoiding harming AI systems, especially if they might one day be sentient.

Even as the world goes gaga over artificial intelligence (AI) and how it could change the way the world and jobs function, some people are refraining from using it. They are the AI vegans. Why? What are their reasons? AI veganism, explained.

What is AI veganism?

The term refers to applying the principles of veganism to AI: either abstaining from using AI systems due to ethical, environmental, or personal wellness concerns, or avoiding harming AI systems, especially if they might one day be sentient. Some view AI use as potentially exploitative, paralleling harm to animals via farming.

Is AI so bad that we need to abstain from it? Here's what studies show:

A 2024 Pew study showed that a fourth of K-12 teachers in the US thought AI was doing more harm than good.

A Harvard study from May found that generative AI, while increasing the productivity of workers, diminished their motivation and increased their levels of boredom.

A Microsoft Research study found that people who were more confident in using generative AI showed diminished critical thinking.

Time reports growing concerns over a phenomenon labeled AI psychosis, where prolonged interaction with chatbots can trigger or worsen delusions in vulnerable individuals, especially those with preexisting mental health conditions.

A study by the Center for Countering Digital Hate found that ChatGPT frequently bypasses its safeguards, offering harmful, personalized advice, such as suicide notes or instructions for substance misuse, to simulated 13-year-old users in over half of monitored interactions.

Research at MIT revealed that students using LLMs like ChatGPT to write essays demonstrated weaker brain connectivity, lower linguistic quality, and poorer retention compared to peers relying on their own thinking.
A study from Anthropic and Truthful AI found that AI models can covertly transmit harmful behaviors to other AIs using hidden signals; these actions bypass human detection and challenge conventional safety methods.

A global report chaired by Yoshua Bengio outlines key threats from general-purpose AI, including job losses, terrorism facilitation, uncontrolled systems, and deepfake misuse, and calls for urgent policy attention.

AI contributes substantially to global electricity and water use, and could add up to 5 million metric tons of e-waste by 2030, perhaps accounting for 12% of global e-waste volume.

Studies estimate AI may demand 4.1–6.6 billion cubic meters of water annually by 2027, comparable to the UK's total usage, while exposing deeper inequities in AI's extraction and pollution impacts.

A BMJ Global Health review argues that AI could inflict harm through increased manipulation and control, weaponization, and labour obsolescence, and, at the extreme, pose existential risks if self-improving AGI develops unchecked.

What is the basis of the concept?

Ethical concerns: Many AI models are trained on creative work (art, writing, music) without consent from the original creators. Critics argue this is intellectual theft or unpaid labor.

Potential future AI sentience: Some fear that sentient AI might eventually emerge, and that using it today could normalise treating it as a tool rather than a being with rights.

Environmental impact: AI systems, especially large language models, consume massive resources that contribute to carbon emissions and water scarcity.

Cognitive and psychological health: Some believe overuse of AI weakens our ability to think, write, or create independently. The concern is about mental laziness or 'outsourcing' thought.

Digital overwhelm: AI makes everything faster and more accessible, sometimes too fast, leading to burnout, distraction, or dopamine addiction.
Social and cultural disruption: AI threatens job markets, especially in creative fields, programming, and customer service.

Why remaining an AI vegan may be tough

AI is deeply embedded in many systems, from communication to healthcare, making total abstinence unrealistic for most. Current AI lacks consciousness, so overlaying moral concerns meant for animals onto machines may distract from real human and animal rights issues. There is also a risk of overreach: prioritising hypothetical sentient-AI ethics could divert attention from pressing societal challenges.

With Agency Inputs. Location: New Delhi, India. First Published: August 10, 2025, 18:08 IST
