Is ChatGPT down? Why is ChatGPT not working? When will it be fixed? Partial outages hit OpenAI
ChatGPT, the chatbot owned by OpenAI, is experiencing issues Tuesday morning, causing partial outages across the platform.
Here's what you need to know:
According to Downdetector, ChatGPT first started seeing issues around 2:48 a.m. ET Tuesday, June 10.
Downdetector showed a peak of 1,127 reports at 5:37 a.m. ET; the number of user reports had fallen to 705 by 6:44 a.m. ET.
Of those reports, 93% cited issues with ChatGPT itself and 7% cited issues with the app.
OpenAI's status page reported the following issues:

Related Articles
Yahoo
OpenAI to continue working with Scale AI after Meta deal
PARIS (Reuters) - OpenAI plans to continue working with Scale AI after rival Meta on Friday agreed to take a 49% stake in the artificial intelligence startup for $14.8 billion, OpenAI's CFO Sarah Friar told the VivaTech conference in Paris. Scale AI provides vast amounts of labelled or curated training data, which is crucial for developing sophisticated tools such as OpenAI's ChatGPT. "We don't want to ice the ecosystem because acquisitions are going to happen," she said. "And if we ice each other out, I think we're actually going to slow the pace of innovation."


Chicago Tribune
Massive Google Cloud outage disrupts popular internet services
NEW YORK — Popular online services across the globe were disrupted Thursday due to ongoing issues at Google Cloud. Tens of thousands of users of Spotify, Discord and other platforms began noticing issues with their services early in the afternoon, according to Downdetector, which tracks outages. Outage reports for the music streamer Spotify, in particular, peaked around 3 p.m. Eastern time before dropping off, and some users began saying their access was restored.
Google Cloud's status page said an incident with its systems affected clients in the U.S. and abroad. The company also posted that services were starting to recover after its engineers identified and began to mitigate the issue. "We have identified the root cause and applied appropriate mitigations," Google Cloud said. It added that there was no estimate for when the issue would be fully resolved.
Google Cloud, which hosts a significant share of services on the internet, has become the fastest-growing part of Alphabet Inc., even though the company still makes most of its money from Google's ubiquitous search engine. Google Cloud's revenue last year totaled $43.2 billion, a 31% increase from 2023. By comparison, Alphabet's overall revenue grew by 14% last year.
Yahoo
The Newspaper That Hired ChatGPT
For more than 20 years, print media has been a bit of a punching bag for digital-technology companies. Craigslist killed the paid classifieds, free websites led people to think newspapers and magazines were committing robbery when they charged for subscriptions, and the smartphone and social media turned reading full-length articles into a chore. Now generative AI is in the mix—and many publishers, desperate to avoid being left behind once more, are rushing to harness the technology themselves. Several major publications, including The Atlantic, have entered into corporate partnerships with OpenAI and other AI firms. Any number of experiments have ensued—publishers have used the software to help translate work into different languages, draft headlines, and write summaries or even articles.
But perhaps no publication has gone further than the Italian newspaper Il Foglio. For one month, beginning in late March, Il Foglio printed a daily insert consisting of four pages of AI-written articles and headlines. Each day, Il Foglio's top editor, Claudio Cerasa, asked ChatGPT Pro to write articles on various topics—Italian politics, J. D. Vance, AI itself. Two humans reviewed the outputs for mistakes, sometimes deciding to leave in minor errors as evidence of AI's fallibility and, at other times, asking ChatGPT to rewrite an article. The insert, titled Il Foglio AI, was almost immediately covered by newspapers around the world. "It's impossible to hide AI," Cerasa told me recently. "And you have to understand that it's like the wind; you have to manage it."
Now the paper—which circulates about 29,000 copies each day, in addition to serving its online readership—plans to embrace AI-written content permanently, issuing a weekly AI section and, on occasion, using ChatGPT to write articles for the standard paper. (These articles will always be labeled.) Cerasa has already used the technology to generate fictional debates, such as an imagined conversation between a conservative and a progressive cardinal on selecting a new pope; a review of the columnist Beppe Severgnini's latest book, accompanied by Severgnini's AI-written retort; the chatbot's advice on what to do if you suspect you're falling in love with a chatbot ("Do not fall in love with me"); and an interview with Cerasa himself, conducted by ChatGPT.
Il Foglio's AI work is full-fledged and transparently so: natural and artificial articles, clearly divided. Meanwhile, other publications provide limited, or sometimes no, insight into their usage of the technology, and some have even mixed AI and human writing without disclosure. As if to demonstrate how easily the commingling of AI and journalism can go sideways, just days after Cerasa and I first spoke, at least two major regional American papers published a spread of more than 50 pages titled "Heat Index," which was riddled with errors and fabrications; a freelancer who'd contributed to the project admitted to using ChatGPT to generate at least some portions of the text, resulting in made-up book titles and expert sources who didn't actually exist. It was an embarrassing example of what can happen when the technology is used to cut corners.
[Read: At least two newspapers syndicated AI garbage]
With so many obvious pitfalls to using AI, I wanted to speak with Cerasa to understand more about his experiment.
Over Zoom, he painted an unsettling, if optimistic, portrait of his experience with AI in journalism. Sure, the technology is flawed. It's prone to fabrications; his staff has caught plenty of them, and has been taken to task for publishing some of those errors. But when used correctly, it writes well—at times more naturally, Cerasa told me, than even his human staff. Still, there are limits. "Anyone who tries to use artificial intelligence to replace human intelligence ends up failing," he told me when I asked about the "Heat Index" disaster. "AI is meant to integrate, not replace." The technology can benefit journalism, he said, "only if it's treated like a new colleague—one that needs to be looked after."
The problem, perhaps, stems from using AI to substitute rather than augment. In journalism, "anyone who thinks AI is a way to save money is getting it wrong," Cerasa said. But economic anxiety has become the norm for the field. A new robot colleague could mean one, or three, or 10 fewer human ones. What, if anything, can the rest of the media learn from Il Foglio's approach?
Our conversation has been edited for length and clarity.
Matteo Wong: In your first experiment with AI, you hid AI-written articles in your paper for a month and asked readers if they could detect them. How did that go? What did you learn?
Claudio Cerasa: A year ago, for one month, every day we put in our newspaper an article written with AI, and we asked our readers to guess which article was AI-generated, offering the prize of a one-year subscription and a bottle of champagne. The experiment helped us create better prompts for the AI to write an article, and helped us humans write better articles as well. Sometimes an article written by people was seen as an article written by AI: for instance, when an article is written with numbered points—first, second, third. So we changed something in how we write too.
Wong: Did anybody win?
Cerasa: Yes, we offered a lot of subscriptions and champagne. More than that, we realized we needed to speak about AI not just in our newspaper, but all over the world. We created this thing that is important not only because it is journalism with AI, but because it combines the oldest way to do information, the newspaper, and the newest, artificial intelligence.
Wong: How did your experience of using ChatGPT change when you moved from that original experiment to a daily insert entirely written with AI?
Cerasa: The biggest thing that has changed is our prompt. At the beginning, my prompt was very long, because I had to explain a lot of things: You have to write an article with this style, with this number of words, with these ideas. Now, after a lot of use of ChatGPT, it knows better what I want to do. When you start to use, in a transparent way, artificial intelligence, you have a personal assistant: a new person that works in the newspaper. It's like having another brain. It's a new way to do journalism.
Wong: What are the tasks and topics you've found that ChatGPT is good at and for which you'd want to use it? And conversely, where are the areas where it falls short?
Cerasa: In general, it is good at three things: research, summarizing long documents, and, in some cases, writing. I'm sure in the future, and maybe in the present, many editors will try to think of ways AI can erase journalists. That could be possible, because if you are not a journalist with enough creativity, enough reporting, enough ideas, maybe you are worse than a machine. But in that case, the problem is not the machine. The technology can also recall and synthesize far more information than a human can. The first article we put in the normal newspaper written with AI was about the discovery of a key ingredient for life on a distant planet. We asked the AI to write a piece on great authors of the past and how they imagined the day scientists would make such a discovery. A normal person would not be able to remember all these things.
Wong: And what can't the AI do?
Cerasa: AI cannot find the news; it cannot develop sources or interview the prime minister. AI also doesn't have interesting ideas about the world—that's where natural intelligence comes in. AI is not able to draw connections in the same way as intelligent human journalists. I don't think an AI would be able to come up with and fully produce a newspaper generated by AI.
Wong: You mentioned before that there may be some articles or tasks at a newspaper that AI can already write or perform better than humans, but if so, the problem is an insufficiently skilled person. Don't you think young journalists have to build up those skills over time? I started at The Atlantic as an assistant editor, not a writer, and my primary job was fact-checking. Doesn't AI threaten the talent pipeline, and thus the media ecosystem more broadly?
Cerasa: It's a bit terrifying, because we've come to understand how many creative things AI can do. For our children to use AI to write something in school, to do their homework, is really terrifying. But AI isn't going away—you have to educate people to use it in the correct way, and without hiding it. In our newspaper, there is no fear about AI, because our newspaper is very particular and written in a special way. We know, in a snobby way, that our skills are unique, so we are not scared. But I'm sure that a lot of newspapers could be scared, because normal articles written about the things that happened the day before, with the agency news—that kind of article, and also that kind of journalism, might be the past.
Article originally published at The Atlantic