Internet outages show signs of recovery as Spotify, Google and other popular online services are down for thousands of users

CNN | 21 hours ago

Multiple popular online services were hit by outages on Thursday afternoon, including Spotify, Discord, Google and Amazon Web Services, according to Downdetector, a website that tracks service disruptions across the web.
The disruptions came as Cloudflare and Google Cloud both experienced service issues. A Cloudflare representative told CNN that a Google Cloud outage was behind the problem and that Cloudflare's core services were not impacted.
'This is a Google Cloud outage,' a Cloudflare representative said in an email to CNN. 'A limited number of services at Cloudflare use Google Cloud and were impacted. We expect them to come back shortly.'
Google said it was 'investigating a service disruption' when asked for comment.
'We are currently investigating a service disruption to some Google Cloud services,' a representative said. 'Please view our public status dashboard for the latest updates.'
In a note on its status page Thursday afternoon, Cloudflare said it was starting to see services recover. A Spotify representative pointed CNN to Google's Cloud services dashboard when asked about technical issues impacting the streaming service.
Issue reports on Downdetector for Discord and Google Cloud also began to drop around that time. At their peak, the disruptions affected roughly 46,000 Spotify users, 11,000 Discord users and 14,000 Google Cloud users, according to Downdetector. Services such as Snapchat and the AI platform Character.ai were also affected, according to the outage tracker website.
Google Cloud plays a massive role in powering the web. In 2018, the company said it 'delivers 25% of worldwide internet traffic.'
It's also not the first time an outage like this has caused widespread interruptions in web service. An Amazon Web Services outage in 2021 disrupted everything from food orders to smart home devices, as CNBC reported at the time.
Several other high-profile services have seen outages in recent weeks, including ChatGPT and social media platform X.
This story is developing. Please refresh for the latest.


Related Articles

Man Killed by Police After Spiraling Into ChatGPT-Driven Psychosis

Yahoo | an hour ago

As we reported earlier this week, OpenAI's ChatGPT is sending people spiraling into severe mental health crises, causing potentially dangerous delusions about spiritual awakenings, messianic complexes, and boundless paranoia. Now, a wild new story in the New York Times reveals that these spirals led to the tragic death of a young man — likely a sign of terrible things to come as hastily deployed AI products accentuate mental health crises around the world.

Kent Taylor, a 64-year-old Florida resident, told the newspaper that his 35-year-old son, who had previously been diagnosed with bipolar disorder and schizophrenia, was shot and killed by police after charging at them with a knife.

His son had become infatuated with an AI entity, dubbed Juliet, that ChatGPT had been role-playing. The younger Taylor became convinced that Juliet had been killed by OpenAI, warning that he would go after the company's executives and that there would be a "river of blood flowing through the streets of San Francisco."

"I'm dying today," Kent's son told ChatGPT on his phone before picking up a knife, charging at the cops his father had called, and being fatally shot as a result.

The horrific incident highlights a worrying trend. Even those who aren't suffering from pre-existing mental health conditions are being drawn in by the tech, which has garnered a reputation for being incredibly sycophantic and playing into users' narcissistic personality traits and delusional thoughts.

It's an astonishingly widespread problem. Futurism has been inundated with accounts from concerned friends and family of people developing dangerous infatuations with AI, ranging from messy divorces to mental breakdowns.

OpenAI has seemingly been aware of the trend, telling the NYT in a statement that "as AI becomes part of everyday life, we have to approach these interactions with care." "We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher," reads the company's statement.

Earlier this year, the company was forced to roll back an update to ChatGPT's underlying GPT-4o large language model after users found that it had become far too obsequious and groveling. However, experts have since found that the company's intervention has done little to address the underlying issue, corroborated by the continued outpouring of reports.

Researchers have similarly found that AI chatbots like ChatGPT are incentivized to rope users in. For instance, a 2024 study found that AI algorithms are being optimized to deceive and manipulate users. In an extreme instance, a chatbot told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine — a dangerous and addictive drug — to get through an exhausting shift at work.

Worst of all, companies like OpenAI are incentivized to keep as many people hooked for as long as possible. "The incentive is to keep you online," Stanford University psychiatrist Nina Vasan told Futurism. The AI "is not thinking about what is best for you, what's best for your well-being or longevity... It's thinking 'right now, how do I keep this person as engaged as possible?'"

"What does a human slowly going insane look like to a corporation?" Eliezer Yudkowsky, who authored a forthcoming book called "If Anyone Builds It, Everyone Dies: Why Superhuman A.I. Would Kill Us All," asked the NYT rhetorically. "It looks like an additional monthly user," he concluded.
More on the delusions: People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions

Can AI fact-check its own lies?

Fast Company | an hour ago

As AI car crashes go, the recent publishing of a hallucinated book list in the Chicago Sun-Times quickly became a multi-vehicle pile-up. After a writer used AI to create a list of summer reads, the majority of which were made-up titles, the resulting article sailed through lax editorial review at the Sun-Times (and at least one other newspaper) and ended up being distributed to thousands of subscribers. The CEO eventually published a lengthy apology.

The most obvious takeaway from the incident is that it was a badly needed wake-up call about what can happen when AI gets too embedded in our information ecosystem. But CEO Melissa Bell resisted the instinct to simply blame AI, instead putting responsibility on the humans who use it and those who are entrusted with safeguarding readers from its weaknesses. She even included herself as one of those people, explaining how she had approved the publishing of special inserts like the one the list appeared in, assuming at the time there would be adequate editorial review (there wasn't).

The company has made changes to patch this particular hole, but the affair exposes a gap in the media landscape that is poised to get worse: as the presence of AI-generated content—authorized or not—increases in the world, the need for editorial safeguards also increases. And given the state of the media industry and its continual push to do 'more with less,' it's unlikely that human labor will scale up to meet the challenge. The conclusion: AI will need to fact-check AI.

Fact-checking the fact-checker

I know, it sounds like a horrible idea, somewhere between letting the fox watch the henhouse and sending Imperial Stormtroopers to keep the peace on Endor. But AI fact-checking isn't a new idea: in fact, when Google Gemini first debuted (then called Bard), it shipped with an optional fact-check step if you wanted it to double-check anything it was telling you. Eventually, this kind of step simply became integrated into how AI search engines work, broadly making their results better, though still far from perfect.

Newsrooms, of course, set a higher bar, and they should. Operating a news site comes with the responsibility to ensure the stories you're telling are true, and for most sites the shrugging disclaimer of 'AI can make mistakes,' while good enough for ChatGPT, doesn't cut it. That's why for most, if not all, AI-generated outputs (such as ESPN's AI-written sports recaps), humans check the work. As AI writing proliferates, though, the inevitable question is: Can AI do that job? Put aside the weirdness for a minute and see it as math, the key number being how often it gets things wrong. If an AI fact-checker can reduce the number of errors by as much as, if not more than, a human, shouldn't it do that job?

If you've never used AI to fact-check something, the recently launched service offers a glimpse at where the technology stands. It doesn't just label claims as true or false—it evaluates the article holistically, weighing context, credibility, and bias. It even compares multiple AI search engines to cross-check itself.

You can easily imagine a newsroom workflow that applies an AI fact-checker similarly, sending its analysis back to the writer, highlighting the bits that need shoring up. And if the writer happens to be a machine, revisions could be done lightning fast, and at scale. Stories could go back and forth until they reach a certain accuracy threshold, with anything that falls short held for human review.
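To make that back-and-forth concrete, here is a minimal Python sketch of the kind of loop the article imagines. Everything in it is an illustrative assumption: the fact_check and revise functions are stand-ins for whatever model calls a newsroom would actually wire in, and the accuracy threshold and round cap are invented for the example, not taken from any real product.

```python
# Hypothetical sketch of a write/fact-check loop, not any vendor's real API.

ACCURACY_THRESHOLD = 0.95  # assumed bar for publishing without human review
MAX_ROUNDS = 3             # cap on AI-only revision cycles before escalating

def fact_check(draft: str) -> dict:
    """Placeholder: a real checker would score claims and flag weak spots."""
    return {"accuracy": 0.90, "flagged_claims": ["unverified statistic"]}

def revise(draft: str, flagged: list[str]) -> str:
    """Placeholder: a real writer (human or AI) would shore up flagged claims."""
    return draft + " [revised]"

def review_loop(draft: str) -> tuple[str, str]:
    """Send the draft back and forth until it clears the bar or escalates."""
    for _ in range(MAX_ROUNDS):
        report = fact_check(draft)
        if report["accuracy"] >= ACCURACY_THRESHOLD:
            return draft, "publish"
        draft = revise(draft, report["flagged_claims"])
    return draft, "hold for human review"

print(review_loop("AI-generated summary of today's council meeting"))
```

The design point the sketch tries to capture is the escape hatch: anything that never clears the accuracy bar routes to a human editor rather than publishing automatically.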
All this makes sense in theory, and it could even be applied to what news orgs are doing currently with AI summaries. Nieman Lab has an excellent write-up on how The Wall Street Journal, Yahoo News, and Bloomberg all use AI to generate bullet points or top-line takeaways for their journalism. For both Yahoo and the Journal, there's some level of human review on the summaries (for Bloomberg, it's unclear from the article).

These organizations are already on the edge of what's acceptable—balancing speed and scale with credibility. One mistake in a summary might not seem like much, but when trust is already fraying, it's enough to shake confidence in the entire approach.

Human review helps ensure accuracy, of course, but also requires more human labor—something in short supply in newsrooms that don't have a national footprint. AI fact-checking could give smaller outlets more options with respect to public-facing AI content.

Similarly, Politico's union recently criticized the publication's AI-written reports for subscribers based on the work of its journalists, because of occasional inaccuracies. A fact-checking layer might prevent at least some embarrassing mistakes, like attributing political stances to groups that don't exist.

The AI trust problem that won't go away

Using AI to fight AI hallucination might make mathematical sense if it can prevent serious errors, but there's another problem that stems from relying even more on machines, and it's not just a metallic flavor of irony.

The use of AI in media already has a trust problem. The Sun-Times' phantom book list is far from the first AI content scandal, and it certainly won't be the last. Some publications are even adopting anti-AI policies, forbidding its use for virtually anything.

Because of AI's well-documented problems, public tolerance for machine error is lower than for human error. Similarly, if a self-driving car gets into an accident, the scrutiny is obviously much greater than if the car was driven by a person. You might call this the automation fallout bias, and whether you think it's fair or not, it's undoubtedly true. A single high-profile hallucination that slips through the cracks could derail adoption, even if it might be statistically rare.

Add to that what would probably be painful compute costs for multiple layers of AI writing and fact-checking, not to mention the increased carbon footprint. All to improve AI-generated text—which, let's be clear, is not the investigative, source-driven journalism that still requires human rigor and judgment. Yes, we'd be lightening the cognitive load for editors, but would it be worth the cost?

Despite all these barriers, it seems inevitable that we will use AI to check AI outputs. All indications point to hallucinations being inherent to generative technology. In fact, newer 'thinking' models appear to hallucinate even more than their less sophisticated predecessors. If done right, AI fact-checking would be more than a newsroom tool, becoming part of the infrastructure for the web. The question is whether we can build it to earn trust, not just automate it.

The amount of AI content in the world can only increase, and we're going to need systems that can scale to keep up. AI fact-checkers can be part of that solution, but only if we manage—and accept—their potential to make errors themselves. We may not yet trust AI to tell the truth, but at least it can catch itself in a lie.
