
Global ChatGPT Outage Disrupts Users in India and UAE
OpenAI's status page noted 'elevated error rates on ChatGPT record mode, Sora and Codex,' and the company confirmed its engineers were actively working on mitigation measures. The widespread impact began just after 6:10 am IST, affecting users across North America, Europe, and Asia.
Users faced blank interfaces, verification loops, and failures to load previous conversations. Some could not even initiate new sessions. OpenAI offered guidance to users trying to work around the disruption, but did not provide a timeline or a root cause.
The company did not identify a specific trigger, but Downdetector data shows this marks the second significant outage this month, following one on 10 June. This recurring instability has raised concerns about OpenAI's infrastructure resilience, especially as AI tools become integral to education, business, coding and creative work.
In the UAE and India, users began logging issues around 5 am. Chat histories failed to load and sessions stalled, exacerbating frustrations among professional and casual users alike. According to Downdetector, 82 per cent of reports were tied to ChatGPT access issues, 12 per cent to website availability, and 6 per cent to mobile app failures.
On social media platforms like X, users shared screenshots and vented frustrations. Common complaints included error loops and service unavailability, sparking concerns within developer and content-creator communities that rely on OpenAI's tools in real time.
OpenAI subsequently issued a statement confirming that services had been fully restored by about 7 am local time in affected regions, though the company offered little explanation of the failure's origins.
Analysts note that as AI systems like ChatGPT become embedded in daily workflows—from drafting documents to coding support—even brief outages can have outsize repercussions. The frequency of these outages this month has drawn scrutiny, prompting calls for OpenAI to bolster system capacity and redundancy.
Enterprise clients dependent on GPT-powered tools for generating content, automating customer support responses, or powering educational platforms may face operational setbacks. In response, OpenAI told users it is implementing mitigation strategies to prevent repeats of such disruptions.
Market watchers will track whether these outages trigger enterprise clients to reassess vendor risk, potentially exploring multi‑vendor AI strategies or hosting localised AI solutions with stronger reliability guarantees.
Despite the disruption, by mid-morning on Wednesday ChatGPT and affiliated services were reported as fully operational. Yet, as AI becomes a mission‑critical component of digital ecosystems, the pressure to ensure consistent performance has surged dramatically.

Related Articles


Arabian Post
11 hours ago
LLMs Fail to Deliver Real Intelligence Despite Huge Investment
The trajectory of large language models like GPT and its counterparts has raised numerous questions in recent months. As companies such as OpenAI continue to pour billions into scaling these models, the fundamental issue of their cognitive limitations remains glaring. The hype surrounding LLMs, which are widely praised for their fluency and utility, overlooks a critical flaw in their design: these models may perform tasks that mimic intelligent behaviour, but they do not actually think, reason, or understand.

A growing chorus of AI researchers and experts argues that no amount of funding, data, or compute power will transform LLMs into entities capable of genuine intelligence. Despite ambitious plans from companies like OpenAI to expand the infrastructure behind LLMs to an unprecedented scale, the current model architecture continues to hit the same cognitive wall. At the core of this issue is the realisation that LLMs are fundamentally engineered to mimic intelligence rather than to achieve it.

OpenAI's recent announcements have been staggering. The company has unveiled plans to deploy up to 100 million GPUs, an infrastructure investment that could exceed $3 trillion, to enhance the size and speed of existing LLMs. Such efforts would consume enormous amounts of energy, rivalling that of entire countries, and generate vast quantities of emissions. The scale of the operation is unprecedented, but so is the question: will adding more tokens to a slightly bigger and faster model finally lead to true intelligence?

The simple answer appears to be no. LLMs are not designed to possess cognition. They are designed to predict, autocomplete, summarise, and assist with routine tasks, but these are functions of performance, not understanding. The biggest misconception in AI development today is the conflation of fluency with intelligence.
Proponents of scaling continue to insist that more data, bigger models, and more compute will unlock something that remains fundamentally elusive. But as the limitations of LLMs become increasingly apparent, the vision of reaching artificial general intelligence with current methodologies looks like a pipe dream. The reality of AI's current state is jarring: a vast burning of resources with little to show for it.

Companies like Meta, xAI, and DeepMind are all investing heavily in LLMs, creating an illusion of progress by pushing for bigger and more powerful systems. However, much of this amounts to 'performance theatre', with energy and resources funnelled into chasing benchmarks and superficial gains in fluency rather than advancing the underlying technology. This raises important questions: why is there so little accountability for the environmental impact of such projects? Where is the true innovation in cognitive science?

LLMs, despite accomplishing specific tasks effectively, remain limited by their design. The push to scale them further, on the assumption that doing so will lead to breakthroughs in artificial intelligence, ignores inherent problems that cannot be solved with brute force alone. The architecture behind LLMs, based on pattern recognition and statistical correlation, simply cannot generate the complex, dynamic processes involved in real cognition.

Experts argue that the AI community must acknowledge these limitations and pivot toward new approaches. Many researchers now agree that a shift in paradigm is necessary. LLMs, no matter how large or finely tuned, cannot produce the kind of intelligence required to understand, reason, or adapt in a human-like way. To move forward, a radically different model must be developed, one that incorporates cognitive architecture and a deeper understanding of how real intelligence functions.
The current momentum in AI, driven by large companies and investors, seems to be propelled by a desire for immediate results and visible performance metrics. But it's crucial to remember that speed means little if it's headed in the wrong direction. Without a rethinking of the very foundations of AI research, the race to scale LLMs will continue to miss the mark. In fact, there's a real risk that the over-emphasis on the scalability of these models could stifle the kind of breakthroughs needed to move the field forward.


TECHx
a day ago
Starlink Outage Hits Thousands After Software Failure
SpaceX's Starlink suffered a major outage on Thursday, affecting tens of thousands of users globally. The outage was caused by an internal software failure, according to the company.

Users across the U.S. and Europe reported connectivity issues around 3 p.m. EDT (1900 GMT), according to Downdetector, which recorded up to 61,000 user reports. Starlink, which serves over 6 million users in nearly 140 countries and territories, acknowledged the problem on its official X account and announced that it was actively working on a solution.

Service was mostly restored after 2.5 hours, Michael Nicolls, vice president of Starlink Engineering, revealed on X. 'The outage was due to failure of key internal software services that operate the core network,' Nicolls wrote, apologizing for the disruption and confirming efforts to find the root cause. Elon Musk also apologized, stating, 'Sorry for the outage. SpaceX will remedy root cause to ensure it doesn't happen again.'

Experts commented on the unusual scale of the outage. Doug Madory, an analyst at internet monitoring firm Kentik, reported that the outage was global. 'This is likely the longest outage ever for Starlink, at least while it became a major service provider,' he said. The incident has sparked discussion about possible causes, including a software glitch, a failed update, or even a cyberattack. Gregory Falco, director of a space and cybersecurity lab at Cornell University, speculated, 'I'd guess this is a bad software update, not entirely dissimilar to the CrowdStrike issue with Windows last year, or a cyberattack.'

Key details include: SpaceX has launched more than 8,000 Starlink satellites since 2020 to build its low-Earth orbit network. The company is partnering with T-Mobile to provide direct-to-cell text messaging services in rural areas.
Starlink's growing user base has pushed SpaceX to focus on enhancing network speed and bandwidth. The company's Starshield unit, which handles military satellite services, holds significant contracts with the Pentagon and U.S. intelligence agencies. It remains unclear whether these services were affected by the outage. The incident is being compared to the 2024 CrowdStrike cybersecurity software update that caused worldwide flight cancellations and disrupted 8.5 million Microsoft Windows devices.

Source: Reuters


Zawya
2 days ago
Elon Musk's Starlink network suffers rare global outage
SpaceX's Starlink suffered one of its biggest international outages on Thursday when an internal software failure knocked tens of thousands of users offline, a rare disruption for Elon Musk's powerful satellite internet system.

Users in the U.S. and Europe began experiencing the outage at around 3 p.m. EDT (1900 GMT), according to Downdetector, a crowdsourced outage tracker that logged as many as 61,000 user reports. Starlink, which has more than 6 million users across roughly 140 countries and territories, later acknowledged the outage on its X account and said "we are actively implementing a solution."

Starlink service mostly resumed after 2.5 hours, Michael Nicolls, vice president of Starlink Engineering, wrote on X. "The outage was due to failure of key internal software services that operate the core network," Nicolls said, apologizing for the disruption and vowing to find its root cause. Musk also apologized: "Sorry for the outage. SpaceX will remedy root cause to ensure it doesn't happen again," the SpaceX CEO wrote on X.

The outage was a rare hiccup for SpaceX's most commercially sensitive business, and it had experts speculating whether the service, known for its resilience and rapid growth, was beset by a glitch, a botched software update or even a cyberattack. Doug Madory, an expert at the internet analysis firm Kentik, said the outage was global and that such a sweeping interruption was unusual. "This is likely the longest outage ever for Starlink, at least while it became a major service provider," Madory said.

As Starlink has gained more users, SpaceX has focused heavily in recent months on updating its network to accommodate demands for higher speed and bandwidth.
Through a partnership with T-Mobile, the company is also expanding the constellation with larger, more powerful satellites to offer direct-to-cell text messaging, a line of business in which mobile phone users can send emergency texts through the network in rural areas. SpaceX has launched more than 8,000 Starlink satellites since 2020, building a uniquely distributed network in low-Earth orbit that has attracted intense demand from militaries, transportation industries and consumers in rural areas with poor access to traditional, fiber-based internet.

"I'd speculate this is a bad software update, not entirely dissimilar to the CrowdStrike mess with Windows last year, or a cyberattack," said Gregory Falco, director of a space and cybersecurity laboratory at Cornell University. An update to CrowdStrike's widely used cybersecurity software led to worldwide flight cancellations and impacted industries around the globe in July last year, disrupting internet services and affecting 8.5 million Microsoft Windows devices.

It was unclear whether Thursday's outage affected SpaceX's other satellite-based services that rely on the Starlink network. Starshield, the company's military satellite business unit, has billions of dollars' worth of contracts with the Pentagon and U.S. intelligence agencies.

(Reporting by Juby Babu in Mexico City, and Raphael Satter and Joey Roulette in Washington; Editing by Anil D'Silva and Matthew Lewis)