
US tech companies skewered over false tsunami information
When a massive 8.8 magnitude earthquake struck off Russia's Pacific coast on Tuesday, one core worry immediately emerged: a life-threatening tsunami. Around the Pacific Ocean, weather authorities leapt into action, modelling the threat and releasing warnings and advisories to prepare their communities for what could have been a horrific night. As the news unfolded, residents of Hawaii, Japan and North America's West Coast jumped onto their devices to seek evacuation plans and safety guidance.
Online, some got their information from artificial intelligence chatbots. And in the moment of potential crisis, a few of those newly prevalent tools appear to have badly bungled the critical task at hand.
Grok, the chatbot made by Elon Musk's Bay Area-based xAI and embedded in the social media site X, repeatedly told the site's users that Hawaii's tsunami warning had been cancelled when it actually hadn't, incorrectly citing sources. Social media users reported similar problems with Google Search's AI overviews after receiving inaccurate information about authorities' safety warnings in Hawaii and elsewhere. Thankfully, the tsunami danger quickly subsided on Tuesday night and Wednesday morning without major damage.
Still, the issues speak to the growing role of AI tools in people's information diets – Google often places AI overviews, which compile information from around the Internet, above other search results, and Grok automatically responds to users who tag "@Grok" in X posts – and to the tools' potentially dangerous fallibility.
On Tuesday evening, Grok wrote that the National Weather Service's Pacific Tsunami Warning Center had cancelled Hawaii's tsunami warning at 3.23pm local time "after data showed no significant waves. No impacts occurred – Hawaii is safe." This wasn't true. Authorities had issued the tsunami warning at 2.43pm, and waves didn't reach the islands until later in the evening.
In reality, the Honolulu-based weather authority kept its warnings active until after 10.30pm, when it downgraded the guidance to an advisory. (Grok wasn't wrong everywhere; just four minutes after its "3.23pm" claim, it accurately told another user that "evacuations are ongoing" and that warnings were active for Hawaii and parts of Alaska.)
X users blasted the chatbot for the mistakes, with one calling AI a "disaster for real time events." Another user tagged Musk and included several screenshots of Grok's errors, writing that the chatbot had "spread life-endangering misinformation during the tsunami event."
A critic of Google – who prompted the search tool to show an AI overview by adding "+ai" to their search – called the text that showed up "dangerously wrong."
X did not respond to SFGATE's request for comment. Google, which dubs high-stakes queries "Your Money or Your Life," said in a May document explaining the AI overviews that such queries give the tool "an even higher bar for showing supporting information from reliable and trustworthy sources."
Google spokesperson Ned Adriance wrote, "Search quickly and automatically updates based on fresh web content and the latest information. In this case, Search now shows a dedicated alert feature that provides up-to-date information for people searching about this topic." ("Tsunami" searches on Wednesday morning by this reporter yielded a small "advisory" symbol and then, once the advisory was cancelled in the Bay Area, just a selection of news stories.)
Grok, in reply to one of the posters complaining about its errors, wrote, "We'll improve accuracy." – SFGate, San Francisco/Tribune News Service
