
AI is already beating traditional forecasters when it comes to predicting weather
A new AI model is outperforming the world's top forecasting systems for weather, pollution and cyclones, according to a study, raising hopes that weather forecasting could become cheaper and more accurate.
The model, called Aurora, accurately predicted cyclone paths and produced weather forecasts in a matter of seconds instead of hours.
Researchers at Microsoft and the University of Pennsylvania trained the model on a vast collection of atmospheric data, including weather observations, climate simulations and satellite measurements.
When evaluated against global forecasting benchmarks, the AI system consistently produced faster forecasts than traditional models and, in many cases, offered greater accuracy, according to the new research published in Nature.
Aurora was able to predict the path of Doksuri, the costliest Pacific typhoon of 2023, four days before landfall. While official weather agencies forecast landfall in Taiwan, Aurora correctly placed it in the northern Philippines.
It also tracked the path and wind speeds of Storm Ciarán, which struck north-western Europe in autumn 2023, outperforming traditional models as well as newer AI-based systems such as GraphCast and FourCastNet.
According to the study, Aurora was the only model to correctly estimate peak winds from the storm.
The results mark a major advance in modelling complex Earth systems with speed and accuracy. 'Earth's climate is perhaps the most complex system we study, with interactions spanning from quantum scales to planetary dynamics,' noted Dr Paris Perdikaris, associate professor at the University of Pennsylvania.
'With Aurora, we addressed a fundamental challenge in Earth system prediction: how to create forecasting tools that are both more accurate and dramatically more computationally efficient.'
The system is not limited to weather. Aurora has also been tested for forecasting air quality and ocean waves. In one case study, it accurately predicted a large sandstorm in Iraq, which closed airports and led to over 5,000 hospitalisations, a day before it occurred.
The model managed to do this despite being trained without explicit knowledge of atmospheric chemistry. Aurora 'did not have any prior knowledge about atmospheric chemistry or how nitrogen dioxide, for instance, interacts with sunlight,' said study co-first author Dr Megan Stanley of Microsoft Research, 'that wasn't part of the original training.'
"And yet,' she said, 'in fine-tuning, Aurora was able to adapt to that because it had already learned enough about all of the other processes'.
The model was also able to simulate complex ocean wave patterns generated by typhoons such as Nanmadol, which struck Japan in 2022. Aurora captured wave heights and direction with more detail and higher accuracy than the standard ocean forecasting systems in use today.
'When we compared Aurora to official forecasts from agencies like the National Hurricane Center, China Meteorological Administration and others, Aurora outperformed all of them across different basins worldwide,' said Dr Perdikaris.
The model works by identifying patterns in large environmental datasets instead of solving physical equations. This allows it to generate 10-day weather forecasts and 5-day air quality predictions in under a minute, compared to the hours needed by traditional models running on supercomputers.
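As a rough illustration of how such a data-driven forecaster produces a multi-day outlook, the sketch below rolls a learned single-step predictor forward in time. The `model` object, step size and function name are hypothetical: this is not Aurora's actual interface, only the general autoregressive pattern used by AI weather models.

```python
import torch

def rollout_forecast(model, initial_state, lead_time_hours=240, step_hours=6):
    """Build a multi-day forecast by repeatedly applying a learned one-step predictor.

    Illustrative only: `model` is assumed to be a neural network that maps the
    atmospheric state at time t to the state at t + step_hours.
    """
    state = initial_state
    forecast = []
    with torch.no_grad():  # inference only, so no gradients are tracked
        for _ in range(lead_time_hours // step_hours):
            state = model(state)    # predict the next 6-hour state from the current one
            forecast.append(state)  # 40 steps of 6 hours = a 10-day forecast
    return forecast
```

Because each step is a single forward pass through a neural network rather than a numerical integration of physical equations, the whole rollout can finish in well under a minute on ordinary hardware.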
A key advantage of Aurora is that, unlike traditional systems, it does not need a supercomputer and can run on much simpler machines. This could make accurate local forecasts possible even in countries with limited resources.
'The most transformative aspect is democratising access to high-quality forecasts,' Dr Perdikaris said. 'Traditional systems require supercomputers and specialised teams, putting them out of reach for many communities worldwide. Aurora can run on modest hardware while matching or exceeding traditional model performance.'
The new AI model's foundation model architecture allows it to be fine-tuned for various forecasting tasks, from local rain patterns to seasonal trends. 'Knowledge gained from one area, such as atmospheric dynamics used in weather forecasting, enhances its predictive performance in other domains, including air quality modelling or predicting tropical cyclone formation,' noted Dr Perdikaris.
'This cross-domain learning is central to the foundation model philosophy that guides my broader research programme.'
Each new application requires only a small amount of additional data.
According to Microsoft, some fine-tuning experiments took only a few weeks compared to the years typically needed to build numerical models.
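A minimal sketch of what such fine-tuning could look like in practice is shown below, assuming a pretrained PyTorch model and a small dataset of input/target pairs for the new domain; the optimiser, learning rate, loss and batch size are illustrative assumptions, not details taken from the Aurora paper.

```python
import torch
from torch.utils.data import DataLoader

def fine_tune(pretrained_model, domain_dataset, epochs=3, lr=1e-5):
    """Adapt a pretrained forecasting backbone to a new task (e.g. air quality)
    using only a small amount of extra data. All hyperparameters here are
    illustrative, not values from the study."""
    loader = DataLoader(domain_dataset, batch_size=4, shuffle=True)
    optimiser = torch.optim.AdamW(pretrained_model.parameters(), lr=lr)
    loss_fn = torch.nn.L1Loss()  # mean absolute error on the new target variables

    pretrained_model.train()
    for _ in range(epochs):
        for inputs, targets in loader:
            optimiser.zero_grad()
            loss = loss_fn(pretrained_model(inputs), targets)
            loss.backward()   # gradients update the pretrained weights directly
            optimiser.step()
    return pretrained_model
```

Because the backbone already encodes broadly useful atmospheric structure, relatively few passes over a modest dataset can be enough to adapt it, which is consistent with the weeks-long fine-tuning turnaround described above.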
Although Aurora still needs existing data sources to generate forecasts, researchers say its speed and flexibility could make it useful for real-time applications in the future.
Microsoft says the source code and model weights are publicly available and Aurora is already being used to improve weather services on its MSN platform.
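For those who want to experiment with the public release, the sketch below shows one way the open-sourced model might be loaded. The package name, class name and checkpoint filename are based on the public microsoft/aurora repository as best understood here and should be treated as assumptions to verify against Microsoft's documentation.

```python
# Assumption: the release is installable with `pip install microsoft-aurora`;
# names and checkpoint files should be checked against the official repository.
from aurora import Aurora

model = Aurora(use_lora=False)  # base pretrained configuration (assumption)
model.load_checkpoint("microsoft/aurora", "aurora-0.25-pretrained.ckpt")
model.eval()  # inference mode: forecasts are produced by rolling the model forward
```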
The researchers are interested in extending the model to generate predictions on a wider range of Earth system behaviours, including local and seasonal weather, extreme rainfall and urban flooding.
"What excites me most about this technology is its broader applicability," Dr Perdikaris. "At Penn, we are exploring how similar foundation model approaches can address other prediction challenges beyond weather – from urban flooding to renewable energy forecasting to air quality management – making powerful predictive tools accessible to communities that need them most."
Its developers believe that similar systems could eventually be adapted for other forecasting challenges, including floods, heatwaves and agriculture.
Related Articles


NBC News
Trump administration cuts 'Safety' from AI Safety Institute
The Trump administration says it's reforming a Biden-era artificial intelligence safety institute, renaming and reformulating one of the only federal government departments dedicated to oversight of the burgeoning technology.

Commerce Secretary Howard Lutnick said in a news release Tuesday that the Trump administration would transform the former U.S. AI Safety Institute, which former President Joe Biden established in November 2023, into the Center for AI Standards and Innovation.

The reframing away from 'safety' is in line with the Trump administration's statements and actions signaling its belief that oversight efforts for AI companies could unnecessarily dull the United States' competitive edge in the space.

'For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards,' Lutnick said in the release. 'CAISI will evaluate and enhance U.S. innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards.'

The U.S. AI Safety Institute was created to evaluate and test AI models and create standards for safety and security. It also formed a consortium on AI safety, which was made up of over 200 members, including OpenAI, Meta and Anthropic.

Although it's unclear whether the transformation will mean any major changes to the institute's operations, the move appears to reflect the Trump administration's 'pro-innovation' approach to deregulating AI technology. Unlike Biden's executive order on AI and the former institute, the reformed center is set to focus on additional aspects like evaluating 'potential security vulnerabilities and malign foreign influence arising from use of adversaries' AI systems, including the possibility of backdoors and other covert, malicious behavior,' as well as 'guard against burdensome and unnecessary regulation of American technologies by foreign governments.'

In January, the Chinese-created AI app DeepSeek heightened national security concerns around AI with its latest release, which made waves with its advancements. President Donald Trump said the app 'should be a wake-up call' about the prospect of international competition for American tech companies. Lawmakers introduced a bill to ban DeepSeek from government devices, and the Navy advised its members not to use it 'in any capacity.'

The move to reform the institute appears to have been in development for a while. Reuters reported this year that no one from the U.S. AI Safety Institute's staff would attend an AI summit in Paris in February alongside Vice President JD Vance. The institute's inaugural director, Elizabeth Kelly, also announced she would step down that month.

In his speech at the summit, Vance echoed Lutnick's sentiments, saying, 'We need international regulatory regimes that fosters the creation of AI technology rather than strangles it.' He also spoke about how he believes AI should be free from 'ideological bias.'

Since he returned to office, Trump has made it clear that his administration wants to embrace the expansion of AI. Within his first week, Trump announced the creation of the $500 billion Stargate initiative in collaboration with OpenAI, Oracle and SoftBank, which aims to make the United States a world leader in AI.
Trump also signed an executive order on AI in his first week in office that focuses on easing regulations on AI technology and revoking 'existing AI policies and directives that act as barriers to American AI innovation.' Biden's executive order on AI, which focused on safety and privacy standards for the technology, has been scrapped from the White House's website.


The Guardian
Reddit sues AI company Anthropic for allegedly ‘scraping' user comments to train chatbot
Social media platform Reddit has sued the artificial intelligence company Anthropic, alleging that it is illegally 'scraping' the comments of Reddit users to train its chatbot Claude.

Reddit claims that Anthropic has used automated bots to access the social network's content despite being asked not to do so, and 'intentionally trained on the personal data of Reddit users without ever requesting their consent'. Anthropic didn't immediately return a request for comment.

The claim was filed on Wednesday in the superior court of California in San Francisco. 'AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data,' said Ben Lee, Reddit's chief legal officer, in a statement on Wednesday.

Reddit has previously entered licensing agreements with Google, OpenAI and other companies to enable them to train their AI systems on Reddit commentary. The large quantity of text generated by Reddit's 100 million daily active users has played a part in the creation of many large language models, the type of AI that underpins ChatGPT, Claude and others.

Those agreements 'enable us to enforce meaningful protections for our users, including the right to delete your content, user privacy protections, and preventing users from being spammed using this content', Lee said.


The Sun
Major retailer's systems DOWN with shoppers ‘unable to pay in stores across UK'
SHOPPERS were unable to buy products from a major retailer across UK stores today after a reported system failure.

H&M customers at branches up and down the country were turned away for several hours after an issue with the company's payment tech, as reported by MailOnline.

A worker at a London site reported they had been unable to sell any items for around two hours. It is not known if online customers were affected, or if this was a nationwide problem. The cause of the apparent system failure, and whether it was a cyber attack, is also unknown.

A spokesperson for H&M told The Sun: "We are aware of the problem and are looking into resolving it as quickly as possible. We apologise to our customers for the inconvenience."

This comes after jeweller Cartier and fashion brand The North Face became the latest victims of cyber attacks. Cartier, whose watches are worn by stars such as Cate Blanchett, said: "An unauthorised party gained temporary access to our system. We contained the issue and further enhanced the protection of our systems and data."

North Face said it discovered a "small-scale" attack in April. It advised customers to change their passwords. Both brands said customer names and email addresses were taken, but not financial information.

It is the latest in a wave of attacks that began with Marks And Spencer and the Co-op facing severe disruption in April. Last month, Harrods joined the list, followed by Adidas, which was also hit.

Co-op stores have even started flogging Christmas pastries in May, amid stock supply chaos after a crippling cyber attack. In a desperate bid to fill empty shelves, the chain shipped out the chicken and cranberry Festive Slices to some supermarkets. The £1.90 treats are typically sold in November and December, but ongoing online problems meant they were available again last week.

Co-op's internal ordering system was hit last month before the retailer pulled the plug on its own software to limit the damage. As a result, some branches have been sent the winter warmers instead of the anticipated summer supplies. Private customer data was stolen in the hack but the Co-op said that did not include passwords, bank or credit card details.

The criminals suspected to be behind the attack are known collectively as "Scattered Spider", one of the most prolific cybergangs of the past 18 months. The gang specialises in ransomware, a type of attack designed to steal information or access in exchange for a sum of money. They have been gaining a reputation for targeting large, customer-facing organisations through social engineering and identity-focused tactics.

What is a cyber attack?

A CYBER attack is any deliberate attempt to disrupt, damage, or gain unauthorised access to computer systems, networks, or digital devices. These attacks can target individuals, businesses, or even governments, and their motives can range from financial gain to political disruption. Cyber attacks can take many forms, employing various techniques to achieve their malicious goals. Common types of cyber attacks include:

Malware: Malicious software designed to damage or gain control of a system. Examples include viruses, worms, ransomware, and spyware.
Phishing: Deceptive attempts to trick individuals into revealing sensitive information such as usernames, passwords, or credit card details, often through fake emails or websites.
Denial-of-Service (DoS) Attacks: Flooding a network or server with traffic to overwhelm its resources and make it unavailable to legitimate users.
SQL Injection: Exploiting vulnerabilities in website databases to gain unauthorised access to data.
Ransomware: Malware that encrypts a victim's data and demands a ransom for its release.
Social Engineering: Manipulating individuals into performing actions or divulging confidential information.