
A.I. Is Poised to Revolutionize Weather Forecasting. A New Tool Shows Promise.
The latest entrant is Aurora, an A.I. weather model from Microsoft, and it stands out for several reasons, according to a report published Wednesday in the journal Nature. It's already in use at one of Europe's largest weather centers, where it's running alongside other traditional and A.I.-based models.
The Aurora model can make accurate 10-day forecasts at smaller scales than many other models, the paper reports.
And it was built to handle not only weather, but also any Earth system with data available. That means it can be trained, relatively easily, to forecast things like air pollution and wave height in addition to weather events like tropical cyclones. Users could add almost any system they like down the road; for instance, one start-up has already honed the model to predict renewable energy markets.
'I'm most excited to see the adoption of this model as a blueprint that can add more Earth systems to the prediction pipeline,' said Paris Perdikaris, a professor at the University of Pennsylvania who led the development of Aurora while working at Microsoft.
It's also fast, able to return results in seconds as opposed to the hours that non-A.I. models can take.
Traditional models, the basis of weather forecasting over the last 70 years, use layers of complex mathematical equations to represent the physical world: the sun heating the planet, winds and ocean currents swirling around the globe, clouds forming, and so on.
Researchers then add real weather data and ask the computer models to predict what will happen next. Human forecasters look at results from many of these models and combine those with their own experience to tell the public what scenario is most likely.
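To make that concrete, here is a minimal sketch, in Python, of the kind of calculation a physics-based model performs, reduced from millions of coupled equations on a 3-D global grid to a single toy advection equation on a 1-D line. The names and values are illustrative, not taken from any operational model.

```python
# Toy physics step: temperature carried along by a constant wind,
# i.e. the equation dT/dt = -u * dT/dx solved by finite differences.
# Purely illustrative; real models couple many such equations in 3-D.
import numpy as np

def advect(temp, wind, dx=1.0, dt=0.1):
    """Advance the temperature field one small time step (upwind scheme)."""
    grad = (temp - np.roll(temp, 1)) / dx  # spatial gradient, assuming wind > 0
    return temp - wind * dt * grad

temp = 15 + 5 * np.sin(np.linspace(0, 2 * np.pi, 100))  # initial "weather"
for _ in range(50):                                     # march forward in time
    temp = advect(temp, wind=2.0)                       # the "forecast" loop
```

Stacked across the whole globe and many interacting variables, steps like this are what make traditional forecasts so computationally expensive.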
'Final forecasts are ultimately made by a human expert,' Dr. Perdikaris said. (That is true for A.I.-based forecasts, too.)
This system has worked well for decades. But the models are incredibly complex and require expensive supercomputers. They also take many years to build, making them difficult to update, and hours to run, slowing down the forecasting process.
Artificial intelligence weather forecasting models are faster to build, run and update. Researchers feed the models huge amounts of weather and climate data and train them to recognize patterns. Then, based on those patterns, the models predict what comes next.
But the A.I. models still need equation-based models and real-world data for their starting points, and for reality checks.
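As an illustration of that train-on-patterns approach, here is a minimal, hypothetical sketch in Python: it fits a linear map from one synthetic "weather state" to the next and rolls the fitted model forward, the same learn-then-predict loop that real A.I. forecasters run with deep networks on decades of reanalysis data.

```python
# Hypothetical miniature of data-driven forecasting: learn tomorrow's
# state from today's, then iterate. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                    # 1,000 "past days", 20 grid values
A_true = np.eye(20) + 0.1 * rng.normal(size=(20, 20))
Y = X @ A_true + 0.01 * rng.normal(size=X.shape)   # the following days, plus noise

# Training: ridge regression standing in for a deep network
lam = 1e-3
A_hat = np.linalg.solve(X.T @ X + lam * np.eye(20), X.T @ Y)

state = X[0]               # the starting point, which in practice comes
for _ in range(10):        # from physics-based analyses of real observations
    state = state @ A_hat  # each step predicts the next day from the last
```

The final lines show the dependence the article describes: the learned model can step forward on its own, but its starting state still comes from equation-based systems and real-world data.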
'It doesn't know the laws of physics, so it could make up something completely crazy,' said Amy McGovern, a computer scientist and meteorologist at the University of Oklahoma who was not involved in the study. So most, though not all, A.I. weather forecasting models still rely on real-world data and physics-based models in some capacity, and human forecasters need to interpret their results carefully.
Dr. Perdikaris and his collaborators built Aurora using this method, training it on data from physics-based models and then making purely A.I.-based predictions, but they didn't want it to be limited to weather. So they trained it on multiple large Earth-system data sets, creating a broad base of artificial expertise.
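A hedged sketch of what extending such a model to a new Earth system can look like is the pretrain-then-fine-tune pattern, shown here in PyTorch with invented names and shapes (backbone, wave_head), not Aurora's actual architecture: a pretrained backbone is frozen and only a small new head is trained for a new variable such as wave height.

```python
# Hypothetical fine-tuning sketch: freeze a "foundation" backbone,
# train a small new head for a new Earth-system variable.
# Shapes, names and data are invented for illustration.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128))
backbone.requires_grad_(False)       # frozen; stands in for pretrained weights

wave_head = nn.Linear(128, 1)        # new output: wave height per location
opt = torch.optim.Adam(wave_head.parameters(), lr=1e-3)

x = torch.randn(32, 128)             # stand-in for encoded atmospheric states
target = torch.randn(32, 1)          # stand-in for observed wave heights

for _ in range(100):                 # train only the new head
    loss = nn.functional.mse_loss(wave_head(backbone(x)), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because only the small head is trained, adding a new system this way is far cheaper than building a model from scratch, which is what makes the blueprint idea attractive.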
Aurora 'is an important step toward more versatile forecasting systems,' said Sebastian Engelke, a professor of statistics at the University of Geneva who was not involved in the study. The model's flexibility and resolution are its most novel contributions, he said.
As in other areas, there's been a big push toward using A.I. for weather forecasting in the past few years, but the major A.I. forecasting models are still global, not local. Forecasts at the scale of a single storm barreling toward a city need to come from a specialized model, and those are mostly the old-school variety, at least for now.
Extreme weather events like heat waves or heavy downpours are still challenging for both traditional and A.I. models to predict.
A.I. forecasting models need careful calibration and human verification before they're widely used, Dr. Perdikaris said. But some are already being tested in the real world. The European Centre for Medium-Range Weather Forecasts, which provides meteorological forecasts to dozens of countries, developed its own A.I. forecasting model, which it deployed in February. The center now uses that model, along with Aurora and other A.I. models, in its weather services, and reports a good experience with them so far.
'It's absolutely an exciting time,' said Peter Düben, who leads the European center's Earth modeling team.
Other researchers are more conservative, given the checks and improvements the models need. And artificial intelligence tools come with a significant energy cost to train, though Dr. Perdikaris said this would be worth it in the long run as more people use the models.
'We're all in the hype right now,' said Dr. McGovern, who leads a National Science Foundation institute that studies trust in A.I. applications to climate and weather problems. 'A.I. weather is amazing. But I think there's still a long way to go.'
And the Trump administration's cuts to agencies including the National Oceanic and Atmospheric Administration, the National Science Foundation and the National Weather Service could stymie further improvements in A.I. forecasting tools, because federal data sets and models are critical to developing and improving A.I. models, Dr. Perdikaris said.
'It's quite unfortunate, because I think it's going to slow down progress,' he said.
Related Articles


Forbes
7 Terrifying AI Risks That Could Change The World
There's no doubt about it, AI can be scary. Anyone who says they aren't at least a little bit worried is probably very brave, very stupid, or a liar. It makes total sense, because the unknown is always frightening, and when it comes to AI, there are a lot of unknowns. How exactly does it work? Why can't we explain certain phenomena like hallucinations? And perhaps most importantly, what impact is it going to have on our lives and society?

Many of these fears have solidified into debates around particular aspects of AI: its impact on human jobs, creativity or intellectual property rights, for example. And those involved often make it clear that the potential implications are terrifying. So here I will overview what I have come to see as some of the biggest fears. These are potential outcomes of the AI revolution that no one wants to see, but we can't be sure they aren't lurking around the corner…

1. Impact On Jobs
One of the most pressing fears, and perhaps the one that gets the most coverage, is that huge swathes of us will be made redundant by machines that are cheaper to run than human workers. Having robots do all the work for us sounds great, but in reality, most people need a job to earn a living. Some evangelize about a post-scarcity economy where robot labor creates an abundance of everything we need, but this is highly theoretical. What's real is that workers in fields as diverse as software engineering, voice acting and graphic design are already reportedly being replaced. Fueling this fear is that while international bodies and watchdogs like the WEF have issued warnings about the potential threat, governments have been slow to come out with plans for a centralized, coordinated response.

2. Environmental Harm
Operating generative AI language models requires huge amounts of compute power. This is provided by vast data centers that burn through energy at rates comparable to small nations, creating poisonous emissions and noise pollution. They consume massive amounts of water at a time when water scarcity is increasingly a concern. Those who argue that AI's benefits outweigh the environmental harm it causes often believe that this damage will be offset by efficiencies that AI will create. But again, a lot of these advances are currently theoretical, while the environmental impact of AI is happening today.

3. Surveillance
The threat that AI poses to privacy is at the root of this one. With its ability to capture and process vast quantities of personal information, there's no way to predict how much it might know about our lives in just a few short years. Employers increasingly monitoring and analyzing worker activity, the growing number of AI-enabled cameras on our devices, in our streets, vehicles and homes, and police forces rolling out facial-recognition technology all raise anxiety that soon no corner will be safe from prying AIs.

4. Weaponization
Another common and entirely rational fear is that AI will be used to create weapons unlike anything seen before outside of science fiction. Robot dogs have been deployed in the Ukraine war for reconnaissance and logistics, and autonomous machine guns are capable of targeting enemies on a battlefield and shooting when given human authorization. Lethal autonomous AI hasn't yet been deployed as far as we know, but the fear is that this is inevitably just a matter of time. From computer-vision-equipped hunter-killer drones to AI-powered cyber attacks capable of knocking out critical infrastructure across entire regions, the possibilities are chilling.

5. Intellectual Property Theft
If you're an author, artist or other creative professional, you may be among the many who are frustrated by the fact that multinational technology companies can train their AIs on your work without paying you a penny. This has sparked widespread protest and backlash, with artists and their unions arguing that tech companies are effectively monetizing their stolen IP. Legal debate and court cases are in progress, but with the likes of OpenAI and Google throwing huge resources into their missions for more and more training data, there are legitimate fears that the rights of human creators might be overlooked.

6. Misinformation
AI enables and accelerates the spread of misinformation, making it quicker and easier to disseminate, more convincing, and harder to detect: from deepfake videos of world leaders saying or doing things that never happened, to conspiracy theories flooding social media in the form of stories and images designed to go viral and cause disruption. The aim is often to destabilize, and this is done by undermining trust in democratic institutions, scientific consensus or fact-based journalism. One very scary factor is that the algorithmic nature of AI reinforces views by serving up content that individuals are likely to agree with. This can result in them becoming trapped in 'echo chambers' and pushed towards fringe or extremist beliefs.

7. AI Will Hurt Us
Right back to Mary Shelley's Frankenstein, via 2001: A Space Odyssey, Terminator and The Matrix, cautionary tales have warned us of the potential dangers of giving our creations the power of thought. Right now, the gulf between fiction and reality still seems uncrossable; it's hard to comprehend how we would go from ChatGPT to machines intent on, or even capable of, maliciously harming us. But the threat of 'runaway AI', where AI begins developing and evolving by itself in ways that might not be aligned with our best interests, is treated very seriously. Many leading AI researchers and alliances have spoken openly about the need for safeguards and transparency to prevent unknowable circumstances from emerging in the future. While this may seem a more distant and perhaps fanciful threat than some of the others covered here, it's certainly not one that can be ignored.

Ultimately, fear alone is not a strategy. While it is vital to acknowledge and address the risks of AI, it is equally important to focus on building the safeguards, governance frameworks, and ethical guidelines that can steer this technology toward positive outcomes. By confronting these fears with informed action, we can shape a future where AI serves humanity rather than threatens it.


Forbes
Some Random Thoughts On Lawyers And The AI Revolution
In case you've somehow missed it, the Artificial Intelligence ("AI") revolution is already upon us. While it would be fun to muse upon its impact on society in general, here we are going to talk about some very practical ramifications for lawyers.

The most important thing to realize is that AI is coming whether you like it or not. Like the computing revolution, you can either get on top of the wave and ride it, or you can be trampled under its electronic feet. Ignoring AI will not make it go away, and it literally is going to change everything about the practice of law. So you should welcome our new AI Overlords, at least until we come to our own Butlerian Jihad (see Frank Herbert's Dune series), where we have to destroy all thinking machines in a final attempt to save humanity. But I digress.

The immediate benefit of AI will be to reduce many time-intensive tasks. Want to summarize a few dozen court opinions or a large deposition transcript? AI can even now do that in a few seconds. Having AI write first drafts of briefs and other legal documents is already here, though the experience of a few unfortunates who trusted AI too much provides a lesson in the need to carefully check and verify. Just give it a few years, probably less than five, and you'll be able to tell your AI assistant all the pertinent facts, and AI will thereafter take over pretty much all drafting and filing for you as if it were the best paralegal you ever had. This is not a pipe dream. It is reality, and it is just around the corner.

But the real benefit of AI is none of this. The real benefit of AI is that it will free up lawyers to do the one thing that they should be doing but for which they find little time: think about the situation.

It is true that AI will very soon be able to do this too. As AI crosses the Rubicon to where it is more intelligent than humans (which some expect as soon as 2027), the ability of AI to resolve complex legal issues will make the best and most knowledgeable attorneys look like first-year law students. There is simply no way that human attorneys can compete with a superintelligence that is able to access every legal resource electronically stored and simultaneously apply numerous complex rules in the right order to reach the most correct conclusion.

But therein lies the greatest weakness of AI as well, at least as it applies to the law. Humans are not rational. Humans are irrational. If humans were rational, then we would not need laws to begin with. In the simplest terms, we would not have laws that deal with speeding infractions because nobody would speed. It is simply human nature to push boundaries, and we have laws because some push those boundaries too far. Whether a particular defendant has pushed the boundaries too far is a question, at least for now, for human juries comprised of persons who are not themselves perfectly rational. Although relatively few legal matters ever actually make their way as far as a jury, this idea permeates all bodies of law and legal planning. Take, for example, an attorney drafting a simple will who has to determine whether the distributions will be fair enough to pass muster with a probate court presided over by a human judge. Or an attorney considering whether the design of a manufacturing facility will fall within the fair boundaries of the environmental protection laws.

Yes, AI will be able to tell attorneys where the bright lines are to a degree of accuracy never before considered, but it will not be able to anticipate what a human court or board may say about the subject. Which gets us back to the promise of AI for attorneys: it will provide those bright lines (in mere seconds!), but it will still take humans versed in the foibles of humans to make those determinations. Now attorneys will have the time to really sit back and think about all that is going on and how the matter should best be handled and finally resolved.

This will of course upend the economics of law practice. Currently, attorneys bill their clients for a lot of the stuff that AI can do in seconds; quite possibly more than 95 percent of what attorneys normally do falls into this category. All that billing will now go away, and attorneys will be spending more time just thinking the matter over and then advising their clients accordingly. Oh sure, there will still be settlement conferences and mediations and the occasional trial, but a good deal of legal matters will henceforth be solved by the attorney thinking through the situation as opposed to simply generating mounds of paperwork.

At the same time, attorneys are fearing all the wrong things about the AI revolution. The biggest fear is that clients will simply use AI and advise themselves. That has already happened, as I've been asked several times to review AI output on legal issues. The thing is that these folks will use AI about the same way that folks in the past used legal forms: they'll get the right result, but for the wrong situation. It almost always takes more billable time to advise these do-it-yourselfer folks about why their cherished chatbot output doesn't apply to their situation than it does to simply advise them about their situation in the first place. Learning to deal with these folks will be something that lawyers will need to do too. Suffice it to say that dealing with the mistakes of these folks who relied on their AI output without really understanding it will provide full employment for litigators.

Those who are at the lower end of the legal totem pole are at the most risk, meaning legal secretaries, paralegals and new attorneys. These are the persons who do so much of the routine legal work that AI will replace. Yet all new technologies also introduce new efficiencies and expanding economies that sooner or later will take back up this slack. Those who will survive in the near term will be those who embrace AI technologies and find ways to make themselves useful as managers of that technology. New attorneys in particular have a wonderful chance to get in on the ground floor of AI and become a resource for the dinosaurs who will have a hard time adjusting.

The thing about the AI revolution is that it is going to come like a giant tsunami and bring widespread and deep change faster than anything before. This is because AI development is tied less to physical things than ever before. The computer revolution came slowly because the computers themselves and the infrastructure to support them had their own serious manufacturing and implementation limits. The internet revolution came quicker but had similar limitations, including a shortage of computer programmers. The AI revolution has little of that, particularly since AI doesn't require much new hardware and the AI algorithms program themselves.

This rapidity of implementation means that attorneys cannot wait to see how things might develop but must start preparing, or at least learning about AI, now. Interesting times ahead, at least until the Butlerian Jihad anyway.


New York Times
Trump's Plans for A.I. Might Hit a Wall. Thank Europe.
President Trump wants to unleash American A.I. companies on the world. For the United States to win the unfolding A.I. arms race, his logic goes, tech companies should be unfettered by regulations and free to develop artificial intelligence technology as they generally see fit. He is convinced that the benefits of American supremacy in this technology outweigh the risks of ungoverned A.I., which experts warn could include heightened surveillance, disinformation or even an existential threat to humanity. This conviction is at the heart of the administration's recently unveiled A.I. Action Plan, which looks to roll back red tape and onerous regulations that it says paralyze A.I. development.

But Mr. Trump can't single-handedly protect American A.I. companies from regulation. Washington may be able to eliminate the rules of the road at home, but it can't do so for the rest of the world. If American companies want to operate in international markets, they must follow the rules of those markets. That means that the European Union, an enormous market that is committed to regulating A.I., could well thwart Mr. Trump's techno-optimist vision of a world dominated by self-regulated, free-market U.S. companies.

In the past, the E.U.'s digital regulations have resonated well beyond the continent, with technology companies extending those rules across their global operations in a phenomenon I have termed the Brussels Effect. Companies like Apple and Microsoft now broadly use the E.U.'s General Data Protection Regulation, which gives users more control over their data, as their global privacy standard, in part because it is too costly and cumbersome for them to follow different privacy policies in each market. Other governments also often look to E.U. rules when drafting their own laws regulating the tech sector.

The same phenomenon could at least partly hold for A.I. technology. Over the past decade, the E.U. has put in place a number of regulations aimed at balancing A.I. innovation, transparency and accountability. Most important is the A.I. Act, the world's first comprehensive and binding artificial intelligence law, which entered into force in August 2024. The act establishes guardrails against the possible risks of artificial intelligence, such as the loss of privacy, discrimination, disinformation and A.I. systems that could endanger human life if left unchecked. This law, for instance, restricts the use of facial recognition technology for surveillance and limits the use of potentially biased artificial intelligence for hiring or credit decisions. American developers looking to get access to the European market will have to comply with these rules and others.

Some companies are already pushing back. Meta has accused the E.U. of overreach and even sought the Trump administration's help in opposing Europe's regulatory ambitions. But other companies, such as OpenAI, Google and Microsoft, are signing on to Europe's A.I. code of practice. These tech giants see an opportunity: Playing nice with the European Union could help build trust among users, pre-empt other regulatory challenges and streamline their policies around the world. Individual American states looking to govern A.I., too, could use E.U. rules as a template when writing their own bills, as California did when developing its privacy laws.

By holding its ground, Europe can steer global A.I. development toward models that protect fundamental rights, ensure fairness and don't undermine democracy. Standing firm would also boost Europe's tech sector by creating fairer competition between foreign and European A.I. firms, which have to abide by E.U. laws.