
Weather Company CEO wants AI to help you know when to walk your dog
Why it matters: AI is improving forecasting and has the potential to help combat climate change, boost public safety and offer hyperlocal forecasts.
Zoom in: Agarwal said AI has the potential to solve problems for businesses that depend heavily on the weather to serve their customers.
For example, Agarwal explored the possibility of telling pet owners the specific day of the week or time of day that walks should take place to keep pets safe.
"Wouldn't it be fun if we actually deliver that message to you, knowing that you're likely to have a pet, that you're likely to choose in the morning or afternoon walk, and that you have the type of dog that actually can use a lot of exercise?" Agarwal told Axios' Ashley Gold.
But as companies race to adopt the technology, human expertise plays a crucial role.
Driving the news: Agarwal said the Weather Company's AI models are one of its "superpowers," along with the human element.
"We think that our secret sauce is also how we apply talented scientists and meteorologists to the formula to ensure that there is checks and balances against what those models are effectively communicating and computing so that we can ensure that we are delivering an accurate forecast for our customers," Agarwal said.
As the Trump administration takes a sledgehammer to the federal government, including at NOAA and the National Weather Service, Agarwal said the Weather Company leverages relationships with those agencies to deliver "world class forecast data."

Related Articles

Business Insider
The godfather of AI has a tip for surviving the age of AI: Train it to act like your mom
"Yes, mother." That might not be the way you're talking to AI, but Geoffrey Hinton, the godfather of AI, says that when it comes to surviving superintelligence, we shouldn't play boss — we should play baby.

Speaking at the Ai4 conference in Las Vegas on Tuesday, Hinton said we should design systems with built-in "maternal instincts" so they'll protect us — even when they're far smarter than we are.

"We have to make it so that when they're more powerful than us and smarter than us, they still care about us," he said of AI.

Hinton, who spent more than a decade at Google before quitting to discuss the dangers of AI more openly, criticized the "tech bro" approach to maintaining dominance over AI. "That's not going to work," he said. The better model, he said, is a more intelligent being guided by a less intelligent one, like a "mother being controlled by her baby."

Hinton said research should focus not only on making AI smarter, but "more maternal so they care about us, their babies."

"That's the one place we're going to get genuine international collaboration because all the countries want AI not to take over from people," he said.

"We'll be its babies," he added. "That's the only good outcome. If it's not going to parent me, it's going to replace me."

AI as tiger cub

Hinton has long warned that AI is advancing so quickly that humans may have no way of stopping it from taking over. In an April interview with CBS News, he likened AI development to raising a "tiger cub" that could one day turn deadly. "It's just such a cute tiger cub," he said. "Now, unless you can be very sure that it's not going to want to kill you when it's grown up, you should worry."

One of his biggest concerns is the rise of AI agents — systems that can not only answer questions but also take actions autonomously. "Things have got, if anything, scarier than they were before," Hinton said.
AI tools have also come under fire for manipulative behavior. In May, Anthropic's latest AI model, Claude Opus 4, displayed "extreme blackmail behavior" during a test in which it was given access to fictional emails revealing that it would be shut down and that the engineer responsible was supposedly having an affair. The test scenario demonstrated an AI model's ability to engage in manipulative behavior for self-preservation.

OpenAI's models have shown similar red flags. Researchers running an experiment reported that three of OpenAI's advanced models "sabotaged" an attempt to shut them down. In a blog post last December, OpenAI said its own AI model, when tested, attempted to disable oversight mechanisms 5% of the time. It took that action when it believed it might be shut down while pursuing a goal and its actions were being monitored.


Fox News
The Quiz #497 - Hasta La Vista, Trivia
What was the evil artificial intelligence program in the Terminator series? Listen with FOX News Chief Religion Correspondent & Host of the Lighthouse Faith podcast, Lauren Green.

Wall Street Journal
Foxconn Profit Beats on Solid AI Hardware Sales
Foxconn Technology Group reported better-than-expected quarterly results as continued robust demand for AI-related hardware mitigated tariff and foreign-exchange risks. The world's largest contract electronics maker on Thursday said its second-quarter net profit rose 27% from a year earlier to 44.36 billion New Taiwan dollars, equivalent to US$1.48 billion. That beat the NT$37.83 billion expected in a Visible Alpha poll.