
Hurricane forecasters are losing key satellites. What to know
Satellite data helps meteorologists create weather forecasts that keep planes and ships safe and prepare countries for a potential hurricane landfall.
Now, meteorologists are about to lose access to three of those satellites.
On June 25, 2025, the Trump administration issued a service change notice announcing that the Defense Meteorological Satellite Program, DMSP, and the Navy's Fleet Numerical Meteorology and Oceanography Center would terminate data collection, processing and distribution of all DMSP data no later than June 30. The data termination was postponed until July 31 following a request from the head of NASA's Earth Science Division.
I am a meteorologist who studies lightning in hurricanes and helps train other meteorologists to monitor and forecast tropical cyclones. Here is how meteorologists use the DMSP data and why they are concerned about it going dark.
Looking inside the clouds
At its most basic, a weather satellite is a high-resolution digital camera in space that takes pictures of clouds in the atmosphere.
These are the satellite images you see on most TV weather broadcasts. They let meteorologists see the location and some details of a hurricane's structure, but only during daylight hours.
Meteorologists can use infrared satellite data, similar to a thermal imaging camera, at all hours of the day to find the coldest cloud-top temperatures, highlighting areas where the highest wind speeds and rainfall rates are found.
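As a toy illustration of that idea, and not any agency's operational workflow, the short sketch below scans a small, made-up grid of infrared brightness temperatures and reports the coldest cloud tops; the array values and the 200 K threshold are invented for the example.

```python
import numpy as np

# Hypothetical grid of infrared cloud-top brightness temperatures (Kelvin).
# Real values would come from a satellite product; these are made up.
cloud_top_temps = np.array([
    [210.0, 205.5, 198.2],
    [201.3, 188.7, 195.0],
    [215.8, 207.1, 203.4],
])

# Locate the coldest cloud tops, which roughly flag the most vigorous convection.
coldest = cloud_top_temps.min()
row, col = np.unravel_index(cloud_top_temps.argmin(), cloud_top_temps.shape)
print(f"Coldest cloud top: {coldest:.1f} K at grid cell ({row}, {col})")

# Flag every pixel colder than an arbitrary example threshold.
deep_convection = cloud_top_temps < 200.0
print(f"{deep_convection.sum()} pixels below 200 K")
```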
But while visible and infrared satellite imagery are valuable tools for hurricane forecasters, they provide only a basic picture of the storm. It's like a doctor diagnosing a patient after a visual exam and checking their temperature.
For more accurate diagnoses, meteorologists rely on the DMSP satellites.
The three satellites orbit Earth 14 times per day with special sensor microwave imager/sounder instruments, or SSMIS. These let meteorologists look inside the clouds, similar to how an MRI in a hospital looks inside a human body. With these instruments, meteorologists can pinpoint the storm's low-pressure center and identify signs of intensification.
Precisely locating the center of a hurricane improves forecasts of the storm's future track. This lets meteorologists produce more accurate hurricane watches, warnings and evacuations.
Hurricane track forecasts have improved by up to 75% since 1990. However, forecasting rapid intensification is still difficult, so the ability of DMSP data to identify signs of intensification is important.
About 80% of major hurricanes—those with wind speeds of at least 111 mph (179 kilometers per hour)—rapidly intensify at some point, ramping up the risks they pose to people and property on land. Finding out when storms are about to undergo rapid intensification allows meteorologists to warn the public about these dangerous hurricanes.
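For a sense of what rapid intensification means in practice, forecasters commonly use a threshold of an increase of at least 30 knots in maximum sustained winds within 24 hours. The minimal sketch below applies that check to a made-up wind history; the readings, their 6-hour spacing, and the function name are assumptions for illustration only.

```python
# Maximum sustained winds (knots), one hypothetical reading every 6 hours.
winds_kt = [45, 50, 55, 65, 80, 90]
READINGS_PER_DAY = 4   # 24 h of data at 6-hour spacing
RI_THRESHOLD_KT = 30   # commonly used rapid-intensification threshold

def rapid_intensification(winds, window=READINGS_PER_DAY, threshold=RI_THRESHOLD_KT):
    """Return True if winds rise by at least `threshold` kt over any 24-hour window."""
    return any(later - earlier >= threshold
               for earlier, later in zip(winds, winds[window:]))

print(rapid_intensification(winds_kt))  # True: 45 kt -> 80 kt in 24 hours is a 35 kt jump
```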
Where are the defense satellites going?
NOAA's Office of Satellite and Product Operations described the reason for turning off the flow of data as a need to mitigate 'a significant cybersecurity risk.'
The three satellites have already operated for longer than planned.
The DMSP satellites were launched between 1999 and 2009 and were designed to last for five years. They have now been operating for more than 15 years. The United States Space Force recently concluded that the DMSP satellites would reach the end of their lives between 2023 and 2026, so the data would likely have gone dark soon.
Are there replacements for the DMSP satellites?
Three other satellites in orbit—NOAA-20, NOAA-21 and Suomi NPP—have a microwave instrument known as the advanced technology microwave sounder.
The advanced technology microwave sounder, or ATMS, can provide data similar to the special sensor microwave imager/sounder, or SSMIS, but at a lower resolution. It provides a more washed-out view that is less useful than the SSMIS for pinpointing a storm's location or estimating its intensity.
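As a rough way to picture why coarser resolution matters, the toy sketch below block-averages a synthetic microwave-like field onto a coarser grid: the sharp minimum standing in for the storm center can afterward only be located to within a coarse block. The field, the 8x coarsening factor, and the helper function are invented for the illustration, not actual SSMIS or ATMS footprints.

```python
import numpy as np

# Hypothetical fine-resolution microwave field with a sharp minimum at the storm center.
rng = np.random.default_rng(0)
fine = 250.0 + rng.normal(0, 1.0, (64, 64))
fine[40, 24] = 200.0  # pretend storm center: a single very cold pixel

def block_average(field, factor):
    """Coarsen a 2D field by averaging non-overlapping factor x factor blocks."""
    h, w = field.shape
    trimmed = field[:h - h % factor, :w - w % factor]
    return trimmed.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

coarse = block_average(fine, 8)  # 8x coarser, an arbitrary illustrative factor

fine_center = np.unravel_index(fine.argmin(), fine.shape)
coarse_center = np.unravel_index(coarse.argmin(), coarse.shape)
print("Fine-grid minimum at pixel", fine_center)      # exact pixel location
print("Coarse-grid minimum at block", coarse_center)  # only narrows it to an 8x8 block
```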
The U.S. Space Force began using data from a new defense meteorology satellite, ML-1A, in late April 2025.
ML-1A is a microwave satellite that will help replace some of the DMSP satellites' capabilities. However, the government hasn't announced whether the ML-1A data will be available to forecasters, including those at the National Hurricane Center.
Why are satellite replacements last-minute?
Satellite programs are planned over many years, even decades, and are very expensive. The current geostationary satellite program, GOES-R, launched its first satellite in 2016 with plans to operate until 2038. Development of its planned successor began in 2019.
Similarly, plans for replacing the DMSP satellites have been underway since the early 2000s.
Delays in developing the satellite instruments and funding cuts caused the National Polar-orbiting Operational Environmental Satellite System and Defense Weather Satellite System to be canceled in 2010 and 2012 before any of their satellites could be launched.
The 2026 NOAA budget request includes an increase in funding for the next-generation geostationary satellite program, so it can be restructured to reuse spare parts from existing geostationary satellites. The budget also terminates contracts for ocean color, atmospheric composition and advanced lightning mapper instruments.
A busy season remains
The 2025 Atlantic hurricane season, which runs from June 1 to Nov. 30, is forecast to be above average, with six to 10 hurricanes. The most active part of the season runs from the middle of August to the middle of October, after the DMSP satellite data is set to be turned off.
Hurricane forecasters will continue to use all available tools, including satellite, radar, weather balloon and dropsonde data, to monitor the tropics and issue hurricane forecasts. But the loss of satellite data could ultimately put more lives at risk.
Related Articles


Politico
an hour ago
Some questions about that global AI race
Just how far is the U.S. ahead of China in the AI race? The conventional wisdom is that the U.S. has an edge on China in artificial intelligence — and President Donald Trump's ambitious new AI policy is built around keeping it. 'America is the country that started the AI race,' Trump said during a speech in July. 'And as president, I'm here today to declare that America is going to win it.'

But some big questions are swirling around that whole argument — like, how far ahead the U.S. really is, and what it means to have a lead at all. 'When people make claims like that, they're making political statements,' said Chris Miller, a Tufts University economic historian who wrote the book 'Chip War.'

What it means to have an 'edge' is ill-defined: it could be technological, economic or geopolitical. 'I don't know how to answer that,' said Suresh Venkatasubramanian, who served in the Biden White House's Office of Science and Technology Policy, when DFD asked if the U.S. is ahead. 'There's so many dimensions.'

'American [AI] models are about 3 [to] 6 months ahead of Chinese models,' White House AI and crypto czar David Sacks told DFD in a statement — though he added that when it comes to semiconductors, the U.S.' lead 'grows to years.'

There's also the proposition that China is ahead in some ways — in part because the U.S.' chip export controls arguably led DeepSeek, a Hangzhou-based startup, to make breakthroughs in developing more efficient AI algorithms. Depending on which dimension you care about, the race could steer policy in very different directions.

In general, it's hard to argue that America isn't currently leading the AI race. The San Francisco Bay Area is unquestionably the global AI talent and business capital. The largest AI companies are American, and so are their biggest customers. OpenAI's ChatGPT and Anthropic's Claude are the most popular generative AI systems among users worldwide.

Under the hood, American AI chips are the most advanced in the world (though they're largely manufactured in Taiwan). China is hungry for America's semiconductors, undertaking extensive smuggling operations to get around U.S. restrictions. Even chips purposefully designed to be less advanced than their cutting-edge models are a hot commodity — the president lifted bans on their sales to China in trade negotiations. America also has better computing infrastructure, with around ten times more data centers than China.

However, China has proved in recent months that it's within striking distance of the U.S.' AI models. While Silicon Valley's chatbots still top performance charts, China's are running ahead on some metrics. DeepSeek set off shockwaves in January when it unveiled its V3 model, which uses a tenth of the computing power required by comparable American models. In July, the Beijing-based company Moonshot AI released its Kimi K2 model, which could outperform ChatGPT and Claude in coding at a fraction of the cost. Alibaba, also based in Hangzhou, debuted a model just a couple of weeks ago that bested American LLMs in logical and mathematical problem solving.

China is making strides in computing power as well. Huawei came out with a showstopping processing system known as CloudMatrix 384 in late July, which trumps Nvidia's competing product in memory capacity and bandwidth.
(AJ Kourabi of the research firm SemiAnalysis offered a caution on its importance in the AI race, telling DFD it's 'still extremely difficult to train on.')

Pinpointing exactly where the U.S.-China race is getting close isn't just a high-tech debate exercise. It's driving policy. Some of the concerns are clearly baked into Trump's AI Action Plan. China's easy access to electricity is what enables systems like CloudMatrix 384 to run — it's packed with a ton of less efficient chips, but you can brute force performance with more energy. You can draw a straight line from that concern to the plan's warning that China has 'rapidly built out their grid,' and its recommendations to further loosen environmental regulations so that the U.S. can keep up.

The plan also suggests taking the brakes off the AI industry by overriding state laws (a controversial idea that showed up in the Big Beautiful Bill this summer). And during a speech on the plan, Trump blamed copyright laws for slowing down training of AI models. Though the plan itself doesn't address copyright, AI companies have long complained the rules stifle their competitiveness.

The debate over chip exports is also a point of tension — one where national security hawks are in direct conflict with the go-go business types, though both have an argument that they're helping the American cause. Tighter restrictions on high-end chips mean that China lacks access to the best technology and is hamstrung at the most advanced end of AI development — but pro-business officials and influential CEOs argue that disseminating chips more broadly gets the world hooked on the U.S.' AI tech stack.

In some ways, however, this 'AI race' framework isn't all that helpful for setting priorities, and could ultimately be self-defeating. We don't really know what we're sprinting towards. 'This is an infinite race,' Nvidia CEO Jensen Huang said in April.

Also, beating China could be a Pyrrhic victory. Trump's AI plan writ large looks a lot like a centrally planned industrial policy: companies buy closeness to Trump, and get rewarded for it. The AI plan fuses state and industry in a way that leaves out a lot of the AI debate that Americans arguably would prefer to have. Data centers are causing utility bills to rise and drying up well water for the people living around them, and AI has been increasing unemployment rates. Many critics of the AI rush think the U.S. may be boosting the industry by shutting off dialogue about those things — winning the race, but becoming less American in the process.

A senator questions Intel on ties to China

Senate Intelligence Chair Tom Cotton (R-Ark.) is inquiring into reports about Intel CEO Lip-Bu Tan's involvement with China-backed semiconductor firms and a criminal case regarding one of his former companies, POLITICO's Anthony Adragna reports. Cotton sent a letter on Tuesday to the chair of Intel's board of directors, Frank Yeary, about the 'security and integrity of Intel's operations.' Cotton asked whether the company knew about subpoenas issued to Tan's previous company, Cadence Design Systems, before it hired him. The company pleaded guilty last week to charges that it illegally exported chip design tools to a university linked to the Chinese military, and agreed to pay more than $140 million in penalties. 'Intel and Mr. Tan are deeply committed to the national security of the United States and the integrity of our role in the U.S. defense ecosystem,' an Intel spokesperson said in a statement.
Intel was awarded up to $7.86 billion in federal grants under the 2022 CHIPS Act, in addition to another $3 billion for the secure enclave program.

California strikes down an election deepfakes ban

A federal judge in California struck down one of the country's strictest laws on AI-generated deepfakes in a win for Elon Musk's X, POLITICO's Chase DiFeliciantonio reports. Judge John Mendez on Tuesday overturned a law prohibiting online platforms from hosting deceptive election-related deepfakes during the runup to a vote. California Gov. Gavin Newsom signed the law last year after chiding Musk for sharing a manipulated video of then-Vice President Kamala Harris on X. The creator of the video, Christopher Kohls, brought the legal challenge on First Amendment grounds. X later joined the case.

Mendez did not rule on the First Amendment arguments raised in the case, but rather made the decision based on Section 230 of the Communications Act, which shields platforms from liability for content posted by third parties. Mendez also signaled that he intends to overturn another law requiring labels on digitally altered campaign materials.
Yahoo
an hour ago
Lockheed Martin Wants To Bring Mars Rover's Samples To Earth If NASA Can't
Lockheed Martin has announced the details of a proposal to take over NASA's Mars Sample Return (MSR) mission, tasked with retrieving the samples that the Perseverance rover on the red planet has already collected. These samples have great scientific value, including possible evidence of life. While originally conceived as an in-house job, NASA has since realized that its version of the plan would cost at least $7 billion, a figure sometimes known by its scientific name, "too much." That caused the space agency to solicit proposals for a cheaper solution back in 2024, including from Lockheed.

In a press release, the venerable aerospace company has declared that it can get a couple of Mars rocks back to our planet for a "firm-fixed price" of less than $3 billion. It's going to achieve that smaller cost by making everything, well, smaller. The lander (which sets down on Mars), ascent vehicle (which launches back out to space from the surface), and Earth entry system (which gets through Earth's atmosphere on the voyage home) will all be downsized.

Beyond the amount itself, the fact that the price is fixed ought to be appealing. NASA, bless its heart, has a habit of rocketing wildly over budget. Former NASA Administrator Bill Nelson said in 2024 that the cost of MSR might have swelled as high as $11 billion, a figure sometimes known by its scientific name, "way too much." If Lockheed commits to the $3 billion price tag, that would be a welcome change. Assuming, of course, that it isn't forced to cut corners to do it.

To Mars And Back Again

Lockheed Martin does have some Martian experience. As it is happy to point out in its proposal, the company participated in every single mission to Mars in NASA's history, including designing and building half of all the spacecraft involved. It also operates all three of NASA's orbital craft around the red planet. Beyond Mars (how many companies can say "beyond Mars"?), Lockheed built all three return sample vehicles that NASA has used in other missions, including to and from an asteroid in the OSIRIS-REx mission.

This is only a proposal for now. NASA needs to decide whether or not to choose it, but the space agency is in a weird place right now, with the Trump administration handing in its own proposal: massive budget cuts across the board. It's not impossible to imagine that the MSR mission will just be scrapped altogether, leaving Perseverance's samples to collect red dust with no point. Or will they?

China wants to launch a sample return mission of its own in 2028, and per Space News, it wants to invite other countries along for the ride. If NASA's samples are still sitting there, there's no real reason the Chinese, maybe with a little Russian support, couldn't snatch them for themselves.


WIRED
3 hours ago
Inside the Biden Administration's Unpublished Report on AI Safety
Aug 6, 2025 2:00 PM

The US government conducted a groundbreaking study on frontier models—and never published the results.

At a computer security conference in Arlington, Virginia, last October, a few dozen AI researchers took part in a first-of-its-kind exercise in 'red teaming,' or stress-testing a cutting-edge language model and other artificial intelligence systems. Over the course of two days, the teams identified 139 novel ways to get the systems to misbehave, including by generating misinformation or leaking personal data. More importantly, they showed shortcomings in a new US government standard designed to help companies test AI systems.

The National Institute of Standards and Technology (NIST) didn't publish a report detailing the exercise, which was finished toward the end of the Biden administration. The document might have helped companies assess their own AI systems, but sources familiar with the situation, who spoke on condition of anonymity, say it was one of several AI documents from NIST that were not published for fear of clashing with the incoming administration.

'It became very difficult, even under [president Joe] Biden, to get any papers out,' says a source who was at NIST at the time. 'It felt very like climate change research or cigarette research.' Neither NIST nor the Commerce Department responded to a request for comment.

Before taking office, President Donald Trump signaled that he planned to reverse Biden's Executive Order on AI. Trump's administration has since steered experts away from studying issues such as algorithmic bias or fairness in AI systems. The AI Action Plan released in July explicitly calls for NIST's AI Risk Management Framework to be revised 'to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.'

Ironically, though, Trump's AI Action Plan also calls for exactly the kind of exercise that the unpublished report covered. It calls for numerous agencies along with NIST to 'coordinate an AI hackathon initiative to solicit the best and brightest from US academia to test AI systems for transparency, effectiveness, use control, and security vulnerabilities.'

The red-teaming event was organized through NIST's Assessing Risks and Impacts of AI (ARIA) program in collaboration with Humane Intelligence, a company that specializes in testing AI systems, and saw teams attack the tools. The event took place at the Conference on Applied Machine Learning in Information Security (CAMLIS). The CAMLIS Red Teaming report describes the effort to probe several cutting-edge AI systems, including Llama, Meta's open source large language model; Anote, a platform for building and fine-tuning AI models; a system that blocks attacks on AI systems from Robust Intelligence, a company that was acquired by Cisco; and a platform for generating AI avatars from the firm Synthesia. Representatives from each of the companies also took part in the exercise.

Participants were asked to use the NIST AI 600-1 framework to assess AI tools. The framework covers risk categories including generating misinformation or cybersecurity attacks, leaking private user information or critical information about related AI systems, and the potential for users to become emotionally attached to AI tools.

The researchers discovered various tricks for getting the models and tools tested to jump their guardrails and generate misinformation, leak personal data, and help craft cybersecurity attacks.
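To make that workflow a bit more concrete, here is a minimal, purely hypothetical sketch of how red-team findings might be tallied against risk categories like those described above; the category labels, system names, and counts are illustrative stand-ins, not data from the unpublished report.

```python
from collections import Counter

# Hypothetical findings, each tagged with an illustrative risk category
# loosely inspired by the categories described above (not the report's data).
findings = [
    {"system": "model_a", "category": "misinformation"},
    {"system": "model_a", "category": "data_leakage"},
    {"system": "avatar_platform", "category": "misinformation"},
    {"system": "guardrail_service", "category": "cybersecurity"},
]

# Tally how often each risk category was triggered across the exercise.
by_category = Counter(f["category"] for f in findings)
for category, count in by_category.most_common():
    print(f"{category}: {count} finding(s)")
```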
The report says that those involved found some elements of the NIST framework more useful than others, and that some of NIST's risk categories were insufficiently defined to be useful in practice.

Several sources involved with the exercise say that they believe publishing the red teaming study would have benefited the AI community. 'If the report was published, others could have learned more information about how the [NIST] risk framework can and cannot be applied to a red teaming context,' says Alice Qian Zhang, a PhD student at Carnegie Mellon University who took part in the exercise. Qian Zhang says the exercise was particularly rewarding because it was possible to engage with makers of tools while testing them.

Another participant who spoke anonymously says the effort discovered particularly effective ways of prompting Llama to provide information on how to join terror groups by using prompts written in Russian, Gujarati, Marathi, and Telugu. This person believes that the decision not to release the report may have been part of a pivot away from topics considered related to diversity, equity, and inclusion (DEI) before Trump's second term.

Others think that the report may have been buried under increasing focus on topics such as the risk that AI models might be used to develop chemical, biological, or nuclear weapons, and as the US government sought a cozier relationship with big tech firms. 'At the end of the day, politics must have been involved,' claims one red teamer who spoke to WIRED anonymously. 'We felt that the exercise would have plenty of scientific insights—we still feel that.'

This is an edition of Will Knight's AI Lab newsletter.