Google's Newest AI Model Acts like a Satellite to Track Climate Change

WIRED · 4 days ago
AlphaEarth Foundations is a chip off the Google DeepMind block—and it's here to help save the world. Images: Alpha Earth Foundation
Google's newest AI model is going to scour the Earth and, ideally, help it out. That's the plan, anyway. The mission is to find out once and for all, in fine detail, what we are doing to our planet. Crucially, once the model has supposedly done this, it will also, apparently, point to where we can best intervene to help our world.
AlphaEarth Foundations, a new model from Google DeepMind, aims to leverage machine learning and the gobs and gobs of data Google has absorbed about our planet over the last two decades to understand how specific areas are changing over time.
The model uses a technique called 'embeddings' that takes the terabytes of data satellites collect every day, analyzes it, and compresses it into compact numerical summaries that need far less storage. The result is a set of filters overlaid on maps, color coded to indicate material properties, vegetation types, groundwater sources, and human constructions such as buildings and farms. Google says the system will act as a sort of 'virtual satellite,' letting users call up detailed information about any given spot on the planet on demand.
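To make the embeddings idea concrete, here is a minimal sketch, not Google's actual pipeline: each mapped location gets a small numerical vector, and comparing two years' vectors for the same spot gives a quick change signal. The 64-dimension size, the simulated data, and the helper function are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors (1.0 means unchanged)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 64-dimensional embeddings for the same pixel in two years.
rng = np.random.default_rng(0)
pixel_2017 = rng.normal(size=64)
pixel_2024 = pixel_2017 + rng.normal(scale=0.3, size=64)  # simulated drift

# Higher score = more landscape change between the two snapshots.
change_score = 1.0 - cosine_similarity(pixel_2017, pixel_2024)
print(f"change score: {change_score:.3f}")
```

The appeal of this representation is that the heavy lifting happens once, at compression time; afterward, comparing any two places or years is just cheap vector arithmetic.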
The goal, Google says, is for users of the service to better understand how specific ecosystems on the planet work, including how air quality, sunlight, groundwater, and even human construction projects vary and change across a landscape. Ultimately, the company wants the model to answer questions from paying governments and corporations that want to know, for example, which ecosystems get more sunlight or have more groundwater, which can help determine the best spots to grow a certain crop. Alternatively, it may help identify areas where solar panels would pay off most, or where structures could be built in more climate-resilient locations.
Google's new model has already mapped a complex surface in Antarctica—an area notoriously difficult to capture due to irregular satellite imaging—in clear detail. It has also supposedly outlined variations in Canadian agricultural land use that are invisible to the naked eye.
Google's new model assigns colors to AlphaEarth Foundations' embedding fields. In Ecuador, the model sees through persistent cloud cover to detail agricultural plots in various stages of development. Photograph: Alpha Earth Foundation
Chris Brown, a research engineer at Google DeepMind, says that historically there have been two main obstacles to making reliable information about the planet more accessible: being overloaded with too much data, and that data being inconsistent. 'Before, the challenge was getting a look at it all,' Brown said in a press briefing. 'Now, the challenge is to unify all the ways that we have to observe and model our planet and get a complete picture.'
Google, of course, has been at this for a while. While AlphaEarth isn't a broader consumer-facing application, Google Earth has had a similar Timelapse feature since 2021 that shows how global geography has changed over decades—largely due to climate change. Google has also gotten into the game of putting more specialized satellites into orbit, such as ones designed to spot wildfires from space.
The models aren't perfect. Google, in its frenzied push to build robust AI models, has hit a few snags with the accuracy of its AI-generated output, most visibly when its AI Overviews in Search have gone off the rails. But sucking up petabytes of satellite images and finding the trends in them is, weirdly, a more straightforward task for AI.
Google says the model can generate accurate enough data about an ecosystem down to a resolution of 10 meters—and while it may get some things wrong, it is apparently 23.9 percent more accurate than similar AI models. (Google didn't name which ones it was talking about, but companies such as Privateer have been at this for years.)
How AlphaEarth Foundations works: by taking non-uniformly sampled frames from a video sequence to index any position in time, the model creates a continuous view of the location while outlining measurements. Video: Alpha Earth Foundation
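The caption above describes indexing any moment in time from irregularly spaced satellite observations. As a toy illustration only, and not DeepMind's published method, linear interpolation between the nearest timestamped embeddings is one simple way to query a continuous timeline; the timestamps and array shapes below are assumptions.

```python
import numpy as np

def embedding_at(query_t: float, times: np.ndarray, embs: np.ndarray) -> np.ndarray:
    """Estimate an embedding at an arbitrary time from non-uniform samples.

    times: (N,) sorted observation timestamps (e.g., fractional years)
    embs:  (N, D) one embedding vector per observation
    """
    # np.interp works on one dimension at a time, so apply it per channel.
    return np.array([np.interp(query_t, times, embs[:, d])
                     for d in range(embs.shape[1])])

# Irregularly sampled observations (cloud cover, orbit gaps, and so on).
times = np.array([2017.1, 2017.9, 2019.4, 2021.0, 2024.5])
embs = np.random.default_rng(1).normal(size=(5, 64))

print(embedding_at(2020.0, times, embs).shape)  # (64,)
```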
Google has worked with partners to test the new system, such as MapBiomas in Brazil and the Global Ecosystems Atlas, which aim to better classify undermapped ecosystems including dense rainforests, deserts, and wetlands.
'It's very difficult to compress all the information available for a piece of land in a traditional way, which takes literally hours and hours and hours of preparing,' said Tasso Azevedo, founder of MapBiomas. After working with Google to test AlphaEarth for the past 18 months or so, Azevedo says the software has made it easier to analyze great swathes of rainforest without letting the data overwhelm the team's storage capabilities. 'We were not even scratching everything that would be possible,' Azevedo says.
AlphaEarth is also being added in a more limited capacity into Google Earth Engine, the cloud-based platform that first launched in 2010 and is used for mapping by agencies and companies such as NASA, Unilever, and the Forest Service. It's worth noting here that this is separate from the more consumer-friendly Google Earth.
Google's Earth Engine has long processed and analyzed satellite data, which has been used to create interactive, high-resolution maps of deforestation across the world and to compile detailed views of bodies of water—rivers, lakes, oceans, and seas—and how they have changed over time. Now, annual snapshots of AlphaEarth's embeddings will be available as a dataset to track long-term trends, which a company rep says can be used for more advanced custom mapping by users with a 'light coding background.'
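For readers with that 'light coding background,' access would presumably look like the standard Earth Engine Python workflow sketched below. This is a sketch under assumptions: the dataset ID and band layout are illustrative rather than confirmed by the article, and running it requires an authenticated Earth Engine account.

```python
import ee

ee.Authenticate()  # one-time browser-based sign-in
ee.Initialize()    # newer client versions may also require a Cloud project

# Assumed dataset ID for the annual AlphaEarth embedding snapshots.
embeddings = ee.ImageCollection('GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL')

# Pull the 2024 snapshot and sample the embedding vector at one point.
img_2024 = embeddings.filterDate('2024-01-01', '2025-01-01').first()
point = ee.Geometry.Point([-62.2, -3.4])  # a spot in the Amazon basin

sample = img_2024.sample(region=point, scale=10).first()
print(sample.getInfo())  # dictionary of per-band embedding values
```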
No stranger to privacy concerns, Google is eager to wave off any worries people might have about this new system scouring pictures from the sky. The company says the AlphaEarth dataset cannot capture individual objects, people, or faces.
Related Articles

How AI Adoption Is Sitting With Workers

Time Magazine · an hour ago

There's a danger to focusing primarily on CEO statements about AI adoption in the workplace, warns Brian Merchant, a journalist-in-residence at the AI Now Institute, a policy and research organization. 'There's a wide gulf between the prognostications of tech company CEOs and what's actually happening on the ground,' he says. In 2023, Merchant published Blood in the Machine, a book about how the historical Luddites resisted automation during the industrial revolution. In his Substack newsletter of the same name, Merchant has written about how AI implementation is now reshaping work. To better understand workers' perspectives on how AI is changing jobs, we spoke with Merchant. Here are excerpts from our conversation, edited for length and clarity:

There have been a lot of headlines recently about how AI adoption has led to headcount reductions. How do you define the AI jobs crisis?

There is a real crisis in work right now, and AI poses a distinct kind of threat. But that threat to me, based on my understanding of technological trends in history, is less that we're looking at a widespread, mass-automation, job-wipe-out event and more that generative AI gives management and employers a particular set of logics.

There are jobs that are uniquely vulnerable. They might not be immense in number, but they're jobs that people think are pretty important—writing and artistic creation and that kind of thing. So you do have those jobs being threatened, but then we also have this crisis where AI supplies managers and bosses with this imperative where, whether or not the AI can replace somebody, it's still being pushed as a justification for doing so. We saw this a lot with DOGE and the hollowing out of the public workforce and the AI-first strategies that were touted over there.

More often than facilitating outright job replacement, automation is used by bosses to break down tasks, deskill labor, or serve as leverage against workers. This was true in the Luddites' time, and it's true right now. A lot of the companies that say they're 'AI-first' are merely taking the opportunity to reduce salaried headcount and replace it with cheaper, more precarious contract labor. This is what happened with Klarna, the fintech company that has famously been one of the most vocal advocates of AI anywhere. [Editor's note: In May, Klarna CEO Sebastian Siemiatkowski told Bloomberg that the company was reversing its well-publicized move to replace 700 human call-center workers with AI and was instead hiring humans again. 'As cost unfortunately seems to have been a too predominant evaluation factor when organizing this, what you end up having is lower quality,' Siemiatkowski said.] After all, firms still need people to ensure the AI output is up to par, edit it, or 'duct tape it' to make sure it works well enough with existing systems—bosses just figure they can take the opportunity to call that 'unskilled' work and pay the people who are doing it less.

Your project, 'AI Killed My Job,' is an ongoing, multi-part series that dives deeper into how the AI jobs crisis is impacting workers day-to-day. What themes or patterns are emerging from those stories?

I invited workers who have been impacted by AI to reach out and share their stories. The project has just begun, and I've already gotten hundreds of responses at this point.

I expected to see AI being used as a tool by management to try to extract more labor and more value from people, to get people to work harder, and to have it deteriorate conditions rather than replace work outright. That's been borne out, and that's what I've seen.

The first installment that I ran was around tech workers. Some people assume the tech industry is fairly homogeneous in its enthusiasm for AI, but that's really not the case. A lot of the workers who have to deal with these tools are not happy with AI, the way it is being used in their companies, and the impact it's having on their work. A few people [included in the first installment] have lost their jobs in layoffs initiated by a company with an AI-first strategy, including at CrowdStrike and Dropbox, and I'm hearing from many people who haven't lost their jobs yet but are increasingly concerned that they will.

But, by and large, what you're seeing now is managers using AI to justify speeding up work, trying to get employees to use it to be more productive at the expense of quality or the things that people used to enjoy about their jobs. There are people who are frustrated to see management encouraging the use of more AI at the expense of security or product quality. There's a story from a Google worker who watched colleagues feed AI-generated code into key infrastructure, which was pretty unsettling to many. That such an important and powerful company, one that runs such crucial web infrastructure, would allow AI-generated code into its systems with relatively few safeguards was really surprising. [Editor's note: A Google spokesperson said that the company actively encourages AI use internally, with roughly 30% of the company's code now being AI-generated. They cited CEO Sundar Pichai's estimate that AI has increased engineering velocity by 10% but said that engineers have rigorous code review, security, and maintenance standards.]

We're also seeing it used to displace accountability, with managers using AI as a way to deflect blame should something go wrong: 'It's not my fault; it's AI's fault.'

Your book, Blood in the Machine, tells the story of the historical Luddites' uprising against rising automation during the industrial revolution. What can we learn from that era that's still relevant today?

One lesson we can learn from the Luddites is that we should be seeking ways to involve more people and stakeholders in the process of developing and deploying technology. The Luddites were not anti-technology. They rose up and smashed the machines because they had no other choice. The deck was stacked against them, and a lot of them were quite literally starving. Collective bargaining was illegal for them. And, just like today, conditions were increasingly difficult as the democratic levers that people could pull to demand a seat at the table were vanishingly few. (I mean, Silicon Valley just teamed up with the GOP to try to get an outright 10-year ban passed on states' ability to regulate AI.) That leads to strife, it leads to anger, it leads to feeling like you don't have a say or any options. Now we're looking at artists and writers and content creators and coders and you name it watching their livelihoods become more precarious, with worsening conditions, if not getting erased outright.

As you squeeze more and more populations of people, it's not unthinkable that you would see what happened then happen again in some capacity. You're already seeing the roots of that with people vandalizing Waymo cars, which they see as the agents of big tech and automation. That's a reason employers might want to consider the human element rather than putting the pedal to the metal on AI automation: there's a lot of fear, anxiety, and anger at the way all of this has taken shape and is playing out.

What should employers do instead?

When it comes to employers, at the end of the day, if you're shelling out for a bunch of AI, then you're either hoping that your employees will use it to be more productive and work harder for you, or you're hoping to get rid of employees. Ideally, the employer would say it's the former. It would trust its employees to know how best to generate more value and let the tools make them more productive. In reality, even if a company goes that far, it can still turn around and trim labor costs elsewhere, mandating that workers use AI to pick up laid-off colleagues' workloads and ratchet up productivity.

So what you really need is a union contract, or something codified in law, that says you can't just fire people and replace them with AI. You see some union contracts that include language about the ways AI or automation can and can't be implemented, and what workers have a say over. Right now, that is the best means of giving people power over a technology that's going to affect their working lives. The problem is that union density in the United States is so low that only those who are formally organized can enjoy such a benefit. There are also attempts at legislation that put checks on what automation can and can't touch, when AI can be used in the hiring process, or what kinds of data it can collect. Overall, there has to be a serious check on the power of Silicon Valley before we can hope to get workers' voices heard on how the technology is affecting them.

AI's Overlooked $97 Billion Contribution to the Economy

Wall Street Journal · an hour ago

The U.S. economy grew at an annual rate of 3% in the second quarter, which is great news. Does that mean artificial intelligence is delivering on its long-promised benefits? No, because gross domestic product isn't the best place to look for AI's contribution. Yet the official government numbers substantially underestimate the benefits of AI. First-quarter 2025 GDP was down an annualized 0.5%. Labor productivity growth ticked up a respectable but hardly transformative 2.3% in 2024, following a few lean years of gains and losses. Is AI overhyped?
