Latest news with #ProjectZero


News18
a day ago
- Business
- News18
The New-Gen Threat: Google Says Its New AI Agent Has Stopped A Major Cyberattack
Google's security division has developed Big Sleep, an AI agent that detects and stops major cyberattacks. AI tools are making hackers more capable, helping them build powerful malware that can be hard to detect. However, AI is also enabling companies to learn about these attacks before they wreak havoc. Google says its new AI agent stopped a major cyberattack by analysing its footprints and blocking its path before it could cause damage. As Google CEO Sundar Pichai said in a post on X, the agent, called Big Sleep, was deployed to detect and stop cyberattacks from affecting billions of users, and it appears to have done just that recently. AI agents are making life easier for people, allowing them to delegate tasks and work effectively; it seems these agents are now doing much more than that.

"New from our security teams: Our AI agent Big Sleep helped us detect and foil an imminent exploit. We believe this is a first for an AI agent – definitely not the last – giving cybersecurity defenders new tools to stop threats before they're widespread." — Sundar Pichai (@sundarpichai) July 15, 2025

Google announced Big Sleep last year; it was developed by its DeepMind and Project Zero teams. The agent has been trained to look for active security issues that are unknown to the industry and to researchers. The company says that in November 2024 it identified its first real-world security vulnerability and prevented it from becoming a bigger incident. Big Sleep has only improved since: it recently discovered a vulnerability in SQLite that could have been widely exploited. With the agent's help, Google was able to anticipate the threat and prevent it from being exploited in the wild.

Using AI The Right Way

Google is clearly proud of having built an AI agent that can prevent these attacks.
It says these cybersecurity agents are a 'game changer' that allow security teams to keep an eye out for bigger threats in the wild. The company says the agents have been developed with privacy in mind, and it has published a white paper detailing how they are being built to stop possible cyber intrusions at scale. Google has also set up an issue tracker page with updates on the work Big Sleep is doing to stop security issues from having a major impact on the internet.

First Published: July 17, 2025, 10:08 IST


News18
a day ago
- News18
Microsoft's New AI System Can Alert Users About Malware Threats And Fight It
Microsoft's new AI system has been built to fight dangerous malware threats and block them before they cause massive damage. Microsoft is the latest tech giant to deploy AI against the big malware threat and to protect its users. The company's new AI system, called Project Ire, aims to change the way cybersecurity attacks are handled and prevented before they even make a dent. Hackers are relying on AI to evolve their modes of attack, so it was inevitable that the likes of Google and Microsoft would put AI to work in this fight.

Microsoft's AI Weapon: How It Fights The Malware

Microsoft has developed the AI system to fully understand the nature of a piece of malware and trace its origin, killing the threat at its core. In one test, Project Ire accurately produced a threat report that was used to block an advanced malware threat. Microsoft says the malware detection was 98 percent accurate, with a 2 percent false-positive rate. Using AI to build new tools for generating videos and images is all well and good, but AI's deeper purpose is to strengthen existing defence mechanisms and future-proof them against evolved attacks in which AI will control most of the action.

Tackle The Next-Gen Threats

Fighting AI with AI seems to be the right focus here, and Microsoft is clearly working to keep its internal security tools armed and up to date with the latest attack trends. These updates from Microsoft come a few weeks after Google claimed to have thwarted a major cyberattack with the help of its own comparable tools. Google has built a new AI agent that is claimed to have stopped a major cyberattack by analysing its footprints and blocking its path. Sundar Pichai, CEO of Google, said the AI agent, called Big Sleep, was deployed to detect and stop cyberattacks from affecting billions of users, and it seems to have done just that recently.
AI agents are making life easier for people, allowing them to delegate tasks and work effectively; it seems these agents are now doing much more than that. Google's agent was developed by its DeepMind and Project Zero teams and has been trained to look for active security issues that are unknown to the industry and to researchers. We will probably need even more advanced AI systems to tackle these new-gen threats, and the giants are responding in kind.

First Published: August 07, 2025, 14:25 IST
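To unpack what the 98 percent accuracy and 2 percent false-positive rate quoted above actually mean, here is a minimal sketch computing both metrics from a confusion matrix. The sample counts are invented purely for illustration; they are not Microsoft's evaluation data.

```python
def detection_metrics(tp, fp, tn, fn):
    """Compute accuracy and false-positive rate from a confusion matrix.

    tp: malicious samples correctly flagged
    fp: benign samples wrongly flagged
    tn: benign samples correctly passed
    fn: malicious samples missed
    """
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total   # fraction of all verdicts that were correct
    fp_rate = fp / (fp + tn)       # fraction of benign samples wrongly flagged
    return accuracy, fp_rate

# Hypothetical run: 900 samples flagged or missed as malware,
# 100 benign samples, 2 of which were wrongly flagged.
acc, fpr = detection_metrics(tp=882, fp=2, tn=98, fn=18)
print(f"accuracy={acc:.2%}, false-positive rate={fpr:.2%}")
```

Note that the two numbers measure different things: accuracy is taken over all samples, while the false-positive rate is taken only over the benign ones, so a detector can score well on one and poorly on the other.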


Time of India
4 days ago
- Time of India
Google's AI bug hunter ‘Big Sleep' finds 20 security flaws in open source software
Tech giant Google has announced that its AI-powered vulnerability researcher, Big Sleep, has identified 20 security vulnerabilities in widely used open source software. Google VP of security Heather Adkins posted about the achievement on social media platform X (formerly known as Twitter). 'Today as part of our commitment to transparency in this space, we are proud to announce that we have reported the first 20 vulnerabilities discovered using our AI-based "Big Sleep" system powered by Gemini,' wrote Adkins.

Developed jointly by Google's DeepMind and elite Project Zero teams, Big Sleep flagged flaws in tools such as FFmpeg and ImageMagick, which are used for audio, video and image processing. The company has not yet disclosed the nature of the vulnerabilities, but has confirmed that the issues were found and reproduced by the AI agent without any human intervention. A human expert, however, reviewed the reports before submission.

How Big Sleep works

Google's AI bug hunter operates by simulating the actions of a malicious actor, systematically probing code and network services for potential exploits. The tool can also learn from its environment, adapt its strategies, and identify complex, multi-step vulnerabilities. The 20 vulnerabilities identified by Big Sleep span a range of Google's own products and some open-source projects. "This is not about replacing human security researchers, but about augmenting their capabilities," a Google spokesperson said. "Our AI bug hunter can perform thousands of tests in the time it takes a human to run a few. This allows our security teams to focus on the more intricate and strategic aspects of cybersecurity, while the AI handles the repetitive and time-consuming work."
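Google has not published Big Sleep's internals, but the simulate-and-probe loop described above can be caricatured as a tiny fuzzing agent. Everything in this sketch is invented for illustration: the "target" stands in for the code under test, and the random mutator stands in for the step where a real system would ask an LLM to propose the next probe.

```python
import random

# A toy "target" with a hidden bug: it crashes on inputs containing a
# specific byte, standing in for real code under test.
def target(data: bytes) -> None:
    if b"\xde" in data:
        raise RuntimeError("memory corruption (simulated)")

def mutate(seed: bytes) -> bytes:
    """Stand-in for the agent's next probe: a real system would use an
    LLM to propose an input likely to reach unexplored behaviour."""
    out = bytearray(seed)
    out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

def hunt(seed: bytes, budget: int = 50_000):
    """Probe the target repeatedly, collecting inputs that trigger crashes.

    Each crash would then be minimised and handed to a human reviewer,
    mirroring the human-in-the-loop step the article describes.
    """
    findings = []
    corpus = [seed]
    for _ in range(budget):
        probe = mutate(random.choice(corpus))
        try:
            target(probe)
            corpus.append(probe)  # keep the input as a base for new probes
        except RuntimeError as crash:
            findings.append((probe, str(crash)))
    return findings

random.seed(0)
crashes = hunt(b"hello world!")
print(f"{len(crashes)} crashing inputs found")
```

The real system differs in every particular (it reasons over source code rather than mutating bytes blindly), but the overall shape — propose a probe, observe the target, learn from the result, escalate to a human on a finding — matches the workflow the article outlines.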
Yahoo
4 days ago
- Yahoo
Google says its AI-based bug hunter found 20 security vulnerabilities
Google's AI-powered bug hunter has just reported its first batch of security vulnerabilities. Heather Adkins, Google's vice president of security, announced Monday that its LLM-based vulnerability researcher Big Sleep found and reported 20 flaws in various popular open source software projects. Adkins said that Big Sleep, which is developed by the company's AI department DeepMind along with its elite team of hackers, Project Zero, reported its first-ever vulnerabilities, mostly in open source software such as the audio and video library FFmpeg and the image-editing suite ImageMagick.

Because the vulnerabilities are not yet fixed, we don't have details of their impact or severity; Google is withholding those details for now, which is standard policy while bugs await fixes. But the simple fact that Big Sleep found these vulnerabilities is significant, as it shows these tools are starting to get real results, even if a human was involved in this case. 'To ensure high quality and actionable reports, we have a human expert in the loop before reporting, but each vulnerability was found and reproduced by the AI agent without human intervention,' Google spokesperson Kimberly Samra told TechCrunch. Royal Hansen, Google's vice president of engineering, wrote on X that the findings demonstrate 'a new frontier in automated vulnerability discovery.'

LLM-powered tools that can look for and find vulnerabilities are already a reality. Besides Big Sleep, there are RunSybil and XBOW, among others. XBOW garnered headlines after it reached the top of one of the U.S. leaderboards at bug bounty platform HackerOne. It's important to note that in most cases these reports have a human at some point in the process to verify that the AI-powered bug hunter found a legitimate vulnerability, as is the case with Big Sleep.
Vlad Ionescu, co-founder and chief technology officer at RunSybil, a startup that develops AI-powered bug hunters, told TechCrunch that Big Sleep is a 'legit' project, given that it has 'good design, people behind it know what they're doing, Project Zero has the bug finding experience and DeepMind has the firepower and tokens to throw at it.' There is obviously a lot of promise with these tools, but also significant downsides: several maintainers of software projects have complained of bug reports that are actually hallucinations, with some calling them the bug bounty equivalent of AI slop. 'That's the problem people are running into, is we're getting a lot of stuff that looks like gold, but it's actually just crap,' Ionescu previously told TechCrunch.

