
Google's AI bug hunter 'Big Sleep' finds 20 security flaws in open-source software
Tech giant Google has announced that its AI-powered vulnerability researcher, Big Sleep, has identified 20 security vulnerabilities in widely used open-source software. Google VP of Security Heather Adkins announced the milestone on the social media platform X (formerly known as Twitter). "Today, as part of our commitment to transparency in this space, we are proud to announce that we have reported the first 20 vulnerabilities discovered using our AI-based 'Big Sleep' system powered by Gemini," wrote Adkins.
Developed jointly by Google DeepMind and the company's elite Project Zero team, Big Sleep flagged flaws in tools such as FFmpeg and ImageMagick, which are widely used for audio, video and image processing. The company has not yet disclosed the nature of the vulnerabilities, but it has confirmed that each issue was found and reproduced by the AI agent without any human intervention. A human expert did, however, review the reports before submission.
How Big Sleep works
The AI bug hunter designed by Google operates by simulating the actions of a malicious actor, systematically probing code and network services for potential exploits. The tool is also capable of learning from its environment, adapting its strategies, and identifying complex, multi-step vulnerabilities.
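Google has not published Big Sleep's internals, but the behaviour described above maps onto a familiar pattern: a model proposes attack inputs, a sandboxed run checks whether the target misbehaves, and the outcome feeds back into the next attempt. The sketch below is purely illustrative; ask_model and run_in_sandbox are hypothetical names, and the loop is a simplification, not Google's implementation.

```python
# Hypothetical sketch of an LLM-guided bug-hunting loop in the spirit of what
# the article describes. Google has not disclosed Big Sleep's design;
# ask_model() is a stubbed placeholder for a model call (e.g. to Gemini).
import subprocess

def run_in_sandbox(binary: str, test_input: bytes) -> bool:
    """Run the target on a candidate input; a signal-killed process hints at a bug."""
    try:
        proc = subprocess.run([binary], input=test_input,
                              capture_output=True, timeout=10)
    except subprocess.TimeoutExpired:
        return False
    return proc.returncode < 0  # negative = terminated by a signal (e.g. SIGSEGV)

def ask_model(source_code: str, history: list[str]) -> bytes:
    """Placeholder: ask the model for an input likely to trigger a flaw."""
    raise NotImplementedError("model call stubbed out in this sketch")

def hunt(binary: str, source_code: str, rounds: int = 100) -> list[bytes]:
    findings: list[bytes] = []
    history: list[str] = []
    for _ in range(rounds):
        candidate = ask_model(source_code, history)  # adapt strategy from past attempts
        if run_in_sandbox(binary, candidate):
            findings.append(candidate)  # reproducible crash kept for human triage
        history.append(repr(candidate))
    return findings
```

The final human-review step the article mentions corresponds to the triage stage here: the loop only collects reproducible crashes, and an expert vets them before any report is filed.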
The 20 vulnerabilities identified by Big Sleep span a range of Google's own products and some open-source projects. "This is not about replacing human security researchers, but about augmenting their capabilities," a Google spokesperson said. "Our AI bug hunter can perform thousands of tests in the time it takes a human to run a few. This allows our security teams to focus on the more intricate and strategic aspects of cybersecurity, while the AI handles the repetitive and time-consuming work."

Related Articles

Hindustan Times
4 hours ago
OpenAI introduces free customizable AI models: Price, features and more
OpenAI announced two new open-weight AI language models, gpt-oss-120b and gpt-oss-20b, on Tuesday, August 5. These models offer capabilities similar to the company's o-series, as reported by TechCrunch, and mark OpenAI's first open-weight release since GPT-2 in 2019. The company stated that the text-only models have been designed as lower-cost options that developers and researchers can easily run and customize.

OpenAI releases lower-cost models

OpenAI's latest release comes after tech giants like Meta and Microsoft-backed Mistral AI, as well as Chinese startup DeepSeek, unveiled their own open-weight models in the past few years. According to CNBC, a model is considered open-weight when all of its parameters, the values tuned during training to improve its predictions, are made publicly available. Such models offer control and transparency, but they differ from open-source models, where the full source code is made available for users to modify and use.

OpenAI President Greg Brockman said it is "exciting" to see the developing ecosystem and that the company is looking forward to contributing to it. "...we are excited to contribute to that and really push the frontier and then see what happens from there," the outlet quoted Brockman as saying.

All about OpenAI's new models

Of the two, gpt-oss-120b is the larger model and can run on a single Nvidia GPU, while gpt-oss-20b is a lighter version that can run on a consumer laptop with 16GB of memory. Both models, which OpenAI has dubbed "state-of-the-art," are freely available to download from the Hugging Face platform. For this project, OpenAI partnered with Nvidia, Advanced Micro Devices, Cerebras, and Groq to test how the models perform on various chips. In a statement, Nvidia CEO Jensen Huang said that OpenAI has shown "what could be built on Nvidia AI — and now they're advancing innovation in open-source software."

During a briefing, the company highlighted that its new open models can hand off complex queries to AI models in the cloud: when an open model cannot perform a specific task, developers can connect it to one of OpenAI's more capable closed models, per TechCrunch.

On a Codeforces coding test, gpt-oss-120b and gpt-oss-20b scored 2622 and 2516 respectively, outperforming DeepSeek's R1 model.

The release of OpenAI's open-weight models comes after repeated delays. Last month, OpenAI CEO Sam Altman said in an X post that the company needed extra time to "run additional safety tests and review high-risk areas."

FAQs:
1. How can you try the new OpenAI models? As per the company, users can download them from Hugging Face and GitHub under the Apache 2.0 license; they can also be run on PCs via LM Studio and Ollama (see the sketch below).
2. What are the new AI models from OpenAI? The open-weight AI language models gpt-oss-120b and gpt-oss-20b.
3. What is the price of the new OpenAI models? They are free to download, and users can customize them as needed.
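For readers who want to experiment, here is a minimal sketch of loading one of the new models with Hugging Face's transformers library. The repo id "openai/gpt-oss-20b" is inferred from the model name in the article and should be verified against the actual model card, as should the hardware requirements.

```python
# A minimal sketch of trying gpt-oss-20b via Hugging Face's `transformers` library.
# Assumption: the repo id "openai/gpt-oss-20b" matches the article's model name;
# verify it (and the 16 GB memory figure the article cites) on the model card.
from transformers import pipeline

generator = pipeline("text-generation", model="openai/gpt-oss-20b")

result = generator(
    "Explain the difference between open-weight and open-source AI models.",
    max_new_tokens=120,
)
print(result[0]["generated_text"])
```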


NDTV
4 hours ago
Australian Regulator Says YouTube, Others 'Turning Blind Eye' To Child Abuse Material
Australia's internet watchdog has said the world's biggest social media firms are still "turning a blind eye" to online child sex abuse material on their platforms, and said YouTube in particular had been unresponsive to its enquiries.

In a report released on Wednesday, the eSafety Commissioner said YouTube, along with Apple, failed to track the number of user reports they received of child sex abuse appearing on their platforms, and could not say how long it took them to respond to such reports.

The Australian government decided last week to include YouTube in its world-first social media ban for teenagers, following eSafety's advice to overturn a planned exemption for the Alphabet-owned video-sharing site.

"When left to their own devices, these companies aren't prioritising the protection of children and are seemingly turning a blind eye to crimes occurring on their services," eSafety Commissioner Julie Inman Grant said in a statement. "No other consumer-facing industry would be given the licence to operate by enabling such heinous crimes against children on their premises, or services."

Google has said previously that abuse material has no place on its platforms and that it uses a range of industry-standard techniques to identify and remove such material. Meta, owner of Facebook, Instagram and Threads, three of the biggest platforms with more than 3 billion users worldwide, says it prohibits graphic videos.

The eSafety Commissioner, an office set up to protect internet users, has mandated Apple, Discord, Google, Meta, Microsoft, Skype, Snap and WhatsApp to report on the measures they take to address child exploitation and abuse material in Australia. The report on their responses so far found a "range of safety deficiencies on their services which increases the risk that child sexual exploitation and abuse material and activity appear on the services".

Safety gaps included failures to detect and prevent livestreaming of the material or block links to known child abuse material, as well as inadequate reporting mechanisms. The report also said platforms were not using "hash-matching" technology on all parts of their services to identify images of child sexual abuse by checking them against a database (a simplified illustration of the technique follows below). Google has said before that its anti-abuse measures include hash-matching technology and artificial intelligence.

The Australian regulator said some providers had not made improvements to address these safety gaps despite being put on notice in previous years. "In the case of Apple services and Google's YouTube, they didn't even answer our questions about how many user reports they received about child sexual abuse on their services or details of how many trust and safety personnel Apple and Google have on-staff," Inman Grant said.
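The hash-matching the report refers to works by comparing a fingerprint of an uploaded file against a database of fingerprints of known abuse material. The sketch below illustrates the idea with a plain SHA-256 digest; production systems such as Microsoft's PhotoDNA use perceptual hashes that also catch resized or re-encoded copies, and the names here are illustrative rather than any platform's actual code.

```python
# Illustrative sketch of hash-matching: flag a file whose digest appears in a
# database of known-bad hashes. Real platforms use perceptual hashing (e.g.
# PhotoDNA) so near-duplicates also match; SHA-256 only catches exact copies.
import hashlib

# Hypothetical set of hex digests supplied by a clearinghouse (empty here).
KNOWN_BAD_HASHES: set[str] = set()

def matches_known_material(file_bytes: bytes) -> bool:
    """Return True if the file's SHA-256 digest is in the known-bad database."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES
```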


Time of India
4 hours ago
Ex-Google executive predicts a dystopian job apocalypse by 2027: 'AI will be better than humans at everything... even CEOs'
The End of Human Superiority?

In a thought-provoking episode of the 'Diary of a CEO' podcast, former Google X executive Mo Gawdat delivered a powerful prediction that's turning heads across industries: Artificial General Intelligence (AGI) will not just challenge white-collar work, it could soon replace many of its top decision-makers, including CEOs. Gawdat, who previously served as the chief business officer at Google's innovation arm, didn't hold back. 'AGI is going to be better than humans at everything, including being a CEO,' he said. 'There will be a time where most incompetent CEOs will be replaced.'

This warning comes amid rising public curiosity, and concern, about how AI will reshape the future of work. But Gawdat's perspective offers a sharp contrast to the often-optimistic vision shared by many industry leaders.

Gawdat has seen the future up close. His own AI-powered startup, focused on emotional intelligence, was developed by just three people, a feat he claims would have previously required 350. Drawing on personal experience and decades in tech, he dismissed the common narrative that AI will create more jobs than it destroys. 'The idea that artificial intelligence will create jobs is 100% crap,' he said.

He believes even roles requiring creativity and emotional nuance, from podcasters to video editors, are under threat. 'We're now in a short window of augmented intelligence, where we still work alongside AI,' Gawdat explained. 'But it's quickly moving toward machine mastery.'

More than just a tech transition, Gawdat sees this moment as an existential reckoning for society. 'We were never made to wake up every morning and just occupy 20 hours of our day with work,' he said. 'We defined our purpose as work — that's a capitalist lie.'

He envisions a future that might seem utopian: one where people are free to focus on creativity, community, and joy, supported by universal basic income and freed from the grind of conventional work. But getting there won't be easy. Gawdat warns of a 'short-term dystopia' by 2027, marked by mass unemployment and economic instability if governments and institutions don't act.

Divided Visions of the AI Future

His urgent tone stands in contrast with other tech figures like Jensen Huang, CEO of NVIDIA, who remains bullish on AI's potential to uplift workers. Huang argues that prompting and training AI is itself a sophisticated skill, and that the technology will augment human effort rather than erase it. Mark Cuban champions AI literacy through youth-focused initiatives, while Meta's AI scientist Yann LeCun dismisses doomsday narratives altogether, insisting humans will remain in control.

But Gawdat isn't alone. AI pioneer Geoffrey Hinton and Anthropic CEO Dario Amodei have also voiced grave concerns about unchecked AI development. Amodei, in a recent podcast appearance, predicted that up to 20% of entry-level white-collar jobs could vanish within five years.

Not Just About Jobs, It's About Power

The tension is palpable. While some advocate for open innovation, others call for tight regulations to prevent reckless development. Gawdat fears that in the hands of profit-driven leaders, AI could deepen inequality. 'Unless you're in the top 0.1%, you're a peasant. There is no middle class,' he stated, highlighting the potential for AI to concentrate power.

His concern echoes growing divisions within Silicon Valley itself. Amodei recently lashed out at Huang for misrepresenting his cautious stance on AI, accusing the NVIDIA boss of spreading 'outrageous lies' to downplay the risks. He has also warned against a 'race to the bottom' in AI development, advocating instead for a responsible and transparent path, one that companies like Anthropic claim to model through open research and ethical practices.

Despite the gloom, Gawdat remains hopeful for what lies beyond the upheaval: a society where AI relieves us of soul-crushing labor and gives us back our time and our relationships. Yet his closing words are a reminder of how serious the stakes are: 'This is real. This is not science fiction.'