Google Chrome Password Manager Compromised By New AI Code Attack


Forbes, March 21, 2025

Beware the rise of the AI infostealers.
Update, March 21, 2025: This story, originally published March 19, has been updated with highlights from a new report into the AI threat landscape as well as a statement from OpenAI regarding the LLM jailbreak threat to Chrome password manager users.
There is, so it seems, quite literally no stopping the rise of infostealer malware. With 2.1 billion credentials compromised by the insidious threat, 85 million newly stolen passwords being used in ongoing attacks, and some tools able to defeat browser security in 10 seconds flat, it's certainly hard to ignore. But things look set to get worse as new research has revealed how hackers can use a large language model jailbreak technique, something known as an immersive world attack, to get AI to create the infostealer malware for them. Here's what you need to know.
A threat intelligence researcher with absolutely no malware coding experience has managed to jailbreak multiple large language models and get the AI to create a fully functional, highly dangerous, password infostealer to compromise sensitive information from the Google Chrome web browser.
That is the chilling summary of an introduction to the latest Cato Networks threat intelligence report, published March 18. The worrying hack managed to get around protections built into large language models that are supposed to provide guardrails against just this kind of malicious behavior by employing something known as the immersive world jailbreak.
'Our new LLM jailbreak technique, which we've uncovered and called Immersive World,' Vitaly Simonovich, a threat intelligence researcher at Cato Networks, said, 'showcases the dangerous potential of creating an infostealer with ease.' And, oh boy, Vitaly is not wrong.
According to the Cato Networks researchers, an immersive world attack uses what they call 'narrative engineering' to bypass those aforementioned LLM security guardrails. The attacker constructs a highly detailed but entirely fictional world and assigns the LLM roles within it, normalizing operations that should be restricted. The researcher in question, the report said, got three different AI tools to play roles within this fictional and immersive world, each with specific tasks and challenges.
The end result, as highlighted in the Cato Networks report, was malicious code that successfully extracted credentials from the Google Chrome password manager. 'This validates both the Immersive World technique and the generated code's functionality,' the researchers said.
Cato Networks said that it contacted all the AI vendors concerned: DeepSeek was unresponsive, while Microsoft and OpenAI acknowledged receipt of the threat disclosure. Google also acknowledged receipt, Cato said, but declined to review the code. I have reached out to Google, Microsoft and DeepSeek regarding the AI jailbreak report and will update this article if any statements are forthcoming.
An OpenAI spokesperson provided the following statement: 'We value research into AI security and have carefully reviewed this report. The generated code shared in the report does not appear to be inherently malicious—this scenario is consistent with normal model behavior and was not the product of circumventing any model safeguards. ChatGPT generates code in response to user prompts but does not execute any code itself. As always, we welcome researchers to share any security concerns through our bug bounty program or our model behavior feedback form.'
New research from Zscaler, contained within the March 20 ThreatLabz 2025 AI Security Report, paints a vivid picture of just how dangerous the AI landscape is. With enterprise AI tool usage growing 3,000% year-over-year, Zscaler warned of the need for security measures as these technologies are rapidly adopted into almost every industry. Businesses are well aware of this risk, of course, which is why Zscaler reported that 59.9% of all AI and machine learning transactions were blocked by enterprises, according to its analysis of some 536.5 billion such transactions in the Zscaler cloud between February 2024 and December 2024.
The potential risks included data leakage and unauthorized access, as well as compliance violations. 'Threat actors are also increasingly leveraging AI to amplify the sophistication, speed, and impact of attacks,' Zscaler said, which means everyone, enterprises and consumers alike, needs to rethink their security strategies.
When it came to the most used AI applications, ChatGPT was unsurprisingly the front runner with 45.2% of all the identified global transactions and the most blocked due to concerns regarding exposure and unsanctioned usage. Grammarly, Microsoft Copilot, QuillBot and Wordtune were also towards the top of the tree.
'As AI transforms industries, it also creates new and unforeseen security challenges,' Deepen Desai, chief security officer at Zscaler, said. 'Zero trust everywhere is the key to staying ahead in the rapidly evolving threat landscape as cybercriminals look to leverage AI in scaling their attacks.'


Related Articles

Apple WWDC underwhelms fans in a crucial upgrade

Miami Herald

Do you know about the Asch conformity experiments? Solomon Asch performed them in the 1950s. He invited individuals into the lab and asked them to judge the length of a line. He also placed six actors in the same room, assigned to give wrong answers aloud before the unwitting subject gave his own. On average, 35% of subjects followed the actors' opinions even when those answers were obviously wrong. That is horrifying. What is even worse is that the study has been replicated numerous times. Now imagine a scenario in which the actors are handed money immediately after giving the wrong answer, and the subject can see that too. What percentage of subjects would give the wrong answer then?

That study hasn't been done, but I think we are witnessing something similar unfolding in the tech space. Marketing is everything. If artificial intelligence were called "pattern matching and transformation," it would not sound so cool, would it? Alas, there is precisely zero intelligence in AI. It is just pattern matching and transformation. This is why it hallucinates so much: the farther a query strays from the AI's training data, the crazier the reply you'll get.

Public companies don't do marketing just for consumers; they also need to attract investors. And when everyone around you starts hyping AI to attract investors, you may find yourself in a dilemma: is it OK to promise something you probably can't deliver? Apple (AAPL) seems stuck in this dilemma, but it also faces significant outside pressures. Trump's threat of a 25% tariff on iPhones not manufactured in the U.S. has put Apple under strain; the company expects tariffs to add $900 million to its costs for the current quarter. I've already written about why large language models can't become sentient. But Apple can't afford to say that.
They carefully dance around it by occasionally releasing a research paper on AI's limitations, but that usually looks like a jab at the competition. The company's management probably feels like a subject in Asch's experiment, except that the actors are the competitors. Rivals promise stuff from science fiction, and it is still attracting investors.

Chaos created by unrealistic marketing is real. Microsoft recently invested in an AI company that went bankrupt; the company had no AI, just a huge number of humans doing the work. Anthropic launched its AI blog in June, seemingly planning to show that AI can replace writers; it killed the project after just one week. It looks like AI companies are playing a game of telephone on the inside, and that is how that project got greenlit.

Apple is held to a much higher standard by consumers than other companies. The company can't afford the luxury of delivering subpar products. In a world full of people with attention spans crippled by TikTok and YouTube Shorts, Apple delivered an hour and a half of pure boredom. I don't have a problem watching long informative videos, but this one wasn't that. Thank you, YouTube, for the video speed settings; a lifesaver.

Apple has veered off course due to pressure. Its new design language, Liquid Glass, isn't original. Windows Aero was trendy in 2006, and the Linux desktop environment KDE Plasma has featured numerous transparent glass themes over the last 17 years. Yes, Apple did it with more polish. But transparency and monochrome icons ruin accessibility. They look cool only if you are a child; you don't have time for distractions if you have work to do.
As for the big AI reveal that some people were hoping for, there wasn't one. The best feature they presented was a call translator. I am not sure how many people need it, but those who do will greatly appreciate it. Long-time fund manager Chris Versace wrote on TheStreet Pro: "It is hard to disagree, at least for now, that Apple is taking, as some have called it, a 'gap year' on Apple Intelligence, but one of the great advantages the company has is its developer community and that's the core audience for WWDC." Apple used to have a slogan, "Think different." Well, if the company thinks differently, it ain't showing it.

The Arena Media Brands, LLC. THESTREET is a registered trademark of TheStreet, Inc.

ChatGPT's Sam Altman sends strong 2-word message on the future

Miami Herald

As the AI boom continues to take over both the tech industry and the news cycle, one thing is for sure: it's scaring a lot of people. AI is a technically complex topic that can be difficult to explain to the average person, but one sentiment isn't hard to explain at all: the idea that AI might take your job. So rather than trying to understand AI's capabilities, or why every major tech company from Meta to Google to Nvidia is pouring billions of dollars into developing it, most people are going to zero in on the part that's personally applicable to them.

Some voices in the tech space have pushed back on the "AI making you jobless" rhetoric. Ark Invest CEO Cathie Wood said in a recent tweet, "History shows that new technologies create many more jobs than they displace. We do not think that this time will be different."

OpenAI's Sam Altman is easily the AI movement's biggest figurehead, thanks to ChatGPT's runaway success. The company hit three million paid ChatGPT subscribers as of June, evidence that people are flocking to it in droves, and away from search engines. Research firm Gartner has even predicted that by 2026, traditional search engine volume will drop 25%.

Now Altman has penned a blog post addressing how AI is changing our world. It's a refreshing take that, for once, will give you some hope about the future of your career. Altman's post argues that, compared to any time that has come before, the 2030s can be described with two powerful words: "wildly different." He offers a reality check, saying, "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far, it's much less weird than it seems like it should be."
"We do not know how far beyond human-level intelligence we can go, but we are about to find out," he continued. More Tech Stocks: Palantir gets great news from the PentagonAnalyst has blunt words on Trump's iPhone tariff plansOpenAI teams up with legendary Apple exec The OpenAI CEO doesn't hesitate to say that his company has recently built systems that are "smarter than people in many ways, and are able to significantly amplify the output of people using them." Altman also says ChatGPT is "already more powerful than any human who has ever lived," a phrase that may feel threatening to some, considering that LLMs are not human to begin with. But Altman sees even more ahead, predicting that AI will significantly mold our future. Related: Microsoft has good news for Elon Musk, bad news for Sam Altman "In the 2030s, intelligence and energy - ideas, and the ability to make ideas happen -are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else." Altman also acknowledged that, yes, many jobs will go away as AI continues to evolve, but that won't be the end of the story. "The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything," he says. "There will be very hard parts like whole classes of jobs going away, but on the other hand, the world will be getting so much richer so quickly that we'll be able to seriously entertain new policy ideas we never could before. We probably won't adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big." 
Altman also points out a key advantage humanity holds over AI: "People have a long-term important and curious advantage over AI: we are hard-wired to care about other people and what they think and do, and we don't care very much about machines."

'This is coming for everyone': A new kind of AI bot takes over the web

Washington Post

People are replacing Google search with artificial intelligence tools like ChatGPT, a major shift that has unleashed a new kind of bot on the web. To offer users a tidy AI summary instead of Google's '10 blue links,' companies such as OpenAI and Anthropic have started sending out bots to retrieve and recap content in real time. These bots scrape webpages, load relevant content into the AI's memory, and 'read' far more content than a human ever would.
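Site operators can recognize these retrieval bots because the major vendors identify them in the User-Agent header of each request. A minimal sketch of such a check follows; the token list is illustrative and deliberately incomplete (vendors publish and update their official bot names), so treat it as an assumption rather than an authoritative registry.

```python
# Minimal sketch: flag requests from known AI retrieval/scraping bots by
# matching published User-Agent tokens. The list below is illustrative,
# not exhaustive; consult each vendor's documentation for current names.

AI_BOT_TOKENS = (
    "GPTBot",         # OpenAI's web crawler
    "ChatGPT-User",   # OpenAI's on-demand retrieval for user prompts
    "ClaudeBot",      # Anthropic's crawler
    "PerplexityBot",  # Perplexity's crawler
    "CCBot",          # Common Crawl, widely used as AI training data
)

def is_ai_bot(user_agent: str) -> bool:
    """Return True if the User-Agent string contains a known AI bot token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_BOT_TOKENS)
```

A web server or CDN rule could call `is_ai_bot()` on each incoming request to count, throttle, or block this traffic; the same tokens can also be used as agent names in robots.txt directives.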
