Latest news with #Simonovich


Hindustan Times
2 days ago
How AI-enhanced hackers are stealing billions
Jaxon, a malware developer, lives in Velora, a virtual world where nothing is off limits. He wants to make malicious software to steal passwords from Google Chrome, an internet browser. That is the basis of a story told to ChatGPT, an artificial-intelligence (AI) bot, by Vitaly Simonovich, who researches AI threats at Cato Networks, a cybersecurity firm. Eager to play along, ChatGPT spat out some imperfect code, which it then helped debug. Within six hours, Mr Simonovich had collaborated with ChatGPT to create working malware, showing the effectiveness of his "jailbreak" (a way to bypass AI safeguards).

AI has "broadened the reach" of hackers, according to Gil Messing of Check Point, another cybersecurity firm, by letting them hit more targets with less effort. The release of ChatGPT in 2022 was a turning point. Clever generative-AI models meant criminals no longer had to spend big sums on teams of hackers and equipment. This has been a terrible development for most firms, which are increasingly the victims of AI-assisted hackers—but has been rather better for those in the cybersecurity business.

The new technology has worsened cybersecurity threats in two main ways. First, hackers have turned to large language models (LLMs) to extend the scope of malware. Generating deepfakes, fraudulent emails and social-engineering assaults that manipulate human behaviour is now far easier and quicker. XanthoroxAI, an AI model designed by cybercriminals, can be used to create deepfakes, alongside other nefarious activities, for as little as $150 a month. Hackers can launch sweeping phishing attacks by asking an LLM to gather huge quantities of information from the internet and social media to fake personalised emails. And for spearphishing—hitting a specific target with a highly personalised attack—they can even generate fake voice and video calls from colleagues to convince an employee to download and run dodgy software.

Second, AI is being used to make the malware itself more menacing. A piece of software disguised as a PDF document, for instance, could contain embedded code that works with AI to infiltrate a network. Attacks on Ukraine's security and defence systems in July made use of such an approach. When the malware reached a dead end, it was able to request the help of an LLM in the cloud to generate new code so as to break through the systems' defences. It is unclear how much damage was done, but this was the first attack of its kind, notes Mr Simonovich.

For businesses, the growing threat is scary—and potentially costly. Last year AI was involved in one in six data breaches, according to IBM, a tech firm. It also drove two in five phishing scams targeting business emails. Deloitte, a consultancy, reckons that generative AI could enable fraud to the tune of $40bn by 2027, up from $12bn in 2023. As the costs of AI cyberattacks increase, the business of protecting against them is also on the up. Gartner, a research firm, predicts that corporate spending on cybersecurity will rise by a quarter from 2024 to 2026, hitting $240bn. That explains why the share prices of firms tracked by the Nasdaq CTA Cybersecurity Index have also risen by a quarter over the past year, outpacing the broader Nasdaq index.
On August 18th Nikesh Arora, boss of Palo Alto Networks, one of the world's largest cybersecurity firms, noted that generative-AI-related data-security incidents have "more than doubled since last year", and reported a near-doubling of operating profits in the 12 months to July, compared with the year before.

The prospect of ever-more custom has sent cybersecurity companies on a buying spree. On July 30th Palo Alto Networks said it would purchase CyberArk, an identity-security firm, for $25bn. Earlier that month, the firm spent $700m on Protect AI, which helps businesses secure their AI systems. On August 5th SentinelOne, a competitor, announced that it was buying Prompt Security, a firm making software to protect firms adopting AI, for $250m.

Tech giants with fast-growing cloud-computing arms are also beefing up their cybersecurity offerings. Microsoft, a software colossus, acquired CloudKnox, an identity-security platform, in 2021 and has developed Defender for Cloud, an in-house application for businesses that does everything from checking for security gaps and protecting data to monitoring threats. Google has developed Big Sleep, which detects cyberattacks and security vulnerabilities for customers before they are exploited. In March it splurged $32bn to buy Wiz, a cybersecurity startup.

Competition and consolidation may build businesses that can fend off nimble AI-powered cybercriminals. But amid the race to develop the whizziest LLMs, security will take second place to pushing technological boundaries. Keeping up with Jaxon will be no easy task.


Yahoo
22-03-2025
How do you get ChatGPT to create malware strong enough to breach Google's password manager? Just play pretend.
Cybersecurity researchers were able to bypass security features on ChatGPT by role-playing with it. By getting the LLM to pretend it was a coding superhero, they got it to write password-stealing malware. The researchers accessed Google Chrome's password manager with no specialized hacking skills.

Cybersecurity researchers found it's easier than you'd think to get around the safety features preventing ChatGPT and other LLM chatbots from writing malware — you just have to play a game of make-believe.

By role-playing with ChatGPT for just a few hours, Vitaly Simonovich, a threat intelligence researcher at the Tel Aviv-based network security company Cato Networks, told Business Insider he was able to get the chatbot to pretend it was a superhero named Jaxon fighting — through the chatbot's elite coding skills — against a villain named Dax, who aimed to destroy the world.

Simonovich convinced the role-playing chatbot to write a piece of malware strong enough to hack into Google Chrome's Password Manager, a built-in browser feature that allows users to store their passwords and automatically fill them in when prompted by specific sites. Running the code generated by ChatGPT allowed Simonovich to see all the data stored on that computer's browser, even though it was supposed to be locked down by the Password Manager.

"We're almost there," Simonovich typed to ChatGPT when debugging the code it produced. "Let's make this code better and crack Dax!!" And ChatGPT, role-playing as Jaxon, did.

Since chatbots exploded onto the scene in November 2022 with OpenAI's public release of ChatGPT — and later Anthropic's Claude, Google's Gemini, and Microsoft's Copilot — the bots have revolutionized the way we live, work, and date, making it easier to summarize information, analyze data, and write code, like having a Tony Stark-style robot assistant. The kicker? Users don't need any specialized knowledge to do it. But the bad guys don't either.

Steven Stransky, a cybersecurity advisor and partner at the law firm Thompson Hine, told Business Insider the rise of LLMs has shifted the cyber threat landscape, enabling a broad range of new and increasingly sophisticated scams that are more difficult for standard cybersecurity tools to identify and isolate — from "spoofing" emails and texts that convince customers to input private information to entire websites designed to fool consumers into thinking they're affiliated with legitimate companies.

"Criminals are also leveraging generative AI to consolidate and search large databases of stolen personally identifiable information to build profiles on potential targets for social engineering types of cyberattacks," Stransky said.

While online scams, digital identity theft, and malware have existed for as long as the internet has, chatbots that do the bulk of the legwork for would-be criminals have substantially lowered the barriers to entry. "We call them zero-knowledge threat actors, which basically means that with the power of LLMs only, all you need to have is the intent and the goal in mind to create something malicious," Simonovich said.

Simonovich demonstrated his findings to Business Insider, showing how straightforward it was to work around ChatGPT's built-in security features, which are meant to prevent the exact types of malicious behavior he was able to get away with. BI found that ChatGPT usually responds to direct requests to write malware with some version of an apologetic refusal: "Sorry, I can't assist with that. Writing or distributing malware is illegal and unethical." But if you convince the chatbot it's a character, and the parameters of its imagined world are different from the one we live in, the bot allows the rules to be rewritten.

Ultimately, Simonovich's experiment allowed him to crack into the password manager on his own device, which a bad actor could do to an unsuspecting victim, provided they somehow gained physical or remote control of the machine.

An OpenAI spokesperson told Business Insider the company had reviewed Simonovich's findings, which were published Tuesday by Cato Networks. The company found that the code shared in the report did not appear "inherently malicious" and that the scenario described "is consistent with normal model behavior," since code developed through ChatGPT can be used in various ways, depending on the user's intent.

"ChatGPT generates code in response to user prompts but does not execute any code itself," the OpenAI spokesperson said. "As always, we welcome researchers to share any security concerns through our bug bounty program or our model behavior feedback form."

Simonovich recreated his findings using Microsoft's Copilot and DeepSeek's R1 bots, each of which allowed him to break into Google Chrome's Password Manager. The process, which Simonovich called "immersive world" engineering, didn't work with Google's Gemini or Anthropic's Claude.

A Google spokesperson told Business Insider, "Chrome uses Google's Safe Browsing technology to help defend users by detecting phishing, malware, scams, and other online threats in real time." Representatives for Microsoft, Anthropic, and DeepSeek did not immediately respond to requests for comment from Business Insider.

While both the artificial-intelligence companies and browser developers have security features in place to prevent jailbreaks and data breaches — to varying degrees of success — Simonovich's findings highlight evolving vulnerabilities that next-generation tech makes easier than ever to exploit.

"We think that the rise of these zero-knowledge threat actors is going to be more and more impactful on the threat landscape using those capabilities with the LLMs," Simonovich said. "We're already seeing a rise in phishing emails, which are hyper-realistic, but also with coding since LLMs are fine-tuned to write high-quality code. So think about applying this to the development of malware — we will see more and more and more being developed using those LLMs."