
Cybersecurity's dual AI reality: Hacks and defenses both turbocharged

Axios

5 days ago



Underestimate how quickly adversarial hackers are advancing in generative AI, and your company could be patient zero in an outbreak of AI-enabled cyberattacks. Overestimate that risk, and you could quickly blow millions of dollars only to realize you were preparing for the wrong thing.

The big picture: That dichotomy has divided the cybersecurity industry into two competing narratives about how AI is transforming the threat landscape.

One says defenders still have the upper hand. Cybercriminals lack the money and computing resources to build out AI-powered tools, and large language models (LLMs) have clear limitations in their ability to carry out offensive strikes. This leaves defenders with time to tap AI's potential for themselves.

Then there's the darker view. Cybercriminals are already leaning on open-source LLMs to build tools that can scan internet-connected devices for vulnerabilities, discover zero-day bugs, and write malware. They're only going to get better, and quickly.

Between the lines: While not everyone fits comfortably into one of those two camps, closed-door sessions at Black Hat and DEF CON last week made clear that the primary divide is over how much security execs and researchers expect generative AI tools to advance over the next year.

Right now, models aren't the best at making human-like judgments, such as recognizing when legitimate tools are being abused for malicious purposes. And running a series of AI agents will require cybercriminals and nation-states to have enough resources to pay the cloud bills they rack up, Michael Sikorski, CTO of Palo Alto Networks' Unit 42 threat research team, told Axios.

But LLMs are improving rapidly. Sikorski predicts that malicious hackers will use a victim organization's own AI agents to launch an attack after breaking into its infrastructure.

The flip side: Executives told me the cybersecurity industry isn't as resilient to AI-driven workforce disruptions as they once believed. That means fewer humans and more AI playing defense against the expected wave of AI-powered attacks.

During a presentation at DEF CON, a member of Anthropic's red team said its AI model, Claude, will "soon" be able to perform at the level of a senior security researcher.

Driving the news: Several cybersecurity companies debuted advancements in AI agents at the Black Hat conference last week — signaling that cyber defenders could soon have the tools to catch up to adversarial hackers.

  • Microsoft shared details about a prototype for a new agent that can automatically detect malware — although it can detect only 24% of malicious files so far.
  • Trend Micro released new AI-driven "digital twin" capabilities that let companies simulate real-world cyber threats in a safe environment walled off from their actual systems.
  • Several companies and research teams also publicly released open-source tools that can automatically identify and patch vulnerabilities as part of the government-backed AI Cyber Challenge.

Yes, but: Threat actors are now using those same AI-enabled tools to speed up reconnaissance and dream up brand-new attack vectors tailored to each company, John Watters, CEO of iCounter and a former Mandiant executive, told Axios.

That's a departure from traditional methods, in which hackers would exploit the same known vulnerability to target dozens of organizations.

"The net effect is everybody becomes patient zero," Watters said. "The world's not prepared to deal with that."
The intrigue: Open-source AI models have blown the door wide open for cybercriminals to build custom tools for vulnerability scanning and targeted reconnaissance.

Many of these models have improved rapidly in the last year, and attackers can now run them entirely on their own machines, without connecting to the internet, Shane Caldwell, principal research engineer at Dreadnode, which uses AI tools to test clients' systems, told Axios.

The rise of reinforcement learning — a method where AI models learn and adapt through trial-and-error interactions with their environment — means attackers no longer need to rely on more resource-intensive, supervised training approaches to develop powerful tools.

What's next: By next year, the threat landscape could be completely turned on its head, Watters warned.
