Exclusive: Google's John Hultquist warns cyber attackers are getting younger & faster

Techday NZ | 6 days ago
Children and teenagers are behind some of the most aggressive and profitable cyberattacks in the world, and many are getting away with it because they know they're unlikely to face serious consequences.
John Hultquist, Chief Analyst at Google's Threat Intelligence Group, spoke exclusively with TechDay about exactly who is behind these attacks.
"We're talking tens of millions - if not hundreds of millions - of dollars that these kids are making," Hultquist said. "There's clearly a financial motive, but it's also about reputation. They feed off the praise they get from peers in this subculture."
The average cybercriminal today is not a shadowy figure backed by a government agency, but often a teenager with a high tolerance for risk and little fear of repercussions.
And according to Hultquist, that combination is proving incredibly difficult for law enforcement to counter.
"There's no deterrent," he said. "They know they're unlikely to face serious consequences, and they exploit that. One reason I wouldn't do cybercrime - aside from the ethical one - is I don't want to go to jail. These kids know they probably won't."
His concern is echoed by Mandiant Consulting's latest global data.
In 2024, 55% of cyberattacks were financially motivated, the majority involving ransomware or extortion.
Mandiant also observed that teen-driven groups like UNC3944 (aka Scattered Spider) are behind many of the most damaging breaches, often relying on stolen credentials and social engineering to bypass defences.
"Younger actors are willing to cross lines even the Russian criminals won't - threatening families, for example," Hultquist said. "They don't worry about norms outside their subculture. Inside their world, they're being praised."
Even when authorities know who is behind an attack, bringing them to justice is rarely fast. "Building a case takes years. In the meantime, they can do serious damage," he said.
The urgency is underscored by the pace at which attackers now move.
According to Mandiant, the median global dwell time - the time it takes to detect an intruder - has dropped to just 11 days, and in ransomware cases to as little as six days. More than 56% of ransomware attacks are discovered within a week, showing just how rapidly these operations unfold.
Though many of these actors work independently, others blur the line between criminal enterprise and state-sanctioned campaigns. Hultquist explained that governments - particularly in Russia and Iran - often outsource cyber operations to criminal groups, offering protection in exchange for their services.
"It's a Faustian bargain," he said. "The government lets them continue their criminal activity as long as they're also doing work on its behalf."
Google's acquisition of Mandiant in 2022 has enabled Hultquist and his team to monitor global threats more effectively by combining Google's in-house security team with Mandiant's threat intelligence capabilities.
This merger formed the Google Threat Intelligence Group, which Hultquist described as a "juggernaut".
"We've got great visibility on threats all over the world," he said. "We get to see the threats targeting Google users."
That level of access and scale has allowed Google's team to take cyber defence to unprecedented levels. In one recent case, they used an AI model to uncover and neutralise a zero-day vulnerability before attackers could use it.
"It literally found the zero-day," Hultquist said. "The adversary was preparing their attack, and we shut it down. It doesn't get any better than that."
AI is becoming both an asset and a threat. While Google uses it to pre-emptively defend systems, attackers are beginning to leverage it to enhance their own capabilities. Fake images, videos, and text have long been used in phishing and disinformation campaigns, but Hultquist said the next phase is far more concerning.
"We've seen malware that calls out to AI to write its own commands on the fly," he said. "That makes it harder to detect because the commands are always changing."
He warned that AI could soon automate entire intrusions, allowing cybercriminals to break into networks, escalate privileges, and deploy ransomware faster than defenders can respond.
"If someone can move through your network at machine speed, they might ransom you before you even know what's happening," he said. "Your response window gets smaller and smaller."
As attackers evolve, many defenders still rely on outdated mental models, particularly when it comes to cloud security.
"People are still thinking like they're defending old-school, on-prem systems," Hultquist said. "One of the biggest problems in cloud is identity - especially third-party access. That's where your crown jewels might be, and you don't always have full control."
And while some worry about cyber threats to governments, Hultquist said the private sector is often the true target.
"If a country retaliates against the Five Eyes, they're not going after military or intelligence," he said. "They'll go after privately held critical infrastructure. That's always been the asymmetrical advantage."
Despite the constant evolution of threats, Hultquist said progress has been made on both sides. He recalled the early days of Chinese state-backed attacks, where errors in spelling and grammar made their emails laughable - and traceable.
"We used to print them out and tack them to our cubicle walls," he said. "Now, they're incredibly sophisticated. But the reason they've improved is because we've gotten better. Our defences have evolved."
And according to Hultquist, that cat-and-mouse game won't be ending anytime soon.
"We're not fighting the laws of physics like safety engineers," Hultquist said. "Our adversaries adapt. If we fix everything, they'll just change to overcome it."