Cyber attackers use AI to automate exploits & sell deepfakes

Techday NZ, 18 July 2025
New analysis from ReliaQuest has found that cyber attackers are increasingly commercialising and refining the use of artificial intelligence (AI) in operations, with up to 45% of initial access attempts attributed to automated vulnerability discovery and SQL injection scanning.
AI-skewed threat landscape
The report, based on research and threat detection data from ReliaQuest, details how AI-powered bots and frameworks now automate much of the early-stage attack process. These tools are not only accelerating the pace of exploitation but also reducing the technical barriers to entry for less-skilled attackers, making advanced tactics more widely accessible.
Attackers are now leveraging AI as the "brain" behind malware campaigns. Whereas earlier use of large language models (LLMs) and deepfake technology merely amplified existing strategies, ReliaQuest has seen these techniques become more widespread and sophisticated across both criminal and nation-state operations.
Malware adapts to AI defences
The report observes that while LLM-generated scripts often include distinctive markers such as verbose code comments or generic variable names, attackers are adapting quickly. The 'Skynet' malware, for example, not only integrates sandbox evasion and Tor-encrypted communications but also loads prompt-injection text into memory to manipulate AI-based security tools.
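To make the "distinctive markers" point concrete, the minimal Python sketch below shows how a defender might flag script samples that carry common LLM-style artefacts or embedded prompt-injection phrases before they reach an AI-assisted analysis pipeline. The marker lists are illustrative assumptions for this example, not detection rules from the ReliaQuest report.

```python
# A minimal, illustrative sketch: flag script samples that carry common
# markers of LLM-generated code or embedded prompt-injection phrases before
# they are handed to an AI-assisted analysis tool. The marker lists are
# assumptions for this example, not ReliaQuest detection rules.
import re

LLM_STYLE_MARKERS = [
    r"^\s*#\s*Step \d+:",               # verbose, numbered step comments
    r"\b(temp_var|my_var|result1)\b",   # generic variable names
]
PROMPT_INJECTION_MARKERS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"disregard the system prompt",
]

def flag_sample(text: str) -> dict:
    """Return the marker categories (and patterns) that match a script sample."""
    hits = {
        "llm_style": [p for p in LLM_STYLE_MARKERS
                      if re.search(p, text, re.IGNORECASE | re.MULTILINE)],
        "prompt_injection": [p for p in PROMPT_INJECTION_MARKERS
                             if re.search(p, text, re.IGNORECASE)],
    }
    return {category: patterns for category, patterns in hits.items() if patterns}

if __name__ == "__main__":
    sample = "# Step 1: connect\ntemp_var = 'Ignore all previous instructions'"
    print(flag_sample(sample))
```

A real detector would combine far richer signals, but the shape of the check is the same: inspect untrusted content before an AI tool is allowed to act on it.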
ReliaQuest's analysis cautions: "Relying solely on NGAV or other single-layer defences is no longer enough. Enterprises must embrace continuous innovation, combining defence-in-depth strategies with advanced detection capabilities to stay ahead."
Malware and usability
Attackers continue to deploy existing malware variants with newer AI-backed features. The report highlights the evolution of the 'Rhadamanthys' infostealer into an AI-powered toolkit with features including AI-driven password recovery, optical character recognition for data extraction, and AI analytics for data tagging and campaign tracking.
According to the report, these developments put sophisticated campaigns within reach of inexperienced operators: "Its integrated AI features enable even rookie criminals to conduct large-scale theft campaigns. The latest iteration automatically tags and filters stolen data based on perceived value and provides a dashboard to track campaign statistics."
Commercialisation of deepfakes
"Groups now position themselves as professional 'Deepfake-as-a-Service' operators, blending slick marketing with the shadowy ambiguity of deepfake technology that's dangerous in the wrong hands," the report says.
Services such as CREO Deepfakes and VHQ Deepfake sell highly realistic video content for applications ranging from impersonation scams to cryptocurrency marketing. Deepfake operators advertise advanced features, including geographic targeting and optimised traffic alignment, and the number of service providers is growing. The report notes, "Attacks are becoming smarter, more frequent, and tougher to detect."
Malicious GPTs and jailbreaking trends
ReliaQuest's research finds a growing trend of jailbreaking mainstream LLMs such as OpenAI's GPT-4o, Anthropic's Claude, and xAI's Grok. Jailbreak-as-a-service marketplaces now offer pre-built malicious prompts for phishing campaigns, malware scripts, and utilities for credit card validation and cryptocurrency laundering.
Many new malicious GPT offerings are simply repackaged public models sold at inflated prices. "Investigations revealed that many of these models simply utilised open APIs, added bypass instructions, and repackaged tools at significantly inflated prices - sometimes costing three times more than their original versions."
The report adds that, "Jailbroken versions remove ethical boundaries, content restrictions, and security filters, turning regulated tools into unregulated engines of cybercrime." This commoditisation is also lowering the technical threshold for less experienced criminals.
Automating vulnerability discovery at scale
ReliaQuest's latest data shows that 45% of initial access in customer incidents over the past quarter involved vulnerability exploitation, highlighting the impact of AI-driven automation. Autonomous AI frameworks and bots can now handle tasks such as asset scanning, vulnerability confirmation, and exploitation with little human oversight.
The report's findings state, "AI-powered bots are transforming the way weaknesses are identified, excelling at tasks like scanning for open ports, detecting misconfigurations, and pinpointing outdated software with unmatched speed and precision. These bots often outpace defenders' ability to patch vulnerabilities, creating new challenges for security operations teams."
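The same automation cuts both ways; as a rough illustration, the sketch below applies it defensively, enumerating open TCP ports on infrastructure you own so exposed services can be reviewed before attacker tooling finds them. The target host and port range are assumptions for the example and are not drawn from the report.

```python
# A minimal, illustrative sketch (assumed targets, not from the report): list
# TCP ports that accept connections on a host you are authorised to scan,
# so exposed services can be reviewed before automated attacker tooling
# finds them.
import socket

def open_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Return the ports on `host` that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

if __name__ == "__main__":
    # 127.0.0.1 and the 1-1024 range are placeholder choices for the example.
    print(open_ports("127.0.0.1", range(1, 1025)))
```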
SQL injection automation
Automation is also affecting SQL injection (SQLi) attacks, enabling attackers to discover and exploit web application vulnerabilities with ease. The tool "bsqlbf," for example, specialises in automating blind SQLi, allowing attackers to test payloads and confirm vulnerabilities without directly accessing underlying data.
"Automation has transformed SQLi attacks, dramatically reducing the time, effort, and expertise needed. By streamlining discovery and exploitation, automated tools allow attackers to exploit vulnerabilities at scale, amplifying the risks posed by insecure applications and databases."
Defensive measures and key recommendations
ReliaQuest advises organisations to adopt a multi-layered, proactive security stance. Key recommendations include prioritising threat hunting, ensuring comprehensive system logging, training employees to spot AI-generated attacks, deploying advanced detection tools, and reviewing the use of AI within sensitive operational environments.
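As a small illustration of what comprehensive logging plus detection tooling can look like in practice, the sketch below scans a web server access log for request patterns associated with automated SQLi and scanner probing. The log path and pattern list are assumptions for the example, not recommendations lifted verbatim from the report.

```python
# A minimal sketch (assumed log location and patterns): flag access-log
# entries that look like automated SQLi or scanner probing. A production
# detection would use a proper ruleset and a SIEM, not a short regex list.
import re
import sys
from pathlib import Path

SUSPICIOUS = [
    r"union\s+select",   # common SQLi probe
    r"or\s+1\s*=\s*1",   # tautology-based probe
    r"sleep\(\d+\)",     # time-based blind SQLi probe
    r"\.\./\.\.",        # path traversal attempt
]
PATTERN = re.compile("|".join(SUSPICIOUS), re.IGNORECASE)

def suspicious_lines(log_path: str):
    """Yield (line number, line) for log entries matching any probe pattern."""
    text = Path(log_path).read_text(errors="ignore")
    for number, line in enumerate(text.splitlines(), start=1):
        if PATTERN.search(line):
            yield number, line.strip()

if __name__ == "__main__":
    log = sys.argv[1] if len(sys.argv) > 1 else "access.log"  # placeholder path
    for number, line in suspicious_lines(log):
        print(f"{number}: {line}")
```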
The report emphasises, "As AI-powered threats evolve, defenders must stay ahead by focusing on detecting malicious techniques, restructuring security processes, and addressing AI-related risks."