
APTs Detected In Over 43% Of High-Severity Incidents
According to the latest Kaspersky Managed Detection and Response (MDR) analyst report, advanced persistent threats (APTs) have been detected in 25% of companies, accounting for over 43% of all high-severity incidents. This marks a staggering 74% increase compared to 2023.
The annual report provides insights based on the analysis of MDR incidents identified by Kaspersky's Security Operations Center team. It sheds light on the most prevalent attacker tactics, techniques and tools, as well as the characteristics of detected incidents and their distribution across regions and industry sectors among MDR customers.
According to the report, APTs, which are human-driven attacks, affected one in four companies and accounted for over 43% of all high-severity incidents detected in 2024, a 74% increase from 2023 and a 43% rise from 2022. Despite advances in automated detection technologies, determined attackers continue to exploit vulnerabilities and circumvent these systems. Notably, APTs were identified in every sector except telecommunications, with IT and government bearing the brunt.
Incidents that bore the hallmarks of human-driven attacks but were confirmed by customers to be cyber exercises made up more than 17% of the total. Severe violations of security policies accounted for approximately 12% of high-severity events, and malware-related incidents for over 12%, predominantly affecting the financial, industrial and IT sectors.
'In 2024, we observed a significant escalation in Advanced Persistent Threats and this alarming trend emphasizes that even with advancements in automated detection, determined human-driven attacks continue to exploit vulnerabilities across various sectors. Organizations must enhance their preparedness and invest in comprehensive cybersecurity strategies to counteract these sophisticated threats,' states Sergey Soldatov, Head of Security Operations Center at Kaspersky.

Related Articles


Arabian Post
14 hours ago
Cyber Sweep Disables 20,000+ Infostealer IPs and Domains
Global law enforcement has dismantled over 20,000 malicious IP addresses and domains used to serve 69 variants of information‑stealing malware, in a sweeping cybercrime operation spanning 26 countries across the Asia‑Pacific region. The coordinated effort—dubbed Operation Secure—uncovered the digital infrastructure behind credential‑harvesting malware, led to the seizure of 41 servers, over 100 GB of illicit data, and the arrest of 32 suspects, officials said.

The four‑month initiative, conducted between January and April 2025, was facilitated through the Asia and South Pacific Joint Operations Against Cybercrime project, with INTERPOL coordinating national cybercrime units and private cybersecurity firms including Group‑IB, Kaspersky and Trend Micro. Intelligence sharing proved crucial, enabling authorities to disrupt roughly 79% of the identified malicious infrastructure.

Vietnamese police led the arrests, detaining 18 suspects and uncovering VND 300 million, SIM cards, corporate documentation and digital devices during raids targeting a ring alleged to be selling corporate accounts for illicit use. A further 14 individuals were apprehended in Sri Lanka and Nauru, where targeted house raids also led to the identification of 40 victims.

Hong Kong authorities played a vital technical role, analysing more than 1,700 pieces of intelligence supplied by INTERPOL and mapping 117 command‑and‑control servers across 89 ISPs, infrastructure that underpinned phishing, fraud and social media scam campaigns. In the wake of the operation, over 216,000 individuals and organisations at risk were notified, enabling them to take defensive action such as freezing accounts and changing passwords.

Infostealer malware—software designed to extract browser credentials, cookies, credit card details, and cryptocurrency wallet keys—is increasingly being used as a springboard for more destructive operations, according to cyber‑crime experts. Once compromised, credentials are sold on underground forums, facilitating follow‑on attacks including ransomware, data breaches and business email compromise. Group‑IB, a Singapore‑based cybersecurity firm, confirmed that the operation targeted stealer families such as Lumma, RisePro and Meta, adding that 'the compromised credentials and sensitive data acquired by cybercriminals through infostealer malware often serve as initial vectors for financial fraud and ransomware attacks'.

Neal Jetton, INTERPOL's Director of Cybercrime, emphasised that the success of Operation Secure underlined the power of global cooperation. 'INTERPOL continues to support practical, collaborative action against global cyber threats,' he said. 'Operation Secure has once again shown the power of intelligence sharing in disrupting malicious infrastructure and preventing large‑scale harm to both individuals and businesses.'

Analysts observe that this operation builds on previous global cyber‑crime crackdowns, such as Operation Synergia II in 2024, which dismantled more than 22,000 malicious IPs worldwide. Taken collectively, such operations demonstrate a growing focus on attacking the root infrastructure that supports cybercrime, rather than just responding to individual attacks. With cyber threats proliferating in complexity and scale, experts say that such public‑private partnerships and intelligence sharing are vital.
By targeting the infrastructure that underpins malware distribution, authorities aim to disrupt criminal ecosystems before they evolve, rather than merely reacting to breaches.


Zawya
a day ago
Gen Z's favorite games used as bait in over 19 million attempted cyberattacks
From April 1, 2024 to March 31, 2025, Kaspersky detected over 19 million attempts to download malicious or unwanted files disguised as popular Gen Z games. Over 47,800 such attempts were registered in Turkiye, making it one of the countries most affected by such incidents. With GTA, Minecraft and Call of Duty among the most exploited, it's clear that cybercriminals are actively following gaming trends to reach their targets. To help players stay safe, Kaspersky is launching 'Case 404' — an interactive cybersecurity game that teaches Gen Z how to recognize threats and protect their digital worlds while doing what they love: playing.

Gen Z plays more than any other generation — and not just more, but differently. They outpace Millennials and Gen X in gaming-related spending, and, instead of sticking to a few favorites, Gen Z jumps between numerous titles, chasing viral trends and new experiences. Yet this same spontaneity and openness also make them vulnerable, with cybercriminals exploiting the habits and trust of these players across the platforms. For instance, throughout the reported period, more than 400,000 users worldwide were affected.

[Chart: Attempts to attack users through malicious or unwanted files disguised as Gen Z's favorite games throughout the reported period]

As part of the new report, Kaspersky experts conducted an in-depth analysis using 20 of the most popular game titles among Gen Z — from GTA, NBA and FIFA to The Sims and Genshin Impact — as search keywords. The study covered the period from Q2 2024 to Q1 2025, with March 2025 standing out as the peak month, recording 1,842,370 attempted attacks.

Despite GTA V being released over a decade ago, the Grand Theft Auto franchise remains one of the most exploited, due to its open-world modding capabilities and thriving online community. In total, Kaspersky detected 4,456,499 attack attempts involving files disguised as GTA franchise-related content. With the highly anticipated release of GTA VI expected in 2026, experts predict a potential spike in such attacks, as cybercriminals may exploit the hype by distributing fake installers, early access offers or beta invites.

Minecraft ranked second, with 4,112,493 attack attempts, driven by its vast modding ecosystem and enduring popularity among Gen Z players. Call of Duty and The Sims followed with 2,635,330 and 2,416,443 attack attempts respectively. The demand for cheats and cracked versions around competitive CoD releases such as Modern Warfare III fuels malicious activity, while The Sims fans searching for custom content or unreleased expansion packs may inadvertently download harmful files presented as mods or early access.

As a result of such attacks, users' devices can be infected with various types of unwanted or malicious software — from downloaders that can install additional harmful programs, to trojans that steal passwords, monitor activity, grant remote access to attackers or deploy ransomware. The goals of these attacks vary, and one common motive is stealing gaming accounts, which are later sold on the dark web or closed forums.

Kaspersky Global Research & Analysis Team experts also analyzed darknet marketplaces and closed platforms for advertisements selling compromised gaming accounts and skins. The research indicates a growing number of such offers showing up not just on the darknet, but also on regular closed forums and Telegram channels — making these illicit assets more visible and accessible than ever.
[Image: A post from a closed forum advertising a digital store, which sells access to Minecraft and streaming service accounts, boasting over 500 sales]

This shows that the theft of gaming accounts and digital items is no longer limited to niche cybercrime circles — it's starting to spread into more open online spaces. The barrier to entry for selling or buying stolen accounts has significantly lowered. What was once a technical, underground practice has become a marketplace — fast, accessible and global. It now takes just a few clicks to join a private Telegram channel and access hundreds of listings offering rare skins, high-rank accounts, and access to premium in-game items. And for gamers, this means that the risk of losing an account or having it resold is no longer a rare incident — it's a mainstream threat.

To address this, Kaspersky has launched an interactive online game, 'Case 404', created especially for Gen Z gamers. In this cyber-detective adventure, players dive into fictional cases inspired by real digital threats, learning how to spot scams, phishing attempts and account takeover tactics common in gaming. With 'Case 404', Kaspersky isn't just raising awareness — it's equipping players with the mindset and skills to stay secure while doing what they love. Those who complete the game also receive a discount on Kaspersky Premium, giving them reliable tools to protect their gaming and digital lives.

'From open-world blockbusters like GTA to cozy simulators like The Sims, cybercriminals target games across every genre. What unites them is the audience: Gen Z is the most digitally active generation, leaving behind a rich trail of data, clicks and curiosity. This makes them a prime target, because they're constantly online, exploring, downloading and sharing. That's why digital self-defense is essential. Learning how to recognize threats should be as natural as leveling up in a game. Through 'Case 404', we want to equip young players with the tools and instincts to protect their digital identity, their accounts and their freedom to play safely,' comments Fatih Sensoy, senior security researcher at Kaspersky.

To play games safely, Kaspersky recommends the following:
- Check out the interactive online game 'Case 404' by Kaspersky, designed especially for Gen Z.
- Download games, mods and tools only from official sources. Avoid torrents, third-party websites or links shared in forums and chats — even if they promise rare skins or free bonuses.
- Be skeptical of giveaways. If a website or message offers something too good to be true (like free currency or legendary gear), it probably is — especially if it asks for your login credentials.
- Use strong, unique passwords for every gaming and email account. A password manager, such as one from Kaspersky, can help generate and store them securely.
- Enable two-factor authentication (2FA) wherever possible — especially on platforms like Steam, Epic Games and Discord.
- Check URLs carefully. Phishing sites often look almost identical to the real ones but use slight misspellings or fake subdomains (see the sketch after this article).
- Don't share accounts or login details, even with friends. Shared access often leads to unintentional exposure or theft.
- Use a reliable security solution, like Kaspersky Premium, to detect malicious attachments that could compromise your data.

About Kaspersky
Kaspersky is a global cybersecurity and digital privacy company founded in 1997.
With over a billion devices protected to date from emerging cyberthreats and targeted attacks, Kaspersky's deep threat intelligence and security expertise is constantly transforming into innovative solutions and services to protect individuals, businesses, critical infrastructure, and governments around the globe. The company's comprehensive security portfolio includes leading digital life protection for personal devices, specialized security products and services for companies, as well as Cyber Immune solutions to fight sophisticated and evolving digital threats. We help millions of individuals and over 200,000 corporate clients protect what matters most to them. Learn more at
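As an illustration of the 'check URLs carefully' tip above, here is a minimal sketch of lookalike-domain detection in Python. The allowlist, the 0.8 similarity threshold and the check_url helper are hypothetical choices for this example, not Kaspersky's detection logic; real security products rely on far richer signals.

    # A minimal sketch of the 'check URLs carefully' tip: compare a link's
    # registrable domain against a small allowlist and flag near-misses that
    # may be typosquats. The allowlist and threshold are illustrative
    # assumptions, not any vendor's actual detection logic.
    import difflib
    from urllib.parse import urlparse

    KNOWN_DOMAINS = {"steampowered.com", "epicgames.com", "minecraft.net"}  # hypothetical allowlist

    def check_url(url: str) -> str:
        host = (urlparse(url).hostname or "").lower()
        # Naively reduce "store.steampowered.com" to "steampowered.com".
        domain = ".".join(host.split(".")[-2:])
        if domain in KNOWN_DOMAINS:
            return f"{host}: matches a known domain"
        for known in KNOWN_DOMAINS:
            # Flag domains that are almost, but not quite, a trusted name.
            if difflib.SequenceMatcher(None, domain, known).ratio() > 0.8:
                return f"{host}: suspicious lookalike of {known}"
        return f"{host}: unknown domain, verify before entering credentials"

    print(check_url("https://store.steampowered.com/app/730"))  # matches a known domain
    print(check_url("https://steampovvered.com/login"))         # suspicious lookalike

A genuine phishing checker would also need to handle multi-label public suffixes (such as .co.uk), punycode homoglyphs and fake subdomains like steampowered.com.example.net, which this naive two-label rule does not.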


Khaleej Times
7 days ago
Artificial Intelligence in cybersecurity: savior or saboteur?
Artificial intelligence has rapidly emerged as both a cornerstone of innovation and a ticking time bomb in the realm of cybersecurity. Once viewed predominantly as a force for good, enabling smarter threat detection, automating incident responses, and predicting attacks before they happen, AI has now taken on a double-edged role. The very capabilities that make it invaluable to cybersecurity professionals are now being exploited by cybercriminals to launch faster, more convincing, and more damaging attacks.

From phishing emails indistinguishable from real business correspondence to deepfake videos that impersonate CEOs and public figures with chilling accuracy, AI is arming attackers with tools that were previously the stuff of science fiction. And as large language models (LLMs), generative AI, and deep learning evolve, the tactics used by bad actors are becoming more scalable, precise, and difficult to detect.

'The threat landscape is fundamentally shifting,' says Sergey Lozhkin, Head of the Global Research & Analysis Team for the Middle East, Türkiye, and Africa at Kaspersky. 'From the outset, cybercriminals began using large language models to craft highly convincing phishing emails. Poor grammar and awkward phrasing, once dead giveaways, are disappearing. Today's scams can perfectly mimic tone, structure, and professional language.'

But the misuse doesn't stop at email. Attackers are now using AI to create fake websites, generate deceptive images, and even produce deepfake audio and video to impersonate trusted figures. In some cases, these tactics have tricked victims into transferring large sums of money or divulging sensitive data.

According to Roland Daccache, Senior Manager – Sales Engineering at CrowdStrike MEA, AI is now being used across the entire attack chain. 'Generative models are fueling more convincing phishing lures, deepfake-based social engineering, and faster malware creation. For example, DPRK-nexus adversary Famous Chollima used genAI to create fake LinkedIn profiles and résumé content to infiltrate organisations as IT workers. In another case, attackers used AI-generated voice and video deepfakes to impersonate executives for high-value business email compromise (BEC) schemes.'

The cybercrime community is also openly discussing on dark web forums how to weaponize LLMs for writing exploits, shell commands, and malware scripts, further lowering the barrier to entry for would-be hackers. This democratisation of hacking tools means that even novice cybercriminals can now orchestrate sophisticated attacks with minimal effort.

Ronghui Gu, Co-Founder of CertiK, a leading blockchain cybersecurity firm, highlights how AI is empowering attackers to scale and personalize their strategies. 'AI-generated phishing that mirrors human tone, deepfake technology for social engineering, and adaptive tools that bypass detection are allowing even low-skill threat actors to act with precision. For advanced groups, AI brings greater automation and effectiveness.'

On the technical front, Janne Hirvimies, Chief Technology Officer of QuantumGate, notes a growing use of AI in reconnaissance and brute-force tactics. 'Threat actors use AI to automate phishing, conduct rapid data scraping, and craft malware that adapts in real time. Techniques like reinforcement learning are being explored for lateral movement and exploit optimisation, making attacks faster and more adaptive.'
Fortifying Cyber Defenses

To outsmart AI-enabled attackers, enterprises must embed AI not just as a support mechanism, but as a central system in their cybersecurity strategy. 'AI has been a core part of our operations for over two decades,' says Lozhkin. 'Without it, security operations center (SOC) analysts can be overwhelmed by alert fatigue and miss critical threats.'

Kaspersky's approach focuses on AI-powered alert triage and prioritisation through advanced machine learning, which filters noise and surfaces the most pressing threats. 'It's not just about automation — it's about augmentation,' Lozhkin explains. 'Our AI Technology Research Centre ensures we pair this power with human oversight. That combination of cutting-edge analytics and skilled professionals enables us to detect over 450,000 malicious objects every day.'

But the AI evolution doesn't stop at smarter alerts. According to Daccache, the next frontier is agentic AI — a system that can autonomously detect, analyze, and respond to threats in real time. 'Traditional automation tools can only go so far,' Daccache says. 'What's needed is AI that thinks and acts — what we call agentic capabilities. This transforms AI from a passive observer into a frontline responder.'

CrowdStrike's Charlotte AI, integrated within its Falcon platform, embodies this vision. It understands security telemetry in context, prioritises critical incidents, and initiates immediate countermeasures, reducing analyst workload and eliminating delays during high-stakes incidents. 'That's what gives defenders the speed and consistency needed to combat fast-moving, AI-enabled threats,' Daccache adds.

Gu believes AI's strength lies in its ability to analyze massive volumes of data and identify nuanced threat patterns that traditional tools overlook. 'AI-powered threat detection doesn't replace human decision-making — it amplifies it,' Gu explains. 'With intelligent triage and dynamic anomaly detection, AI reduces response time and makes threat detection more proactive.' He also stresses the importance of training AI models on real-world, diverse datasets to ensure adaptability. 'The threat landscape is not static. Your AI defenses shouldn't be either,' Gu adds.

At the core of any robust AI integration strategy lies data — lots of it. Hirvimies advocates for deploying machine learning models across SIEM (Security Information and Event Management) and SOAR (Security Orchestration, Automation, and Response) platforms. 'These systems can correlate real-time threat intelligence, behavioral anomalies, and system events to deliver faster, more precise responses,' he says. 'Especially when it comes to detecting novel or stealthy attack patterns, machine learning makes the difference between catching a threat and becoming a headline.'

Balancing Innovation with Integrity

While AI can supercharge threat detection, response times, and threat simulations, it also brings with it the potential for misuse, collateral damage, and the erosion of privacy. 'Ethical AI use demands transparency, clear boundaries, and responsible data handling,' says Lozhkin. 'Organisations must also ensure that employees are properly trained in the safe use of AI tools to avoid misuse or unintended exposure to threats.' He highlights Kaspersky's Automated Security Awareness Platform, which now includes dedicated sections on AI-assisted threats and responsible usage, reflecting the company's commitment to proactive education.
When AI is deployed in red teaming or simulated cyberattacks, the risk matrix expands. Gu warns that AI systems, if left unchecked, can make decisions devoid of human context, potentially leading to unintended and widespread consequences. 'Ethical AI governance, robust testing environments, and clearly defined boundaries are essential,' he says, underlining the delicate balance required to simulate threats without crossing into unethical territory.

Daccache emphasises the importance of a privacy-first, security-first approach. 'AI must be developed and operated with Privacy-by-Design and Secure-by-Design principles,' he explains. 'This extends to protecting the AI systems themselves — including their training data, operational logic, and outputs — from adversarial manipulation.' Daccache also points to the need for securing both AI-generated queries and outputs, especially in sensitive operations like red teaming. Without such safeguards, there's a real danger of data leakage or misuse. 'Transparency, accountability, and documentation of AI's capabilities and limitations are vital, not just to build trust, but to meet regulatory and ethical standards,' he adds.

Despite AI's growing autonomy, human oversight remains non-negotiable. 'While AI can accelerate simulations and threat detection, it must be guided by skilled professionals who can interpret its actions with context and responsibility,' says Daccache. This human-AI collaboration ensures that the tools remain aligned with organisational values and ethical norms.

Hirvimies rounds out the conversation with additional cautionary notes: 'Privacy violations, data misuse, bias in training datasets, and the misuse of offensive tools are pressing concerns. Transparent governance and strict ethical guidelines aren't optional, they're essential.'

Balancing the Equation

While AI promises speed, scale, and smarter defense mechanisms, experts caution that an over-reliance on these systems, especially when deployed without proper calibration and oversight, could expose organisations to new forms of risk. 'Absolutely, over-reliance on AI can backfire if systems are not properly calibrated or monitored,' says Lozhkin. 'Adversarial attacks, where threat actors feed manipulated data to mislead AI, are a growing concern. Additionally, AI can generate false positives, which can overwhelm security teams and lead to alert fatigue. To avoid this, companies should use a layered defence strategy, retrain models frequently, and maintain human oversight to validate AI-driven alerts and decisions.'

This warning resonates across the cybersecurity landscape. Daccache echoes the concern, emphasising the need for transparency and control. 'Over-relying on AI, especially when treated as a black box, carries real risks. Adversaries are already targeting AI systems — from poisoning training data to crafting inputs that exploit model blind spots,' he explains. 'Without the right guardrails, AI can produce false positives or inconsistent decisions that erode trust and delay response.'

Daccache stresses that AI must remain a tool that complements, not replaces, human decision-making. 'AI should be an extension of human judgement. That requires transparency, control, and context at every layer of deployment. High-quality data is essential, but so is ensuring outcomes are explainable, repeatable and operationally sound,' he says. 'Organisations should adopt AI systems that accelerate outcomes and are verifiable, auditable and secure by design.'
Gu adds that blind spots in AI models can lead to serious lapses. 'AI systems are not infallible,' he says. 'Over-reliance can lead to susceptibility to adversarial inputs or overwhelming volumes of false positives that strain human analysts. To mitigate this, organizations should adopt a human-in-the-loop approach, combine AI insights with contextual human judgment, and routinely stress-test models against adversarial tactics.' Gu also warns about the evolving tactics of bad actors. 'An AI provider might block certain prompts to prevent misuse, but attackers are constantly finding clever ways to circumvent these restrictions. This makes human intervention all the more important in companies' mitigation strategies.'

Governing the Double-Edged Sword

As AI continues to embed itself deeper into global digital infrastructure, the question of governance looms large: will we soon see regulations or international frameworks guiding how AI is used in both cyber defense and offense?

Lozhkin underscores the urgency of proactive regulation. 'Yes, there should definitely be an international framework. AI technologies offer incredible efficiency and progress, but like any innovation, they carry their fair share of risks,' he says. 'At Kaspersky, we believe new technologies should be embraced, not feared. The key is to fully understand their threats and build strong, proactive security solutions that address those risks while enabling safe and responsible innovation.'

For Daccache, the focus is not just on speculative regulation, but on instilling foundational principles in AI systems from the start. 'As AI becomes more embedded in cybersecurity and digital infrastructure, questions around governance, risk, and accountability are drawing increased attention,' he explains. 'Frameworks like the GDPR already mandate technology-neutral protections, meaning what matters most is how organizations manage risk, not whether AI is used.' Daccache emphasises that embedding Privacy-by-Design and Secure-by-Design into AI development is paramount. 'To support this approach, CrowdStrike offers AI Red Teaming Services, helping organisations proactively test and secure their AI systems against misuse and adversarial threats. It's one example of how we're enabling customers to adopt AI with confidence and a security-first mindset.'

On the other hand, Gu highlights how AI is not only transforming defensive mechanisms but is also fuelling new forms of offensive capabilities. 'As AI becomes integral to both defence and offense in cyberspace, regulatory frameworks will be necessary to establish norms, ensure transparency, and prevent misuse. We expect to see both national guidelines and international cooperation, similar to existing cybercrime treaties, emerge to govern AI applications, particularly in areas involving privacy, surveillance, and offensive capabilities.'

Echoing this sentiment, Hirvimies concludes by saying that developments are already underway. 'Yes. Regulations like the EU AI Act and global cyber norms are evolving to address dual-use AI,' he says. 'We can expect more international frameworks focused on responsible AI use in cyber defence, limits on offensive AI capabilities, and cross-border incident response cooperation. At QuantumGate, we've designed our products to support this shift and facilitate compliance with the country's cryptography regulations.'