10% of Nigerians affected by data breaches since 2004 — Report

Zawya, 14-05-2025

At least 10 out of every 100 Nigerians have fallen victim to data breaches since 2004, according to a new report by Surfshark, raising serious concerns about the country's long-standing vulnerability to cybercrimes.
The research is based on data gathered from 29,000 publicly available databases. Each unique breached email address is treated as a separate user account, and breaches often include additional personal data such as passwords, phone numbers, IP addresses, and postal codes.
Surfshark notes that countries with populations under 1 million were excluded from the study.
Findings of the report revealed that a staggering 23.2 million Nigerian user accounts have been compromised in the past two decades, an alarming figure in a country with an estimated population of over 230 million.
This includes 7.3 million unique email addresses and 13.1 million passwords.
'Cyberattacks remain persistent and growing threats globally, and Nigeria is no exception,' Surfshark stated in its analysis.
Despite a significant 85 percent drop in new data breaches in the first quarter of 2023 compared with the previous quarter, Nigeria still recorded over 110,000 breached accounts during the period, placing the country 34th worldwide in total breach volume.
'Even with the recent decline, the scale and depth of data breaches remain troubling,' it added.
According to the report, 56 percent of Nigerians affected by breaches are at the highest risk of identity theft, as attackers have historically gained access to their online accounts.
In 2023 alone, an estimated one Nigerian account was breached every five minutes, Surfshark noted.
The global picture also shows a dramatic shift: the number of breached accounts dropped 93 percent year-on-year—from nearly 94 million in Q1 2022 to just 6.3 million in Q1 2023.
Countries with the highest number of breached users include the United States (166 million), Russia (144.5 million), and India (42.4 million).
However, when adjusted for population, South Korea, Israel, and Slovenia reported the highest breach density, while South Sudan counted a mere 0.01 breached accounts per 1,000 residents.
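As a rough back-of-the-envelope check of the headline figure and the per-1,000 density metric, the Nigerian numbers quoted above can be plugged in directly. Both inputs are the approximate estimates cited in this article, and accounts are counted per unique breached email address, as noted earlier.

```python
# Rough sanity check using the approximate figures quoted in this article.
breached_accounts = 23_200_000   # Nigerian user accounts breached since 2004
population = 230_000_000         # estimated population cited above

share = breached_accounts / population                 # fraction of the population
per_thousand = 1_000 * breached_accounts / population  # breaches per 1,000 residents

print(f"Share of population: {share:.1%}")                           # ~10.1%
print(f"Breached accounts per 1,000 residents: {per_thousand:.0f}")  # ~101
```

The same ratio underlies the cross-country comparison above: dividing a country's breached-account count by its population in thousands gives its breaches per 1,000 residents.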
Copyright © 2022 Nigerian Tribune. Provided by SyndiGate Media Inc. (Syndigate.info).


Related Articles

Artificial Intelligence in cybersecurity: savior or saboteur?

Khaleej Times, 21 hours ago

Artificial intelligence has rapidly emerged as both a cornerstone of innovation and a ticking time bomb in the realm of cybersecurity. Once viewed predominantly as a force for good, enabling smarter threat detection, automating incident responses, and predicting attacks before they happen, AI has now taken on a double-edged role. The very capabilities that make it invaluable to cybersecurity professionals are now being exploited by cybercriminals to launch faster, more convincing, and more damaging attacks. From phishing emails indistinguishable from real business correspondence to deepfake videos that impersonate CEOs and public figures with chilling accuracy, AI is arming attackers with tools that were previously the stuff of science fiction. And as large language models (LLMs), generative AI, and deep learning evolve, the tactics used by bad actors are becoming more scalable, precise, and difficult to detect.

'The threat landscape is fundamentally shifting,' says Sergey Lozhkin, Head of the Global Research & Analysis Team for the Middle East, Türkiye, and Africa at Kaspersky. 'From the outset, cybercriminals began using large language models to craft highly convincing phishing emails. Poor grammar and awkward phrasing, once dead giveaways, are disappearing. Today's scams can perfectly mimic tone, structure, and professional language.'

But the misuse doesn't stop at email. Attackers are now using AI to create fake websites, generate deceptive images, and even produce deepfake audio and video to impersonate trusted figures. In some cases, these tactics have tricked victims into transferring large sums of money or divulging sensitive data.

According to Roland Daccache, Senior Manager – Sales Engineering at CrowdStrike MEA, AI is now being used across the entire attack chain. 'Generative models are fueling more convincing phishing lures, deepfake-based social engineering, and faster malware creation. For example, DPRK-nexus adversary Famous Chollima used genAI to create fake LinkedIn profiles and résumé content to infiltrate organisations as IT workers. In another case, attackers used AI-generated voice and video deepfakes to impersonate executives for high-value business email compromise (BEC) schemes.'

The cybercrime community is also openly discussing on dark web forums how to weaponize LLMs for writing exploits, shell commands, and malware scripts, further lowering the barrier of entry for would-be hackers. This democratisation of hacking tools means that even novice cybercriminals can now orchestrate sophisticated attacks with minimal effort.

Ronghui Gu, Co-Founder of CertiK, a leading blockchain cybersecurity firm, highlights how AI is empowering attackers to scale and personalize their strategies. 'AI-generated phishing that mirrors human tone, deepfake technology for social engineering, and adaptive tools that bypass detection are allowing even low-skill threat actors to act with precision. For advanced groups, AI brings greater automation and effectiveness.'

On the technical front, Janne Hirvimies, Chief Technology Officer of QuantumGate, notes a growing use of AI in reconnaissance and brute-force tactics. 'Threat actors use AI to automate phishing, conduct rapid data scraping, and craft malware that adapts in real time. Techniques like reinforcement learning are being explored for lateral movement and exploit optimisation, making attacks faster and more adaptive.'
Fortifying Cyber Defenses

To outsmart AI-enabled attackers, enterprises must embed AI not just as a support mechanism, but as a central system in their cybersecurity strategy. 'AI has been a core part of our operations for over two decades,' says Lozhkin. 'Without it, security operations center (SOC) analysts can be overwhelmed by alert fatigue and miss critical threats.'

Kaspersky's approach focuses on AI-powered alert triage and prioritisation through advanced machine learning, which filters noise and surfaces the most pressing threats. 'It's not just about automation — it's about augmentation,' Lozhkin explains. 'Our AI Technology Research Centre ensures we pair this power with human oversight. That combination of cutting-edge analytics and skilled professionals enables us to detect over 450,000 malicious objects every day.'

But the AI evolution doesn't stop at smarter alerts. According to Daccache, the next frontier is agentic AI — a system that can autonomously detect, analyze, and respond to threats in real time. 'Traditional automation tools can only go so far,' Daccache says. 'What's needed is AI that thinks and acts — what we call agentic capabilities. This transforms AI from a passive observer into a frontline responder.'

CrowdStrike's Charlotte AI, integrated within its Falcon platform, embodies this vision. It understands security telemetry in context, prioritises critical incidents, and initiates immediate countermeasures, reducing analyst workload and eliminating delays during high-stakes incidents. 'That's what gives defenders the speed and consistency needed to combat fast-moving, AI-enabled threats,' Daccache adds.

Gu believes AI's strength lies in its ability to analyze massive volumes of data and identify nuanced threat patterns that traditional tools overlook. 'AI-powered threat detection doesn't replace human decision-making — it amplifies it,' Gu explains. 'With intelligent triage and dynamic anomaly detection, AI reduces response time and makes threat detection more proactive.' He also stresses the importance of training AI models on real-world, diverse datasets to ensure adaptability. 'The threat landscape is not static. Your AI defenses shouldn't be either,' Gu adds.

At the core of any robust AI integration strategy lies data — lots of it. Hirvimies advocates deploying machine learning models across SIEM (Security Information and Event Management) and SOAR (Security Orchestration, Automation, and Response) platforms. 'These systems can correlate real-time threat intelligence, behavioral anomalies, and system events to deliver faster, more precise responses,' he says. 'Especially when it comes to detecting novel or stealthy attack patterns, machine learning makes the difference between catching a threat and becoming a headline.'

Balancing Innovation with Integrity

While AI can supercharge threat detection, response times, and threat simulations, it also brings with it the potential for misuse, collateral damage, and the erosion of privacy. 'Ethical AI use demands transparency, clear boundaries, and responsible data handling,' says Lozhkin. 'Organisations must also ensure that employees are properly trained in the safe use of AI tools to avoid misuse or unintended exposure to threats.' He highlights Kaspersky's Automated Security Awareness Platform, which now includes dedicated sections on AI-assisted threats and responsible usage, reflecting the company's commitment to proactive education.
When AI is deployed in red teaming or simulated cyberattacks, the risk matrix expands. Gu warns that AI systems, if left unchecked, can make decisions devoid of human context, potentially leading to unintended and widespread consequences. 'Ethical AI governance, robust testing environments, and clearly defined boundaries are essential,' he says, underlining the delicate balance required to simulate threats without crossing into unethical territory.

Daccache emphasises the importance of a privacy-first, security-first approach. 'AI must be developed and operated with Privacy-by-Design and Secure-by-Design principles,' he explains. 'This extends to protecting the AI systems themselves — including their training data, operational logic, and outputs — from adversarial manipulation.' Daccache also points to the need for securing both AI-generated queries and outputs, especially in sensitive operations like red teaming. Without such safeguards, there's a real danger of data leakage or misuse. 'Transparency, accountability, and documentation of AI's capabilities and limitations are vital, not just to build trust, but to meet regulatory and ethical standards,' he adds.

Despite AI's growing autonomy, human oversight remains non-negotiable. 'While AI can accelerate simulations and threat detection, it must be guided by skilled professionals who can interpret its actions with context and responsibility,' says Daccache. This human-AI collaboration ensures that the tools remain aligned with organisational values and ethical norms.

Hirvimies rounds out the conversation with additional cautionary notes: 'Privacy violations, data misuse, bias in training datasets, and the misuse of offensive tools are pressing concerns. Transparent governance and strict ethical guidelines aren't optional, they're essential.'

Balancing the Equation

While AI promises speed, scale, and smarter defense mechanisms, experts caution that an over-reliance on these systems, especially when deployed without proper calibration and oversight, could expose organisations to new forms of risk.

'Absolutely, over-reliance on AI can backfire if systems are not properly calibrated or monitored,' says Lozhkin. 'Adversarial attacks where threat actors feed manipulated data to mislead AI are a growing concern. Additionally, AI can generate false positives, which can overwhelm security teams and lead to alert fatigue. To avoid this, companies should use a layered defence strategy, retrain models frequently, and maintain human oversight to validate AI-driven alerts and decisions.'

This warning resonates across the cybersecurity landscape. Daccache echoes the concern, emphasising the need for transparency and control. 'Over-relying on AI, especially when treated as a black box, carries real risks. Adversaries are already targeting AI systems — from poisoning training data to crafting inputs that exploit model blind spots,' he explains. 'Without the right guardrails, AI can produce false positives or inconsistent decisions that erode trust and delay response.'

Daccache stresses that AI must remain a tool that complements, not replaces, human decision-making. 'AI should be an extension of human judgement. That requires transparency, control, and context at every layer of deployment. High-quality data is essential, but so is ensuring outcomes are explainable, repeatable and operationally sound,' he says. 'Organisations should adopt AI systems that accelerate outcomes and are verifiable, auditable and secure by design.'
Gu adds that blind spots in AI models can lead to serious lapses. 'AI systems are not infallible,' he says. 'Over-reliance can lead to susceptibility to adversarial inputs or overwhelming volumes of false positives that strain human analysts. To mitigate this, organizations should adopt a human-in-the-loop approach, combine AI insights with contextual human judgment, and routinely stress-test models against adversarial tactics.' Gu also warns about the evolving tactics of bad actors. 'An AI provider might block certain prompts to prevent misuse, but attackers are constantly finding clever ways to circumvent these restrictions. This makes human intervention all the more important in companies' mitigation strategies.'

Governing the Double-Edged Sword

As AI continues to embed itself deeper into global digital infrastructure, the question of governance looms large: will we soon see regulations or international frameworks guiding how AI is used in both cyber defense and offense?

Lozhkin underscores the urgency of proactive regulation. 'Yes, there should definitely be an international framework. AI technologies offer incredible efficiency and progress, but like any innovation, they carry their fair share of risks,' he says. 'At Kaspersky, we believe new technologies should be embraced, not feared. The key is to fully understand their threats and build strong, proactive security solutions that address those risks while enabling safe and responsible innovation.'

For Daccache, the focus is not just on speculative regulation, but on instilling foundational principles in AI systems from the start. 'As AI becomes more embedded in cybersecurity and digital infrastructure, questions around governance, risk, and accountability are drawing increased attention,' he explains. 'Frameworks like the GDPR already mandate technology-neutral protections, meaning what matters most is how organizations manage risk, not whether AI is used.'

Daccache emphasises that embedding Privacy-by-Design and Secure-by-Design into AI development is paramount. 'To support this approach, CrowdStrike offers AI Red Teaming Services, helping organisations proactively test and secure their AI systems against misuse and adversarial threats. It's one example of how we're enabling customers to adopt AI with confidence and a security-first mindset.'

On the other hand, Gu highlights how AI is not only transforming defensive mechanisms but is also fuelling new forms of offensive capabilities. 'As AI becomes integral to both defence and offense in cyberspace, regulatory frameworks will be necessary to establish norms, ensure transparency, and prevent misuse. We expect to see both national guidelines and international cooperation, similar to existing cybercrime treaties, emerge to govern AI applications, particularly in areas involving privacy, surveillance, and offensive capabilities.'

Echoing this sentiment, Hirvimies concludes by saying that developments are already underway. 'Yes. Regulations like the EU AI Act and global cyber norms are evolving to address dual-use AI,' he says. 'We can expect more international frameworks focused on responsible AI use in cyber defence, limits on offensive AI capabilities, and cross-border incident response cooperation. At QuantumGate, we've designed our products to support this shift and facilitate compliance with the country's cryptography regulations.'

Skills shortage remains a huge problem as Cisco reveals results of its Cybersecurity Readiness Index 2025

Tahawul Tech, 3 days ago

Cisco has published the findings of its Cybersecurity Readiness Index, and the pressing need for more skilled cybersecurity professionals has only grown as AI adoption continues to skyrocket. There was good news for the UAE, however, as the report revealed that 30% of organisations across the country had reached mature or progressive levels of readiness to withstand cybersecurity attacks. This represents an improvement from last year's Index; however, further efforts are required to address cybersecurity preparedness as hyperconnectivity and AI introduce new complexities for security practitioners.

AI is revolutionizing security and escalating threat levels, with 93% of organizations in the country having faced AI-related incidents last year. However, only 62% of respondents are confident their employees fully understand AI-related cybersecurity threats, and only 57% believe their teams fully grasp how malicious actors are using AI to execute sophisticated attacks. This awareness gap leaves organizations critically exposed.

AI is compounding an already challenging threat landscape. In the last year, over half of organisations (55%) suffered cyberattacks, hindered by complex security frameworks with siloed point solutions. The top three types of cybersecurity incidents were malware (76%), phishing attacks (59%), and data breaches by malicious actors (47%). Ransomware attacks were mentioned by 39% of respondents.

Fady Younes, Managing Director for Cybersecurity at Cisco Middle East, Africa, Türkiye, Romania and CIS, said: 'As AI reshapes our world, it brings an entirely new class of risks at an unprecedented scale, putting even more pressure on infrastructure and those who defend it.' He added: 'Our region's leadership in AI adoption is remarkable, paving the way for a dynamic future where innovative, AI-driven cybersecurity measures are critical for enhancing and protecting our digital landscape. Cisco is committed to supporting organizations in the region in enhancing their digital resilience by prioritizing AI solutions, streamlining security architecture, and addressing talent shortages. Today, preparedness is key to ensuring that businesses remain relevant and can thrive in the AI era.'

The Index evaluates companies' readiness across five pillars – Identity Intelligence, Network Resilience, Machine Trustworthiness, Cloud Reinforcement, and AI Fortification – and encompasses 31 solutions and capabilities. Based on a double-blind survey of 8,000 private sector security and business leaders in 30 global markets, including 202 in the UAE, respondents detailed their deployment stages for each solution. Companies were then categorized into four readiness stages: Beginner, Formative, Progressive, and Mature.

Findings

Cybersecurity preparedness in the UAE remains alarmingly low, especially as 75% of respondents anticipate business disruptions from cyber incidents within the next 12 to 24 months. Further:

• AI's Expanding Role in Cybersecurity: An impressive 96% of organizations use AI to understand threats better, 93% for threat detection, and 77% for recovery, underscoring AI's vital role in strengthening cybersecurity strategies.
• Generative AI (GenAI) Deployment Risks: GenAI tools are widely adopted, with 45% of employees using approved third-party tools. However, 20% have unrestricted access to public GenAI, and 54% of IT teams are unaware of employee interactions with GenAI, underscoring major oversight challenges.
• Shadow AI Concerns: 33% of organizations lack confidence in detecting unregulated AI deployments, or shadow AI, posing significant cybersecurity and data privacy risks.
• Unmanaged Device Vulnerability: Within hybrid work models, 88% of organizations face increased security risks as employees access networks from unmanaged devices. This is exacerbated by the use of unapproved GenAI tools.
• Investment Priorities Shift: While almost all (98%) organizations plan to upgrade their IT infrastructure in the next 12-24 months, only 9% allocate more than 20% of their IT budget to cybersecurity. This finding suggests an opportunity for enhanced investment in comprehensive defense strategies as the pace of threats continues to rise.
• Complex Security Postures: Over four in five (81%) organizations report that their complex security infrastructures, dominated by the deployment of more than 10 point security solutions, are hampering their ability to respond to threats swiftly and effectively.
• Talent Shortage Impedes Progress: A staggering 87% of respondents identify the shortage of skilled cybersecurity professionals as a major challenge, with 57% reporting more than 10 positions to fill.

To tackle today's cybersecurity challenges, organizations in the UAE must invest in AI-driven solutions, simplify security infrastructures, and enhance AI threat awareness. Prioritising AI for threat detection, response, and recovery is essential, as is addressing talent shortages and mitigating risks from unmanaged devices and shadow AI.

Threat Intel must adapt to disruptive adversarial GenAI

Tahawul Tech, 3 days ago

Bart Lenaerts, Senior Product Marketing Manager, Infoblox, explores how cyber adversaries are increasingly leveraging Generative AI (GenAI), especially Large Language Models (LLMs), to enhance their attacks through social engineering, deception, and code obfuscation.

Generative AI, particularly Large Language Models (LLMs), is driving a transformation in cybersecurity. Adversaries are attracted to GenAI because it lowers the barrier to creating deceptive content, which they use to enhance the efficacy of intrusion techniques such as social engineering and detection evasion. This article provides common examples of malicious GenAI usage, including deepfakes, chatbot automation and code obfuscation. More importantly, it also makes a case for early warning of threat activity and the use of predictive threat intelligence capable of disrupting actors before they execute their attacks.

Example 1: Deepfake scams using voice cloning

At the end of 2024, the FBI warned that criminals were using generative AI to commit fraud on a larger scale, making their schemes more believable. GenAI tools like voice cloning reduce the time and effort needed to deceive targets with trustworthy audio messages. Voice cloning tools can even correct human errors like foreign accents or vocabulary that might otherwise signal fraud. While creating synthetic content isn't illegal, it can facilitate crimes like fraud and extortion. Criminals use AI-generated text, images, audio, and videos to enhance social engineering, phishing, and financial fraud schemes. Especially worrying is the easy access cybercriminals have to these tools and the lack of security safeguards.

A recent Consumer Reports investigation[1] of six leading publicly available AI voice cloning tools discovered that five have bypassable safeguards, making it easy to clone a person's voice even without their consent. Voice cloning technology works by taking an audio sample of a person speaking and then extrapolating that person's voice into a synthetic audio file. Without safeguards in place, anyone who registers an account can simply upload audio of an individual speaking, such as from a TikTok or YouTube video, and have the service imitate them.

Voice cloning has been utilized by actors in various scenarios, including large-scale deepfake videos for cryptocurrency scams and the imitation of voices during individual phone calls. A recent example that garnered media attention is the so-called 'grandparent' scam[2], in which a family emergency scheme is used to persuade the victim to transfer funds.

Example 2: AI-powered chat boxes

Actors often pick their victims carefully, gathering insights on their interests before setting them up for scams. This initial research is used to craft the smishing message that draws the victim into a conversation. Personal notes like 'I read your last social post and wanted to become friends' or 'Can we talk for a moment?' are some examples our intel team discovered (step 1 in picture 2). While some of these messages may be extended with AI-modified pictures, what matters is that actors invite their victims to the next step: a conversation on Telegram or another actor-controlled medium, far away from security controls (step 2 in picture 2). Once the victim is on the new medium, the actor uses several tactics to continue the conversation, such as invites to local golf tournaments, Instagram follows or AI-generated images.
These AI bot-driven conversations go on for weeks and include additional steps, like asking for a thumbs-up on YouTube or even a social media repost. At this point, the actor is assessing their victims to see how they respond. Sooner or later, the actor will show some goodwill and create a fake account. Each time the victim reacts positively to the actor's request, the amount of currency in the fake account increases. Later, the actor may even request small amounts of investment money, promising an ROI of more than 25 percent. When the victim asks to collect their gains (step 3 in picture 2), the actor requests access to the victim's crypto account and exploits all the established trust. At this moment, the scamming comes to an end and the actor steals the crypto money in the account. While these conversations are time-intensive, they are rewarding for the scammer and can lead to tens of thousands of dollars in ill-gotten gains. By using AI-driven chat boxes, actors have found a productive way to automate the interactions and increase the efficiency of their efforts.

Infoblox Threat Intel tracks these scams to optimize threat intelligence production. Common characteristics found in malicious chat boxes include:

• AI grammar errors, such as an extra space after a period or references to foreign languages
• Using vocabulary that includes fraud-related terms
• Forgetting details from past conversations
• Repeating messages mechanically due to poorly trained AI chatbots (also known as parroting)
• Making illogical requests, like asking if you want to withdraw your funds at irrational moments in the conversation
• Using false press releases posted on malicious sites
• Opening conversations with commonly used phrases to lure the victim
• Using specific cryptocurrency types often seen in criminal communities

Combinations of these fingerprints allow threat intel researchers to observe emerging campaigns and trace the actors and their malicious infrastructure.

Example 3: Code obfuscation and evasion

Threat actors are using GenAI not only for creating human-readable content. Several news outlets have explored how GenAI assists actors in obfuscating their malicious code. Earlier this year, Infosecurity Magazine[3] published details of how threat researchers at HP Wolf discovered social engineering campaigns spreading VIP Keylogger and 0bj3ctivityStealer malware, both of which involved malicious code being embedded in image files. To improve the efficiency of their campaigns, actors are repurposing and stitching together existing malware via GenAI to evade detection. This approach also helps them set up threat campaigns faster and reduces the skills needed to construct infection chains. HP Wolf's threat research estimates an 11% increase in evasion for email threats, while other security vendors such as Palo Alto Networks estimate[4] that GenAI flipped their own malware classifier model's verdicts into false negatives 88% of the time. Threat actors are clearly making progress in their AI-driven evasion efforts.

Making the case for modernising threat research

As AI-driven attacks pose plenty of detection evasion challenges, defenders need to look beyond traditional tools like sandboxing or indicators derived from incident forensics to produce effective threat intelligence. One such opportunity lies in tracking pre-attack activities instead of sending the last suspicious payload to a slow sandbox.
Just like a standard software development lifecycle, threat actors go through multiple stages before launching attacks. First, they develop or generate new variants of the malicious code using GenAI. Next, they set up infrastructure like email delivery networks or hard-to-trace traffic distribution systems. Often this happens in combination with domain registrations or, worse, the hijacking of existing domains. Finally, the attacks go into 'production', meaning the domains become weaponised and ready to deliver malicious payloads. This is the stage where traditional security tools attempt to detect and stop threats, because it involves easily accessible endpoints or network egress points within the customer's environment. Because of evasion and deception by GenAI tools, this point of detection may no longer be effective, as the actors continuously alter their payloads or mimic trustworthy sources.

The Value of Predictive Intelligence Based on DNS Telemetry

To stay ahead of these evolving threats, organisations should consider leveraging predictive intelligence derived from DNS telemetry. DNS data plays a crucial role in identifying malicious actors and their infrastructure before attacks even occur. Unlike payloads that can be altered or disguised using GenAI, DNS data is inherently transparent across multiple stakeholders, such as domain owners, registrars, domain servers, clients, and destinations, and must be 100% accurate to ensure proper connectivity. This makes DNS an ideal source for threat research, as its integrity makes it less susceptible to manipulation.

DNS analytics also provides another significant advantage: domains and malicious DNS infrastructures are often configured well in advance of an attack or campaign. By monitoring new domain registrations and DNS records, organisations can track the development of malicious infrastructure and gain insights into the early stages of attack planning. This approach enables the identification of threats before they are activated (a short illustrative sketch appears below).

Conclusion

The evolving landscape of AI and its impact on security is significant. With the right approaches and strategies, such as predictive intelligence derived from DNS, organizations can truly get ahead of GenAI risks and ensure that they don't become patient zero.

[1] [2] [3] [4]

Image Credit: Infoblox
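As a purely illustrative sketch of the domain-monitoring idea described above, the snippet below flags domains seen in DNS or resolver logs whose registration date is recent. It is not Infoblox's implementation; it assumes the third-party python-whois package, and the placeholder domain list and 30-day threshold are arbitrary choices.

```python
# Illustrative only: flag recently registered domains observed in DNS logs.
# Assumes the third-party "python-whois" package (pip install python-whois);
# a sketch of the general idea, not Infoblox's product or methodology.
from datetime import datetime, timezone

import whois


def domain_age_days(domain):
    """Return the domain's age in days, or None if WHOIS data is unavailable."""
    try:
        record = whois.whois(domain)
    except Exception:
        return None
    created = record.creation_date
    if isinstance(created, list):   # some registrars return several dates
        created = min(created)
    if created is None:
        return None
    if created.tzinfo is None:      # normalise naive timestamps to UTC
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).total_seconds() / 86400


def flag_new_domains(domains, max_age_days=30):
    """Return (domain, age_in_days) pairs for domains registered recently."""
    flagged = []
    for name in domains:
        age = domain_age_days(name)
        if age is not None and age <= max_age_days:
            flagged.append((name, round(age, 1)))
    return flagged


if __name__ == "__main__":
    # In practice the candidate list would come from passive DNS or resolver
    # query logs; these are placeholder domains.
    print(flag_new_domains(["example.com", "example.org"]))
```

In practice such an age check would more likely run against bulk registration or passive DNS feeds rather than per-domain WHOIS lookups, which are slow and rate-limited, but the underlying signal, newly registered infrastructure showing up in DNS telemetry, is the one the article describes.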
