
Significant Rise In Targeted Ransomware Activity
Kaspersky experts have reported a significant rise in targeted ransomware activity at GISEC Global 2025, with the number of active ransomware groups increasing by 35% between 2023 and 2024 – reaching 81 groups globally. Despite this surge, the number of infected victims dropped by 8% during the same period, reaching an estimated 4,300 victims worldwide. The UAE, South Africa, Saudi Arabia, and Turkiye emerged as the most frequently targeted countries in the region.
According to Kaspersky's research into the data leak sites of targeted ransomware groups, the number of ransomware groups rose for the second consecutive year, despite two major disruptions targeting LockBit and BlackCat in 2024 – indicating that such attacks remain highly lucrative for cybercriminals.
Targeted ransomware groups use techniques such as exploiting vulnerable internet-exposed services, social engineering, and leveraging initial access traded on the dark web to infiltrate victims. Growing evidence also suggests increased collaboration among these groups, including the exchange of malware and hacking tools to achieve their objectives.
His Excellency Dr. Mohamed Al Kuwaiti, Head of Cybersecurity for the UAE Government, affirmed: 'In light of the accelerating pace of cyberattacks globally, it has become imperative to adopt proactive policies that leverage artificial intelligence and advanced analytics to detect threats and respond to them effectively'. He emphasized the importance of GISEC Global 2025 at this critical time and its role in bringing together cybersecurity experts, specialists, and leaders to showcase and discuss evolving threats. The event serves as a vital platform for enhancing collaboration and developing innovative, forward-looking solutions to ensure a secure cyber environment that supports sustainable development and the digital economy.
Maher Yamout, Lead Security Researcher for the Middle East, Turkiye and Africa at Kaspersky, suggested measures institutions can take to protect themselves. He said: 'By identifying and securing your corporate network's entry points and understanding the tactics used by ransomware groups, companies can better protect their digital assets against targeted ransomware attacks. Failing to address both aspects significantly increases a company's vulnerability.'
To help organizations strengthen their defenses, Kaspersky recommends the following:
- Educate and train employees in cybersecurity, as human error is a common cause of breaches and can serve as an initial point of access for ransomware attacks.
- Use Kaspersky Threat Intelligence, which provides in-depth threat intelligence and real-time insights into the history, motivations, and operations of targeted ransomware groups. In addition, Kaspersky Digital Footprint Intelligence monitors external threats to companies' assets on the surface, deep, and dark web, strengthening defenses against credential leaks.
- Keep all devices and systems updated to prevent attackers from exploiting known vulnerabilities.
- Set up offline backups that intruders cannot misuse, and make sure you can access them quickly in an emergency.
- Deploy multi-layered, next-generation protection that detects ransomware at both the delivery and execution stages of an attack. Kaspersky Next combines exploit prevention, behavior-based detection, and a powerful remediation engine capable of rolling back malicious actions, and features built-in self-defense mechanisms to prevent tampering or removal by attackers.
Related Articles


The National
2 days ago
'Trumped' again: Taking stock of Tesla's market ups and downs
Tesla Motors' stock price is taking a beating again, this time because of the very high-profile squabble between chief executive Elon Musk and US President Donald Trump. Its 15 per cent decline on Thursday reflects the volatility that shadows the company's shares, which remain vulnerable to everything from market trends to short tweets, especially from Mr Musk. Now, with his increasingly bitter fight with Mr Trump, Mr Musk might find himself on the short end of the stick: once a trusted adviser, he has now fallen out of favour with his blitz of criticism over Mr Trump's "big, beautiful" budget bill. Mr Musk derided it as a "disgusting abomination". His gripes surely won't sit well with a "very disappointed" Mr Trump, who is notorious for getting back at his critics. Mr Musk curried favours during his time in the US administration, securing contracts and deals for his companies. Those favours are now likely up in the air. Mr Trump had already suggested that one way to save "billions and billions" is to "terminate" Mr Musk's government subsidies and contracts. It's a spectacular U-turn for the onetime allies; Mr Trump said he even bought a Tesla to show his support for Mr Musk.
Losing the White House's support would be "terrible for Tesla, which is being eaten alive in Europe and Asia by Chinese competition, and Elon Musk's irritating involvement in politics", said Ipek Ozkardeskaya, a senior analyst at Swissquote Bank. She pointed out that Mr Musk would need the President's support, especially for Tesla's self-driving cars and Robotaxis, which "need friendly legislation to thrive". "Legislation is Trump. The hype around Tesla is not looking good," she added. Tesla's shares were up nearly 5 per cent in premarket trading on Friday amid reports of a scheduled call between Mr Trump and Mr Musk to end the spat.
While Tesla's stock remains slightly above its level when Mr Trump won his second presidency in November – Mr Musk splashed $250 million to help ensure that – it's now uncertain how the Musk-Trump clash will affect its share price moving forward. Here are some of the biggest movements in Tesla's stock history.
July 24, 2024: Competition heat
Tesla's stock dove 12 per cent to $215.99 after its second-quarter financials disappointed, with revenue sliding 7 per cent. The EV maker began feeling the heat from intense competition, most notably from China, as BYD famously overtook it as the world's biggest EV maker in the fourth quarter of 2023 and, subsequently, for the entirety of 2024.
October 24, 2024: 22% blitz
After solid third-quarter financials that saw Mr Musk boldly projecting up to 30 per cent more sales in 2025, Tesla's stock rocketed nearly 22 per cent, putting investors at ease. This was the biggest single-day gain in more than a decade, which also added $150 billion to the company's market value.
November 11, 2024: Tesla gets 'Trumped'
Tesla gained nearly 9 per cent to $350 as investors expected the alliance between Mr Musk and the then president-elect Mr Trump to further boost its stock. The world's wealthiest person threw about $250 million into Mr Trump's campaign to help the latter recapture the White House earlier that month.
January 2, 2025: New Year's peeve
After a series of highs, Tesla came back down, starting the new year with a more than 6 per cent drop to $379.28 after deliveries posted their first decline in a decade. This was also the first time the stock went below the $400 level in nearly a month.
February 11, 2025: BYD strikes again
After the previous coups, BYD once again hit Tesla, this time as it partnered with fellow Chinese company DeepSeek – famous for putting a dent into the auras of OpenAI and Nvidia – to utilise artificial intelligence in autonomous vehicles. That caused Tesla's stock to shed 6.3 per cent to $328.50.
March 10 to April 9, 2025: Tariff see-saw
The beginning of the Trump tariff effect: on March 10, Tesla's stock slid more than 15 per cent to $222.15, amid concerns and uncertainty around Mr Trump's planned tariffs. It didn't last long, as the company's share price worked its way back up, peaking – for this period – at $288.14 on March 25, as Mr Trump signalled he might scale back some of the levies. Mr Trump unveiled his Liberation Day tariffs on April 2. By April 8, investors were raising concerns about how the company would cope with them: that combination pulled down Tesla's shares nearly 5 per cent to $221.86, its lowest since the March 10 slide. This time, it seemed like a blip: the following day, April 9, Tesla shares soared more than 22 per cent after Benchmark Company analyst Mickey Legg dismissed the sell-off as 'overblown'.
April 21, 2025: Dogged by Doge
Tesla shares gave up almost 6 per cent amid analyst fears of 'continuing brand erosion' stemming from Mr Musk's role in the Trump administration. Mr Musk and Tesla had already been feeling the backlash: consumers and the general public, particularly those incensed by his federal job and budget cutting, have protested outside Tesla stores and vandalised its EVs, in addition to Tesla owners "rebranding" their cars out of protest.
May 14, 2025: Tariff reprieve
Tesla gained more than 9 per cent to $347.68 from the close on May 12 – the day the US and China agreed to temporarily halt their tit-for-tat tariffs. The company's stock would then remain largely steady, until Mr Musk departed from his role in the US government – leading to the public squabble with Mr Trump.


Khaleej Times
2 days ago
Artificial Intelligence in cybersecurity: savior or saboteur?
Artificial intelligence has rapidly emerged as both a cornerstone of innovation and a ticking time bomb in the realm of cybersecurity. Once viewed predominantly as a force for good, enabling smarter threat detection, automating incident responses, and predicting attacks before they happen, AI has now taken on a double-edged role. The very capabilities that make it invaluable to cybersecurity professionals are now being exploited by cybercriminals to launch faster, more convincing, and more damaging attacks. From phishing emails indistinguishable from real business correspondence to deepfake videos that impersonate CEOs and public figures with chilling accuracy, AI is arming attackers with tools that were previously the stuff of science fiction. And as large language models (LLMs), generative AI, and deep learning evolve, the tactics used by bad actors are becoming more scalable, precise, and difficult to detect. 'The threat landscape is fundamentally shifting,' says Sergey Lozhkin, Head of the Global Research & Analysis Team for the Middle East, Türkiye, and Africa at Kaspersky. 'From the outset, cybercriminals began using large language models to craft highly convincing phishing emails. Poor grammar and awkward phrasing, once dead giveaways, are disappearing. Today's scams can perfectly mimic tone, structure, and professional language.' But the misuse doesn't stop at email. Attackers are now using AI to create fake websites, generate deceptive images, and even produce deepfake audio and video to impersonate trusted figures. In some cases, these tactics have tricked victims into transferring large sums of money or divulging sensitive data. According to Roland Daccache, Senior Manager – Sales Engineering at CrowdStrike MEA, AI is now being used across the entire attack chain. 'Generative models are fueling more convincing phishing lures, deepfake-based social engineering, and faster malware creation.
For example, DPRK-nexus adversary Famous Chollima used genAI to create fake LinkedIn profiles and résumé content to infiltrate organisations as IT workers. In another case, attackers used AI-generated voice and video deepfakes to impersonate executives for high-value business email compromise (BEC) schemes.' On dark web forums, the cybercrime community is also openly discussing how to weaponize LLMs for writing exploits, shell commands, and malware scripts, further lowering the barrier to entry for would-be hackers. This democratisation of hacking tools means that even novice cybercriminals can now orchestrate sophisticated attacks with minimal effort. Ronghui Gu, Co-Founder of CertiK, a leading blockchain cybersecurity firm, highlights how AI is empowering attackers to scale and personalize their strategies. 'AI-generated phishing that mirrors human tone, deepfake technology for social engineering, and adaptive tools that bypass detection are allowing even low-skill threat actors to act with precision. For advanced groups, AI brings greater automation and effectiveness.' On the technical front, Janne Hirvimies, Chief Technology Officer of QuantumGate, notes a growing use of AI in reconnaissance and brute-force tactics. 'Threat actors use AI to automate phishing, conduct rapid data scraping, and craft malware that adapts in real time. Techniques like reinforcement learning are being explored for lateral movement and exploit optimisation, making attacks faster and more adaptive.'
Fortifying Cyber Defenses
To outsmart AI-enabled attackers, enterprises must embed AI not just as a support mechanism, but as a central system in their cybersecurity strategy. 'AI has been a core part of our operations for over two decades,' says Lozhkin. 'Without it, security operations center (SOC) analysts can be overwhelmed by alert fatigue and miss critical threats.'
Kaspersky's approach focuses on AI-powered alert triage and prioritisation through advanced machine learning, which filters noise and surfaces the most pressing threats. 'It's not just about automation — it's about augmentation,' Lozhkin explains. 'Our AI Technology Research Centre ensures we pair this power with human oversight. That combination of cutting-edge analytics and skilled professionals enables us to detect over 450,000 malicious objects every day.' But the AI evolution doesn't stop at smarter alerts. According to Daccache, the next frontier is agentic AI — a system that can autonomously detect, analyze, and respond to threats in real time. 'Traditional automation tools can only go so far,' Daccache says. 'What's needed is AI that thinks and acts — what we call agentic capabilities. This transforms AI from a passive observer into a frontline responder.' CrowdStrike's Charlotte AI, integrated within its Falcon platform, embodies this vision. It understands security telemetry in context, prioritises critical incidents, and initiates immediate countermeasures, reducing analyst workload and eliminating delays during high-stakes incidents. 'That's what gives defenders the speed and consistency needed to combat fast-moving, AI-enabled threats,' Daccache adds. Gu believes AI's strength lies in its ability to analyze massive volumes of data and identify nuanced threat patterns that traditional tools overlook. 'AI-powered threat detection doesn't replace human decision-making — it amplifies it,' Gu explains. 'With intelligent triage and dynamic anomaly detection, AI reduces response time and makes threat detection more proactive.' He also stresses the importance of training AI models on real-world, diverse datasets to ensure adaptability. 'The threat landscape is not static. Your AI defenses shouldn't be either,' Gu adds. At the core of any robust AI integration strategy lies data — lots of it. 
Hirvimies advocates deploying machine learning models across SIEM (Security Information and Event Management) and SOAR (Security Orchestration, Automation, and Response) platforms. 'These systems can correlate real-time threat intelligence, behavioral anomalies, and system events to deliver faster, more precise responses,' he says. 'Especially when it comes to detecting novel or stealthy attack patterns, machine learning makes the difference between catching a threat and becoming a headline.'
Balancing Innovation with Integrity
While AI can supercharge threat detection, response times, and threat simulations, it also brings with it the potential for misuse, collateral damage, and the erosion of privacy. 'Ethical AI use demands transparency, clear boundaries, and responsible data handling,' says Lozhkin. 'Organisations must also ensure that employees are properly trained in the safe use of AI tools to avoid misuse or unintended exposure to threats.' He highlights Kaspersky's Automated Security Awareness Platform, which now includes dedicated sections on AI-assisted threats and responsible usage, reflecting the company's commitment to proactive education. When AI is deployed in red teaming or simulated cyberattacks, the risk matrix expands. Gu warns that AI systems, if left unchecked, can make decisions devoid of human context, potentially leading to unintended and widespread consequences. 'Ethical AI governance, robust testing environments, and clearly defined boundaries are essential,' he says, underlining the delicate balance required to simulate threats without crossing into unethical territory. Daccache emphasises the importance of a privacy-first, security-first approach. 'AI must be developed and operated with Privacy-by-Design and Secure-by-Design principles,' he explains. 'This extends to protecting the AI systems themselves, including their training data, operational logic, and outputs, from adversarial manipulation.'
Daccache also points to the need for securing both AI-generated queries and outputs, especially in sensitive operations like red teaming. Without such safeguards, there's a real danger of data leakage or misuse. 'Transparency, accountability, and documentation of AI's capabilities and limitations are vital, not just to build trust, but to meet regulatory and ethical standards,' he adds. Despite AI's growing autonomy, human oversight remains non-negotiable. 'While AI can accelerate simulations and threat detection, it must be guided by skilled professionals who can interpret its actions with context and responsibility,' says Daccache. This human-AI collaboration ensures that the tools remain aligned with organisational values and ethical norms. Hirvimies rounds out the conversation with additional cautionary notes: 'Privacy violations, data misuse, bias in training datasets, and the misuse of offensive tools are pressing concerns. Transparent governance and strict ethical guidelines aren't optional, they're essential.'
Balancing the Equation
While AI promises speed, scale, and smarter defense mechanisms, experts caution that an over-reliance on these systems, especially when deployed without proper calibration and oversight, could expose organisations to new forms of risk. 'Absolutely, over-reliance on AI can backfire if systems are not properly calibrated or monitored,' says Lozhkin. 'Adversarial attacks, where threat actors feed manipulated data to mislead AI, are a growing concern. Additionally, AI can generate false positives, which can overwhelm security teams and lead to alert fatigue. To avoid this, companies should use a layered defence strategy, retrain models frequently, and maintain human oversight to validate AI-driven alerts and decisions.' This warning resonates across the cybersecurity landscape. Daccache echoes the concern, emphasising the need for transparency and control.
'Over-relying on AI, especially when treated as a black box, carries real risks. Adversaries are already targeting AI systems, from poisoning training data to crafting inputs that exploit model blind spots,' he explains. 'Without the right guardrails, AI can produce false positives or inconsistent decisions that erode trust and delay response.' Daccache stresses that AI must remain a tool that complements, not replaces, human decision-making. 'AI should be an extension of human judgement. That requires transparency, control, and context at every layer of deployment. High-quality data is essential, but so is ensuring outcomes are explainable, repeatable and operationally sound,' he says. 'Organisations should adopt AI systems that accelerate outcomes and are verifiable, auditable and secure by design.' Gu adds that blind spots in AI models can lead to serious lapses. 'AI systems are not infallible,' he says. 'Over-reliance can lead to susceptibility to adversarial inputs or overwhelming volumes of false positives that strain human analysts. To mitigate this, organizations should adopt a human-in-the-loop approach, combine AI insights with contextual human judgment, and routinely stress-test models against adversarial tactics.' Gu also warns about the evolving tactics of bad actors. 'An AI provider might block certain prompts to prevent misuse, but attackers are constantly finding clever ways to circumvent these restrictions. This makes human intervention all the more important in companies' mitigation strategies.'
Governing the Double-Edged Sword
As AI continues to embed itself deeper into global digital infrastructure, the question of governance looms large: will we soon see regulations or international frameworks guiding how AI is used in both cyber defense and offense? Lozhkin underscores the urgency of proactive regulation. 'Yes, there should definitely be an international framework.
AI technologies offer incredible efficiency and progress, but like any innovation, they carry their fair share of risks,' he says. 'At Kaspersky, we believe new technologies should be embraced, not feared. The key is to fully understand their threats and build strong, proactive security solutions that address those risks while enabling safe and responsible innovation.' For Daccache, the focus is not just on speculative regulation, but on instilling foundational principles in AI systems from the start. 'As AI becomes more embedded in cybersecurity and digital infrastructure, questions around governance, risk, and accountability are drawing increased attention,' he explains. 'Frameworks like the GDPR already mandate technology-neutral protections, meaning what matters most is how organizations manage risk, not whether AI is used.' Daccache emphasises that embedding Privacy-by-Design and Secure-by-Design into AI development is paramount. 'To support this approach, CrowdStrike offers AI Red Teaming Services, helping organisations proactively test and secure their AI systems against misuse and adversarial threats. It's one example of how we're enabling customers to adopt AI with confidence and a security-first mindset.' Gu, for his part, highlights how AI is not only transforming defensive mechanisms but also fuelling new forms of offensive capability. 'As AI becomes integral to both defence and offense in cyberspace, regulatory frameworks will be necessary to establish norms, ensure transparency, and prevent misuse. We expect to see both national guidelines and international cooperation, similar to existing cybercrime treaties, emerge to govern AI applications, particularly in areas involving privacy, surveillance, and offensive capabilities.' Echoing this sentiment, Hirvimies notes that developments are already underway. 'Yes. Regulations like the EU AI Act and global cyber norms are evolving to address dual-use AI,' he says.
'We can expect more international frameworks focused on responsible AI use in cyber defence, limits on offensive AI capabilities, and cross-border incident response cooperation. At QuantumGate, we've designed our products to support this shift and facilitate compliance with the country's cryptography regulations.'


Zawya
3 days ago
Calls for Omanisation freeze counterproductive
While the relevant authorities are working to employ, train, and qualify Omanis for work in the various available economic sectors, and to raise the percentage of "Omanisation" among qualified personnel in required specialties, some countries are now attempting to distort this national and sovereign demand by proposing that "Omanisation" be frozen in certain companies established through foreign investment. For more than three weeks, numerous messages and appeals have been circulating on social media from citizens urging government officials not to accept any condition restricting the employment of national workers in these companies under bilateral trade agreements. Such a condition would double the number of foreign employees in commercial establishments operating in the Sultanate, increasing their control over decision-making at the expense of Omanis. It would also reduce qualification opportunities for Omanis in these institutions, and ultimately increase expatriates' annual remittances to their home countries, thereby reducing liquidity in the domestic market. Many people view this country's request to freeze the "Omanisation" policy in the free trade agreement as a form of guardianship over the Omani labour market. When a country seeks to permanently guarantee jobs for its own workers in vital sectors of another country, it sets a dangerous precedent that undermines the sovereignty of national decision-making. We know that foreign investment in any country seeks economic freedom, including in hiring its own workers, to reduce the final cost of any product or service. However, each country has its own laws, particularly regarding the employment of a certain percentage of national workers in these institutions, and Oman is no exception. That said, I do not believe that freezing Omanisation would create chaos in the Omani market, as some suggest.
However, there is a possibility that this could lead to diplomatic tensions in specific commercial areas, which could be avoided by clarifying the country's policies. The world has experienced problems arising from the presence of foreign workers in other countries over the past decades. In certain cases, the issue of national labour or economic policy was used as a means to strain relations or improve a particular domestic situation. In international relations, there are mechanisms for resolving such disputes, and countries work to settle them diplomatically to avoid escalation. We must view these matters objectively, because governments typically seek to protect their national interests, and disputes related to labour and economic policies are often resolved through dialogue and agreements. The volume of Oman's foreign trade with countries around the world is increasing annually, as is the quality of foreign investment projects. Oman imports numerous products and goods from around the world, and any demand to freeze the "Omanisation" policy would lead to a decline in demand from these countries, as well as in major joint projects with them. All of these projects are part of efforts to enhance economic cooperation between countries, especially since recent years have witnessed an increase in the volume of investments and joint projects between Oman and these countries. Ultimately, the presence of national labour alongside expatriate labour is a matter of sovereignty, and no country can propose a vision that excludes national labour from working in its own country.