
91% of Security Leaders Admit to Cloud Security Trade-Offs
Gigamon, a leader in deep observability, recently released its 2025 Hybrid Cloud Security Survey, revealing that hybrid cloud infrastructure is under mounting strain from the growing influence of artificial intelligence (AI).
The annual study, now in its third year, surveyed more than 1,000 Security and IT leaders across the globe. As cyberthreats increase in both scale and sophistication, breach rates have surged to 55 percent during the past year, a 17 percent year-on-year (YoY) rise, with AI-generated attacks emerging as a key driver of this growth.
Security and IT teams are being pushed to breaking point, with the economic cost of cybercrime now estimated at $3 trillion worldwide, according to the World Economic Forum. As AI-enabled adversaries grow more agile, organisations are hampered by ineffective and inefficient tools, fragmented cloud environments, and limited intelligence.
Key Findings Highlight How AI Is Reshaping Hybrid Cloud Security Priorities
AI's role in escalating network complexity and accelerating risk is evident. The study reveals that 46 percent of Security and IT leaders say managing AI-generated threats is now their top security priority. One in three organisations report that network data volumes have more than doubled in the past two years due to AI workloads, while nearly half of all respondents (47 percent) are seeing a rise in attacks targeting their organisation's large language model (LLM) deployments. More than half (58 percent) say they have seen a surge in AI-powered ransomware, up from 41 percent in 2024, underscoring how adversaries are exploiting AI to outpace and outflank existing defences.
Compromises highlight continued trade-offs in foundational areas of hybrid cloud security. Nine out of ten (91 percent) Security and IT leaders concede to making compromises in securing and managing their hybrid cloud infrastructure. The key challenges behind these compromises include a lack of clean, high-quality data to support secure AI workload deployment (46 percent) and a lack of comprehensive insight and visibility across their environments, including lateral movement in East-West traffic (47 percent).
Public cloud risks prompt industry recalibration. Once considered an acceptable risk in the rush to scale post-COVID operations, the public cloud is now coming under increasingly intense scrutiny. Many organisations are rethinking their cloud strategies in the face of growing exposure, with 70 percent of Security and IT leaders now viewing the public cloud as a greater risk than any other environment. As a result, 70 percent report their organisation is actively considering repatriating data from public to private cloud due to security concerns, and 54 percent are reluctant to use AI in public cloud environments, citing fears around intellectual property protection.
Visibility is top of mind for Security leaders. As cyberattacks become more sophisticated, the limitations of existing security tools are coming sharply into focus. Organisations are shifting their priorities toward gaining complete visibility into their environments, a capability now seen as crucial for effective threat detection and response. More than half (55 percent) of respondents lack confidence in their current tools' ability to detect breaches, citing limited visibility as the core issue. As a result, 64 percent say their number one focus for the next 12 months is achieving real-time threat monitoring, delivered through complete visibility into all data in motion.
Deep Observability Becomes the New Standard
With AI driving unprecedented traffic volumes, risk, and complexity, nearly nine in ten (89 percent) Security and IT leaders cite deep observability as fundamental to securing and managing hybrid cloud infrastructure. Executive leadership is taking notice, as boards increasingly prioritise complete visibility into all data in motion, with 83 percent confirming that deep observability is now being discussed at the board level to better protect hybrid cloud environments.
'Security teams are struggling to keep pace with the speed of AI adoption and the growing complexity and vulnerability of public cloud environments', said Mark Jow, technical evangelist, EMEA, at Gigamon. 'Deep observability addresses this challenge by combining MELT data with network-derived telemetry such as packets, flows, and metadata, delivering increased visibility and a more informed view of risk. It enables teams to eliminate visibility gaps, regain control, and act proactively with increased confidence. With 88 percent of Security and IT leaders agreeing it is critical to securing AI deployments, deep observability is fast becoming a strategic imperative'.
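To make the idea in that quote concrete, the following is a minimal, hypothetical Python sketch of one way network-derived telemetry (flow records) can be correlated with log-derived context to flag unexpected East-West (lateral) traffic. The data structures, field names, and rule logic are illustrative assumptions for this article, not Gigamon's implementation or API.

    # Hypothetical sketch: join network flow records with an expected-service map
    # (derived from logs/inventory) to surface unexpected East-West traffic.
    from dataclasses import dataclass

    @dataclass
    class FlowRecord:
        src: str        # source workload IP
        dst: str        # destination workload IP
        dst_port: int
        bytes_sent: int

    # Example telemetry; in practice this would come from taps or collectors.
    flows = [
        FlowRecord("10.0.1.5", "10.0.2.9", 5432, 120_000),    # app -> database (expected)
        FlowRecord("10.0.1.5", "10.0.3.7", 445, 8_000_000),   # app -> file share (unexpected)
    ]

    # Log/inventory-derived context: which internal peers each workload should reach.
    expected_peers = {
        "10.0.1.5": {("10.0.2.9", 5432)},
    }

    def flag_lateral_movement(flows, expected_peers):
        """Return flows between internal hosts that fall outside the expected service map."""
        return [f for f in flows
                if (f.dst, f.dst_port) not in expected_peers.get(f.src, set())]

    for alert in flag_lateral_movement(flows, expected_peers):
        print(f"Unexpected East-West flow: {alert.src} -> {alert.dst}:{alert.dst_port} "
              f"({alert.bytes_sent} bytes)")

The design point is simply that network-derived evidence provides ground truth about what actually traversed the environment, which log-only (MELT) telemetry can miss.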
'With nearly half of organisations saying attackers are already targeting their large language models, AI security can't be an afterthought, it needs to be a top priority', said Mark Walmsley, CISO at Freshfields. 'The key to staying ahead? Visibility. When we can clearly see what's happening across AI systems and data flows, we can cut through the noise and manage risk more effectively. Deep observability helps us spot vulnerabilities early and put the right protections in place before issues arise'.
Image Credit: Gigamon