
Gigamon welcomes Damian Wilk to lead EMEA emerging markets as regional demand for deep observability builds
Dubai, United Arab Emirates – Gigamon, a leading deep observability company, welcomes Damian Wilk as general manager for EMEA Emerging Markets as it continues to accelerate growth across the region.
Organizations across EMEA Emerging Markets face an increasingly complex, AI-fueled threat landscape that has left many vulnerable. As a result, deep observability has become mission-critical for securing and managing today's hybrid cloud infrastructure. In the Gigamon 2025 Hybrid Cloud Security Survey, 89 percent of security and IT leaders agreed that deep observability is a foundational element of cloud security. The Gigamon Deep Observability Pipeline helps organizations secure and manage hybrid cloud infrastructure by efficiently delivering network-derived telemetry directly to cloud, security, and traditional observability tools, helping to eliminate blind spots, optimize network traffic, and increase the efficiency of existing tools by up to 90 percent.
New Leadership Deepens Regional Cybersecurity Expertise
Based in Dubai, Damian Wilk will lead Gigamon's expansion efforts across the Middle East, Africa, and Southern Europe, advancing the company's ability to help regional customers gain deep observability across their hybrid and multi-cloud infrastructure. Wilk will focus on accelerating customer growth, strengthening the channel ecosystem, and building momentum around the Gigamon Deep Observability Pipeline.
This leadership addition reflects a focused effort to scale enterprise customer engagement in complex, high-opportunity markets through refined go-to-market strategies and strong regional partnerships. The deep observability market grew 17 percent year-over-year in 2024, underscoring increasing demand for solutions that offer advanced visibility and security across hybrid cloud environments.
'Damian is a dynamic sales leader with a deep understanding of customer needs and an unwavering commitment to driving results,' said Mark Coates, vice president, EMEA at Gigamon. 'His appointment underscores our strategic focus on EMEA's Emerging Markets and highlights our dedication to delivering sustained growth and enhanced value to our customers and partners across these critical regions.'
'As organizations across EMEA's Emerging Markets navigate an increasingly complex threat landscape, Gigamon is uniquely positioned to help them gain complete visibility and insights across all data in motion in their hybrid cloud infrastructure,' said Wilk. 'We are committed to delivering powerful, customer-centric solutions that drive meaningful outcomes, and with the Gigamon Deep Observability Pipeline, that's exactly what we're delivering.'
Wilk brings over 20 years of enterprise sales leadership experience across the UK and the wider EMEA region. He has held senior roles at Rubrik, Veritas Technologies, Good Technology, and Cisco, where he led regional sales teams and strategic customer initiatives in cybersecurity and data management.
About Gigamon
Gigamon® offers a deep observability pipeline that efficiently delivers network-derived telemetry to cloud, security, and observability tools. This helps eliminate security blind spots and reduce tool costs, enabling you to better secure and manage your hybrid cloud infrastructure. Gigamon serves more than 4,000 customers worldwide, including over 80 percent of Fortune 100 enterprises, 9 of the 10 largest mobile network providers, and hundreds of governments and educational organizations. To learn more, please visit gigamon.com.
© 2025 Gigamon. All rights reserved. Gigamon and the Gigamon logo are trademarks of Gigamon in the United States and/or other countries. Gigamon trademarks can be found at www.gigamon.com/legal-trademarks. All other trademarks are the property of their respective owners.
For more information, please contact: Gigamon@activedmc.com