logo
#

Latest news with #BusinessEmailCompromise

Introducing a New Age of Digital Security and Communication
Introducing a New Age of Digital Security and Communication

The Market Online

time14-05-2025

  • Business
  • The Market Online

Introducing a New Age of Digital Security and Communication

Sekur Private Data Ltd, a company rewriting the rules on secure communications in a world where privacy has become a premium asset. As big tech platforms continue to mine, monitor and monetize user data, Sekur offers a clean break with fully Swiss hosted proprietary solutions that put users not algorithms back in control. We caught up with Alain Ghiai, Founder and CEO of Sekur. He's been one of the earliest and loudest voices warning about the cracks in traditional cloud models. And he is building a business around the idea that real privacy isn't just a feature, it's a foundation. Now, today we'll dig into why Sekur's no big tech, no AI, no nonsense approach is gaining serious ground, especially in the US market. LYNDSAY: Alain a lot of companies, I mean, they say that they offer privacy, but if the person isn't using the same platform, well then that's game over really. So how does Sekur protect users if the other person isn't using the same Platform? ALAIN: That's a great question. So we offer three solutions so far our VPN, Swiss hosted proprietary, our email and our Messenger. So one of the biggest issues right now is called Business Email Compromise (BEC), and what happens is when you send an email to someone whichever of the two duopoly emails they use, hackers can intercept that impersonate you or your recipient. Sometime after a few months, they're going to trigger their attack. A common thing of BEC attack would be a wiring information that has been changed, contracts, et cetera. What we do with our Sekur send, we're able to send an email to anybody outside of Sekur that doesn't have it. They receive an email, they click on a link, you can password protected, read limit or time limit or do nothing, and then immediately they, in our Swiss server, the key that we do is that we never leave the Swiss highly encrypted server environment. So we're sending signals outside and within Sekur in order for everybody to log in. It's like a meeting place so to speak. And then we communicate within it. We do the same thing with messaging. We also don't record your phone number. So if you sign up for Sekur and you do your secure messenger and you have your app, you'll notice that other apps will need your phone number, and that's how they data mine you and your contacts, we don't. So we have a vetting process that's pretty easy to follow, but extremely effective against hackers, sim swapping, things like that. We basically are able to invite anyone via text or email to click on a link immediately it opens a tunnelling portal to the server, and you and I can chat. I could be in New York, you could be in Tokyo, and the whole thing happens in Switzerland, which is kind of interesting. So we use our proprietary tech with a Helix technology to log into our servers. That way there's nothing floating over the net. And that's what makes it attractive for businesses because businesses have clients that use the typical apps that we're not going to name here, that have been compromised on a daily basis, and now they can communicate with a client without compromising themselves or the client data. LYNDSAY: That's a lot of information right there out of the gate and it's so useful as well. Now you are eyeing a massive US market where trust in big tech is, you know, it's cratering. So tell us what gives secure the edge to through in a space crowded with privacy washing players. ALAIN: So one of the thing is we were the first privacy enthusiast, the first privacy application that offers a gamut of solution. We have our own infrastructure. We don't use big tech because that way we can keep the Swiss data privacy laws. So we have a gamut of solutions. We started two, three years ago to really push this. We spend a lot of money into R&D and marketing, and now we have a name for ourselves. It is Swiss. I mean Swiss is synonymous with privacy and we have our features. So we're able to make a dent like that. And we have key partnerships that we have signed on and others that we are bringing on board at very high level of corporate and government in the US. LYNDSAY: So let's actually lean into that Swiss advantage just a little bit more. So why does the Swiss hosted matter so much right now? And basically how much of a moat does this create secure against US-based competitors? ALAIN: Well, first of all, if you are based in Switzerland, and if you use a US cloud solution such as AWS, Microsoft or Google, you are still subject to what they call the Cloud act. That means that as long as you use a US infra, you're subject to that law under subpoenas, even if you're in Switzerland or you could be in Canada or in Germany and have your own data privacy laws and residency laws. We use our own infrastructure that's housed only in Switzerland because we don't touch the cloud system. The US one, we're able to comply a hundred percent with the FADP, the Swiss law. That's already something that most companies won't do because today, LYNDSAY, most investors are investing in data mining and big data. Nobody's interested to get a customer for 20, 50 bucks a month when you can make a few thousand dollars a year per user on their data. So if I'm a young entrepreneur and I go to you for millions of dollars to build my app and my system, the first thing you're going to do is say, we're going to hook up on AWS, we're going to try to monitor that data, data mine, and do a big data system. LYNDSAY: Big Tech is basically the landlord for half the so-called secure apps out there, like you've mentioned. So how big of a differentiator is it that Sekur owns an entire infrastructure? Like for example, the Signal scandal. Let's talk about that a little bit. ALAIN: Well, we have four things that are distinctive from others. First, we're hosted in Switzerland only. We don't use open source coding. That's a thing that most companies use, 95% of them. That's where most of the hacks happen as well. We have our own equipment, our own proprietary machine. And we also don't put AI into the communication tool. That's a huge thing because today, I mean, AI is everywhere. You can't go on a conference call without this little AI thing next to you. You can't send an email on one of these famous two services that I can't name that doesn't have AI in it. So AI is basically a data siphon system. So what happened at Signal is this, it's either it was intentional, somebody went in there or it was inadvertent, somebody was added. We're not here to make a judgment. What we are here to say is that with our a secure messenger, we would've eliminated both scenarios. The very fact that you don't even have a phone number and you communicate outside of the typical telecom system renders you invisible. This is our mission, is to render people invisible and protect themselves from hackers and other intrusions. And that's why we're launching our enterprise and premium VIP solution that will go to diplomats, it'll go to C-level executive, high net worth, government officials, and others because they have physical security, but they still communicate on these apps or that email that's compromised on a daily basis. LYNDSAY: You mentioned, everyone is slapping AI power onto every product nowadays. I feel like, you know, when we go in a store, there's AI, when we go online, everything has AI nowadays and you're going the opposite way. No AI, no data mining. So why is secure betting against the AI rush so much? I mean, how does that resonate with the customer base? ALAIN: Well, our customers love it because we have always gone against the trend in terms of intrusion. So this is the next thing is not to put AI in our communication tools. If you need to look and research something and AI helps you, let's say for customer service, I think that's fantastic. But AI shouldn't be into your system of communication because you don't know where that data goes. Well, we know it goes to Google, Microsoft or Amazon, and at the end of the day, AI is anti privacy. There was an article not long ago two and a half billion Gmails were hacked with AI. And somebody is asking me, well, how come your system is better? Because we're off grid. We have never been part of the system, we have never used open source coding and we have never hosted on the main platforms. So if you're a completely off grid and invisible AI doesn't want to bother with you, they're going to go with the systems that are easy. So every Google search, every email, every Microsoft this, every browser. So we're actually, if I may extend our vision here a little bit. We're in the middle of a fundraising as well, and one of the thing that we're going to complete is our voice and video encryption where it would be about, I think by the end of the year where you can call someone without dialing their number, you'll be able to go on a video conferencing tool without having AI siphoning the data. And in 2026, our goal is to build our privacy browser, which will also protect you from clicking on these malware and fake links because AI is going to sophisticated itself even more. So that's kind of the next step where we can complete the communication circle and protect everyone from browsing the wrong thing and other things as we just discussed. LYNDSAY: I was going to ask you too what should investors be looking out for in the coming quarters? Is there anything else you'd like to mention? ALAIN: Yeah, so we're basically going to launch our enterprise solution this quarter by the end of June. We're also planning some international partnerships as well. Once we close our funding, we are planning to develop that voice and video encryption, more premium solutions. So we're launching our regular SMB marketing or we're going to go to that premium market where there's literally zero competition. And I mean, even somebody, a regular small business doesn't want to be hacked. So if you're going to offer them something for $20 a month they'll take it if they can help against BEC attack. But the premium solutions are where the big opportunities are because you have C-level executives, targeted people, VIP, the jet setting crowd or government officials. They're the most targeted people and they all use, as we have seen with the Signal issue, they all use these solutions. We imagine that they have very sophisticated tools, but they don't. So we are here to offer that. And those ranges will be about a thousand to $1,500 a year per, per license, which is very cheap when you think about it for a board member and executive. So in the next 12 to 18 month, we're developing the solution. We are also targeting profitability, which is great as a public company. So, you know, watch us and follow our journey. LYNDSAY: Again, that was Alain Ghiai, CEO of Sekur. Now you can learn more about them on their website at and you can find them on the CSE under the ticker symbol SKUR. Join the discussion: Find out what everybody's saying about this stock on the Sekur Private Data investor discussion forum, and check out the rest of Stockhouse's stock forums and message boards. The material provided in this article is for information only and should not be treated as investment advice. For full disclaimer information, please click here

Artificial Intelligence (AI) and AI-agents: A Game-Changer for Both Cybersecurity and Cybercrime (By Anna Collard)
Artificial Intelligence (AI) and AI-agents: A Game-Changer for Both Cybersecurity and Cybercrime (By Anna Collard)

Zawya

time03-03-2025

  • Business
  • Zawya

Artificial Intelligence (AI) and AI-agents: A Game-Changer for Both Cybersecurity and Cybercrime (By Anna Collard)

By Anna Collard, SVP Content Strategy&Evangelist KnowBe4 Africa ( Artificial Intelligence is no longer just a tool—it is a gamechanger in our lives, our work as well as in both cybersecurity and cybercrime. While businesses leverage AI to enhance defences, cybercriminals are weaponising AI to make these attacks more scalable and convincing​. In 2025, researchers forecast that AI agents, or autonomous AI-driven systems capable of performing complex tasks with minimal human input, are revolutionising both cyberattacks and cybersecurity defences. While AI-powered chatbots have been around for a while, AI agents go beyond simple assistants, functioning as self-learning digital operatives that plan, execute, and adapt in real time. These advancements don't just enhance cybercriminal tactics—they may fundamentally change the cybersecurity battlefield. How Cybercriminals Are Weaponising AI: The New Threat Landscape AI is transforming cybercrime, making attacks more scalable, efficient, and accessible. The WEF Artificial Intelligence and Cybersecurity Report (2025) ( highlights how AI has democratised cyber threats, enabling attackers to automate social engineering, expand phishing campaigns, and develop AI-driven malware​. Similarly, the Orange Cyberdefense Security Navigator 2025 ( warns of AI-powered cyber extortion, deepfake fraud, and adversarial AI techniques. And the 2025 State of Malware Report by Malwarebytes ( notes, while GenAI has enhanced cybercrime efficiency, it hasn't yet introduced entirely new attack methods—attackers still rely on phishing, social engineering, and cyber extortion, now amplified by AI. However, this is set to change with the rise of AI agents—autonomous AI systems capable of planning, acting, and executing complex tasks—posing major implications for the future of cybercrime. Here is a list of common (ab)use cases of AI by cybercriminals: AI-Generated Phishing&Social Engineering Generative AI and large language models (LLMs) enable cybercriminals to craft more believable and sophisticated phishing emails in multiple languages—without the usual red flags like poor grammar or spelling mistakes. AI-driven spear phishing now allows criminals to personalise scams at scale, automatically adjusting messages based on a target's online activity. AI-powered Business Email Compromise (BEC) scams are increasing, as attackers use AI-generated phishing emails sent from compromised internal accounts to enhance credibility​. AI also automates the creation of fake phishing websites, watering hole attacks and chatbot scams, which are sold as AI-powered crimeware as a service' offerings, further lowering the barrier to entry for cybercrime​. Deepfake-Enhanced Fraud&Impersonation Deepfake audio and video scams are being used to impersonate business executives, co-workers or family members to manipulate victims into transferring money or revealing sensitive data. The most famous 2024 incident was UK based engineering firm Arup ( that lost $25 million after one of their Hong Kong based employees was tricked by deepfake executives in a video call. Attackers are also using deepfake voice technology to impersonate distressed relatives or executives, demanding urgent financial transactions. Cognitive Attacks Online manipulation—as defined by Susser et al. (2018) ( —is 'at its core, hidden influence — the covert subversion of another person's decision-making power'. AI-driven cognitive attacks are rapidly expanding the scope of online manipulation, leveraging digital platforms and state-sponsored actors increasingly use generative AI to craft hyper-realistic fake content, subtly shaping public perception while evading detection. These tactics are deployed to influence elections, spread disinformation, and erode trust in democratic institutions. Unlike conventional cyberattacks, cognitive attacks don't just compromise systems—they manipulate minds, subtly steering behaviours and beliefs over time without the target's awareness. The integration of AI into disinformation campaigns dramatically increases the scale and precision of these threats, making them harder to detect and counter. The Security Risks of LLM Adoption Beyond misuse by threat actors, business adoption of AI-chatbots and LLMs introduces their own significant security risks—especially when untested AI interfaces connect the open internet to critical backend systems or sensitive data. Poorly integrated AI systems can be exploited by adversaries and enable new attack vectors, including prompt injection, content evasion, and denial-of-service attacks. Multimodal AI expands these risks further, allowing hidden malicious commands in images or audio to manipulate outputs. Additionally, bias within LLMs poses another challenge, as these models learn from vast datasets that may contain skewed, outdated, or harmful biases. This can lead to misleading outputs, discriminatory decision-making, or security misjudgments, potentially exacerbating vulnerabilities rather than mitigating them. As LLM adoption grows, rigorous security testing, bias auditing, and risk assessment are essential to prevent exploitation and ensure trustworthy, unbiased AI-driven decision-making. When AI Goes Rogue: The Dangers of Autonomous Agents With AI systems now capable of self-replication, as demonstrated in a recent study ( the risk of uncontrolled AI propagation or rogue AI—AI systems that act against the interests of their creators, users, or humanity at large - is growing. Security and AI researchers have raised concerns that these rogue systems can arise either accidentally or maliciously, particularly when autonomous AI agents are granted access to data, APIs, and external integrations. The broader an AI's reach through integrations and automation, the greater the potential threat of it going rogue, making robust oversight, security measures, and ethical AI governance essential in mitigating these risks. The future of AI Agents for Automation in Cybercrime A more disruptive shift in cybercrime can and will come from AI Agents, which transform AI from a passive assistant into an autonomous actor capable of planning and executing complex attacks. Google, Amazon, Meta, Microsoft, and Salesforce are already developing Agentic AI for business use, but in the hands of cybercriminals, its implications are alarming. These AI agents can be used to autonomously scan for vulnerabilities, exploit security weaknesses, and execute cyberattacks at scale. They can also allow attackers to scrape massive amounts of personal data from social media platforms and automatically compose and send fake executive requests to employees or analyse divorce records across multiple countries to identify individuals for AI-driven romance scams, orchestrated by an AI agent. These AI-driven fraud tactics don't just scale attacks—they make them more personalised and harder to detect. Unlike current GenAI threats, Agentic AI has the potential to automate entire cybercrime operations, significantly amplifying the risk​. How Defenders Can Use AI&AI Agents Organisations cannot afford to remain passive in the face of AI-driven threats and security professionals need to remain abreast of the latest development. Here are some of the opportunities in using AI to defend against AI: AI-Powered Threat Detection and Response: Security teams can deploy AI and AI-agents to monitor networks in real time, identify anomalies, and respond to threats faster than human analysts can. AI-driven security platforms can automatically correlate vast amounts of data to detect subtle attack patterns that might otherwise go unnoticed, create dynamic threat modelling, real-time network behaviour analysis, and deep anomaly detection​. For example, as outlined by researchers of Orange Cyber Defense ( AI-assisted threat detection is crucial as attackers increasingly use "Living off the Land" (LOL) techniques that mimic normal user behaviour, making it harder for detection teams to separate real threats from benign activity. By analysing repetitive requests and unusual traffic patterns, AI-driven systems can quickly identify anomalies and trigger real-time alerts, allowing for faster defensive responses. However, despite the potential of AI-agents, human analysts still remain critical, as their intuition and adaptability are essential for recognising nuanced attack patterns and leverage real incident and organisational insights to prioritise resources effectively. Automated Phishing and Fraud Prevention: AI-powered email security solutions can analyse linguistic patterns, and metadata to identify AI-generated phishing attempts before they reach employees, by analysing writing patterns and behavioural anomalies. AI can also flag unusual sender behaviour and improve detection of BEC attacks​. Similarly, detection algorithms can help verify the authenticity of communications and prevent impersonation scams. AI-powered biometric and audio analysis tools detect deepfake media by identifying voice and video inconsistencies. *However, real-time deepfake detection remains a challenge, as technology continues to evolve. User Education&AI-Powered Security Awareness Training: AI-powered platforms (e.g., KnowBe4's AIDA) deliver personalised security awareness training, simulating AI-generated attacks to educate users on evolving threats, helping train employees to recognise deceptive AI-generated content​ and strengthen their individual susceptility factors and vulnerabilities. Adversarial AI Countermeasures: Just as cybercriminals use AI to bypass security, defenders can employ adversarial AI techniques, for example deploying deception technologies—such as AI-generated honeypots—to mislead and track attackers, as well as continuously training defensive AI models to recognise and counteract evolving attack patterns. Using AI to Fight AI-Driven Misinformation and Scams: AI-powered tools can detect synthetic text and deepfake misinformation, assisting fact-checking and source validation. Fraud detection models can analyse news sources, financial transactions, and AI-generated media to flag manipulation attempts​. Counter-attacks, like shown by research project Countercloud ( or O2 Telecoms AI agent 'Daisy' ( show how AI based bots and deepfake real-time voice chatbots can be used to counter disinformation campaigns as well as scammers by engaging them in endless conversations to waste their time and reducing their ability to target real victims​. In a future where both attackers and defenders use AI, defenders need to be aware of how adversarial AI operates and how AI can be used to defend against their attacks. In this fast-paced environment, organisations need to guard against their greatest enemy: their own complacency, while at the same time considering AI-driven security solutions thoughtfully and deliberately. Rather than rushing to adopt the next shiny AI security tool, decision makers should carefully evaluate AI-powered defences to ensure they match the sophistication of emerging AI threats. Hastily deploying AI without strategic risk assessment could introduce new vulnerabilities, making a mindful, measured approach essential in securing the future of cybersecurity. To stay ahead in this AI-powered digital arms race, organisations should: ✅Monitor both the threat and AI landscape to stay abreast of latest developments on both sides. ✅ Train employees frequently on latest AI-driven threats, including deepfakes and AI-generated phishing. ✅ Deploy AI for proactive cyber defense, including threat intelligence and incident response. ✅ Continuously test your own AI models against adversarial attacks to ensure resilience. Distributed by APO Group on behalf of KnowBe4.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store