logo
#

Latest news with #RahulMatthan

Siddharth Pai: Arm employees against sophisticated cyberattacks
Siddharth Pai: Arm employees against sophisticated cyberattacks

Mint

time3 days ago

  • Business
  • Mint

Siddharth Pai: Arm employees against sophisticated cyberattacks

The internet can be more dangerous than even the roughest part of a big city. Consider this: Over the Easter weekend, British retailer Marks & Spencer (M&S) fell victim to a cyberattack that proved not only costly in financial but also reputational terms. It stemmed not from a failure of firewalls or malware detection tools, but AI-enabled social engineering. A hacker group known as Scattered Spider is being probed for breaching M&S's systems through a third-party IT services contractor. The attackers may have used impersonation techniques to gain unauthorized access to internal systems, resulting in leaked customer data, operational disruptions and an estimated financial hit of over £400 million. Also Read: Rahul Matthan: Brace for a wave of AI-enabled criminal enterprise It underscores an increasingly common theme in today's cybersecurity breaches: the exploiting of humans, rather than hardware or software. Cyber-safety is no longer just a technical issue to be left to the IT department; it's a human issue, deeply embedded in behaviour, awareness and preparedness. Human resource training is a pressing challenge in today's context. Organizations are facing an onslaught of evolving cyber threats—ransomware attacks, phishing scams, deepfake impersonations, credential stuffing and more. These don't merely target infrastructure, but also people. Employees get emails from attackers posing as executives, vendors or even co-workers. They're tricked into clicking malicious links, giving away login credentials or transferring money to fake accounts. So the front-line isn't the server room, but everyone's inbox. M&S wasn't alone. Around the same time, Peter Green Chilled, a logistics supplier for major supermarkets, was hit by a ransomware demand that disrupted its ability to deliver fresh goods—a classic example of how lapses can ripple across supply chains. In each case, the technical sophistication of the attack was significant, but what often allowed entry was an older vulnerability: human error, complacency or ignorance. That's where training comes in. However, unlike other workplace modules like those for code compliance or harassment awareness, cybersecurity training poses unique challenges. For one, the threat landscape evolves constantly. Techniques that were cutting-edge six months ago may be obsolete now. Social engineering tactics are increasing as attackers study employee behaviour to refine their methods even as training modules struggle to keep pace. Also Read: Dodgy aides: What can we do about AI models that defy humans? Then there's an engagement problem. Most employees don't exactly look forward to such training. The mere mention conjures images of outdated videos, multiple-choice quizzes and unrelatable jargon. For behaviour change, the content must be engaging, memorable and relevant to people's day-to-day roles. Trained users are 30% less likely to fall for phishing attempts ( Gamification may work. If employees are challenged to identify phishing emails in a simulated inbox, or compete in cybersecurity 'escape rooms' that require them to solve puzzles based on real threats, they are far likelier to remember the lessons. Interactive storytelling and incentives could work. Case studies, like M&S's, could be used. Another solution is adaptive learning. Tools powered by large language models, such as Gen AI-based systems, can tailor training material to an employee's role, learning pace and previous performance. A marketing executive who frequently handles customer data might need a different module from a warehouse supervisor. Likewise, training systems can use natural language interactions as learning chats. This would not only enhance comprehension but also facilitate continuous reinforcement. However, designing and implementing such training programs isn't solely the responsibility of the IT department. All departments must join hands, with HR embedding cyber awareness into the cultural fabric of the organization and fostering a mindset where everyone makes safety part of their job. When employees understand that a single careless click can cause multimillion-pound damage, as in the case of M&S, they're more likely to internalize the lessons. To sustain cybersecurity training, it should be embedded into everyday workflows. Micro-learning modules, brief but frequent sessions delivered via mobile devices or placed in productivity platforms, can reinforce knowledge incrementally. These modules could be triggered contextually—for example, providing a phishing refresher right after an employee forwards a suspicious email. Over time, such nudge-based training would build everyone's muscle memory, turning caution into instinct. Also Read: Rahul Matthan: Don't let data privacy safeguards work against us The stakes could not be higher. Over 80% of the world's largest organizations report at least one major breach a year. It's not just about firewalls and antivirus software anymore; it's about employees in coffee shops, on personal devices, at home networks and in third-party vendor offices. That reality demands that HR development evolve beyond compliance checklists and become an active, dynamic component of the organization's cybersecurity strategy. Ultimately, the best defence an organization can build is not a piece of software, but a culture—one where every employee acts as a guardian of data and systems. It demands well-designed, engaging and adaptive training efforts that keep pace with the adversaries we face. In the game of cybersecurity, humans aren't just a vulnerability—they're also the solution. The author is co-founder of Siana Capital, a venture fund manager.

Dodgy aides: What can we do about AI models that defy humans?
Dodgy aides: What can we do about AI models that defy humans?

Mint

time4 days ago

  • Mint

Dodgy aides: What can we do about AI models that defy humans?

Artificial intelligence (AI) going rogue has been the stuff of dystopic science fiction. Could fiction be giving way to fact, with several AI models reportedly disobeying explicit instructions to shut down when a third-party tester asked them to? On a recent test done by Palisade Research, the most glaring refusenik belonged to OpenAI, with some AI models of Google and Anthropic also showing a tendency to evade shutdown. It is not yet time to rewatch Terminator 3: Rise of the Machines (2003) for a vivid nightmare scenario of malign AI running amok, but it would be a good idea to adopt caution while integrating AI bots and modules into Enterprise Resource Planning systems. If something goes wrong, the system would likely need a reboot; and if its AI bits scuttle a shutdown, a digital hostage crisis could arise. Also Read: Rahul Matthan: Brace for a wave of AI-enabled criminal enterprise That's what users of AI have to worry about. Developers and regulators of AI, meanwhile, must accelerate efforts to address the challenges thrown open by the rise of AI that can defy human orders. Silicon Valley is used to privileging speed-to-market over full system integrity and safety. This urge is baked into the business model of multiple startups in pursuit of similar wonders, with venture capital breathing down executive necks to play the pioneer in a potentially winner-takes-all setting. Investors often need their hot ventures to prove their mettle double-quick so that they can either cash out or stem losses before moving on to other bets. 'Move fast and break things' is fine as a motto while developing apps to share videos, compare pet pranks or disrupt our online lives in other small ways. Also Read: When AI gets a manager, you know the game has changed But when it comes to AI, which is rapidly being given agency, nobody can afford to be cavalier about what may end up broken. If one thing snaps, multiple breakdowns could follow. AI is given to hallucination and training input biases. It can also learn the wrong thing if it is fed carelessly crafted synthetic data, for example, like broad estimates with low fidelity to actual numbers. This problem goes by the bland title of 'misalignment.' Today, what risks going askew is the course taken by AI from the path planned for AI development. Among the techniques used to keep alignment in check, there is one whose name harks back to war games of the Cold War era: Red Teaming. The Red Team represented the bad guys, of course, and the aim was to get into the head of the enemy and anticipate its conduct. Applied to AI, it would entail provoking it to expose its follies. If the AI models that dodged orders to shut down had been Red Teamed properly while under development, developers need to come up with better ways to exorcise their software of potential demons. If the makers of these tools fail to keep AI aligned with desirable outcomes, then regulation would be the only security we have against a big threat in the making. Also Read: Biases aren't useless: Let's cut AI some slack on these The EU's regulatory approach to AI invites criticism for being too stiff for innovation to thrive, but it is spot-on in its demand for safe, transparent, traceable, eco-friendly and non-discriminatory AI. Human oversight of AI systems, as the EU requires, should be universally adopted even if it slows down AI evolution. We must minimize risks by specifying limits and insisting on transparency. In all AI labs, developers and whistleblowers alike should know what lines must not be crossed. Rules are rarely perfect at the outset, but we all have a stake in this. Let's ensure that AI is here to serve and not subvert human welfare.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store