Mint Primer: AI's twin impact: Better security, worse dangers
AI and generative AI are proving to be double-edged swords, boosting cyber defences while also enabling threats like deepfakes, voice cloning and even attacks by autonomous AI agents. With over two-thirds of Indian firms hit by such threats last year, how do we keep up?
What sets AI-powered cyberthreats apart?
AI-powered cyberthreats supercharge traditional attacks, making phishing, malware, and impersonation faster, stealthier, and more convincing. GenAI tools create deepfakes, write polymorphic malware that constantly mutates to evade detection, and generate personalized phishing emails. AI bots test stolen credentials at scale, bypass CAPTCHAs (the puzzles meant to tell humans from bots), and scan networks for vulnerabilities. Tools like ChatGPT have been used to generate and send 100,000 spam emails for just $1,250. Symantec researchers have shown how AI agents like OpenAI's Operator can run a phishing attack via email with little human intervention.
How big is this threat for India?
Nearly 72% of Indian firms faced AI-driven cyberattacks in the past year, reveals an IDC–Fortinet report. Key threats include insider risks, zero-day exploits (attacks on software flaws before developers can patch them, leaving zero days of defence), phishing, ransomware, and supply chain attacks. These threats are rising fast: 70% of firms saw cases double and 12% saw a threefold surge, and such attacks are harder to detect. The fallout is costly: 56% suffered financial losses and 20% lost over $500,000, the report noted. Data theft (60%), trust erosion (50%), regulatory fines (46%), and operational disruptions (42%) are the other top business impacts.
The threats are evolving. Are we?
Only 14% of firms feel equipped to handle AI-driven threats, while 21% cannot track them at all, notes IDC. Skills and tool gaps persist, mainly in detecting adaptive threats and in using GenAI for red teaming (when ethical hackers mimic real attackers to test a firm's cyber defences). Other gaps include lean security teams and a shortage of chief information security officers.
What about laws on AI-led cybercrime?
Most countries are addressing AI-related cybercrime using existing laws and evolving AI frameworks. In India, efforts rely on the IT Act, the Indian Computer Emergency Response Team, cyber forensics labs, global ties, and the Indian Cybercrime Coordination Centre under the Union home ministry, which oversees a cybercrime portal logging 6,000 daily cases. The draft Digital India Act may tackle AI misuse. While several states are forming AI task forces, a national AI cybersecurity framework may also be needed.
How to build cyber defences against AI threats?
Evolving AI threats call for AI-savvy governance, regular training, and simulations. Firms must adopt an 'AI vs AI' defence, train staff to spot phishing and deepfakes, enforce Zero Trust (every access request must be verified) and multi-factor authentication, and conduct GenAI red-team drills. Airtel, for instance, now uses AI to block spam and scam links in real time, while Darktrace uses self-learning AI to detect threats without prior attack data. Cyber insurance must also cover reputational and regulatory risks.