AI in Cybersecurity: A game changer or a double-edged sword?


India Today | 02-05-2025

Artificial intelligence has transformed cybersecurity in ways both remarkable and far-reaching. Its capabilities, including scouring immense datasets, hunting for anomalies and automating response, have propelled defensive tactics to unprecedented heights. Yet, like any transformative innovation, AI in cybersecurity presents both immense potential and significant peril. As organisations increasingly incorporate AI into their security ecosystems, the question surfaces: are we bolstering our defences, or building new vulnerabilities?
India Today spoke with Namrata Barpanda, Staff Security Engineer at ServiceNow, for more insights.

AI's transformative value in cybersecurity rests on its ability to scale and adapt. Today's organisations generate enormous volumes of data, and traditional tools regularly fail to identify sophisticated threats concealed in that noise. AI, particularly machine learning models, can process countless pieces of data in real time, spotting patterns and anomalies that would otherwise go unnoticed. Unlike signature-based systems, which rely on known threats, AI models evolve, learning from new behaviours and staying one step ahead of zero-day vulnerabilities.

The use of AI in Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platforms has proven particularly powerful. These tools streamline log analysis, alert triage, and automated response, capacities that are time-consuming and error-prone when handled manually. IBM's research shows that companies with AI and automation capabilities saved an average of $3.05 million in breach costs and reduced containment time by 74 days compared with those without these technologies.

While AI tools display an extraordinary aptitude to automate defences and pinpoint novel risks more quickly than manual investigation, ensuring such systems develop responsibly, stay transparent, and address innate biases is essential. AI is also capable of analysing server behaviour and usage trends: when it detects changes to system behaviour, it can initiate monitoring or mitigation measures. This is behavioural analytics, in which AI detects changes in performance or activity and enables real-time response mechanisms to handle possible risks or problems.

In strictly regulated sectors such as healthcare and finance, a lack of transparency in automated decision-making could severely undermine compliance and trust. While AI may flag incidents more rapidly than people, security teams need to understand why a model took a specific action in order to preserve accountability. Poorly designed systems also risk disproportionately impacting some groups if biases are not carefully audited and addressed.

For AI to truly augment rather than replace human intelligence, governance frameworks that ensure responsible development and ongoing testing are needed. While automation can streamline defences, completely removing the human element risks overlooking nuanced threats. A balanced, multipronged approach combining expert human judgment with intelligent tools offers the greatest promise for both security and ethical AI. Overall, with care and oversight, the integration of AI into cybersecurity need not come at the cost of human values.

The accelerating sophistication of both AI systems and cyberattacks has spawned a cybersecurity arms race. On one front, defenders employ AI to safeguard digital domains; on another, attackers leverage comparable technology to break through defences. Staying ahead in this contest requires a multilayered plan, one that blends intelligent tools with well-prepared professionals, robust governance policies, and a culture of constant improvement. It also means preparing for novel risk vectors introduced by AI itself, ranging from algorithmic manipulation to synthetic-persona fraud. AI's role in cybersecurity also highlights a growing need for collaboration between technical and non-technical teams. Meanwhile, security teams must develop fluency in AI technologies, ensuring they can monitor, tune, and validate models effectively.
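The behavioural-analytics idea described above, flagging a host whose activity deviates sharply from its own baseline, can be illustrated with a minimal sketch. The host names, event counts, and z-score threshold are illustrative assumptions, not part of any particular SIEM or SOAR product:

```python
import statistics

# Toy behavioural-analytics sketch: flag hosts whose event volume deviates
# sharply from their own historical baseline. Host names, counts, and the
# z-score threshold are illustrative assumptions, not any vendor's API.

def zscore(value, history):
    """How many standard deviations `value` sits above the baseline mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return 0.0 if stdev == 0 else (value - mean) / stdev

# Per-host baselines: event counts observed over the last four days.
history = {
    "web-01": [110, 120, 115, 118],
    "app-01": [40, 42, 39, 41],
}
# Today's counts: app-01 has spiked far beyond its normal behaviour.
today = {"web-01": 119, "app-01": 950}

alerts = [host for host, count in today.items()
          if zscore(count, history[host]) > 3.0]
print(alerts)  # ['app-01']
```

A real deployment would use richer features and learned models, but the principle is the same: an alert is raised not because a signature matched, but because behaviour changed.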
The convergence of cybersecurity and AI demands new skill sets, new structures, and a new mindset. Part of this is deeper integration with cutting-edge endpoint security products such as Endpoint Detection and Response (EDR). Innovation in this area is essential to fending off sophisticated threats and adapting security frameworks for AI-driven environments; many researchers are actively developing next-generation EDR systems that can provide more coverage than the current offerings from many vendors.

Ultimately, AI in cybersecurity is simultaneously a game changer and a double-edged sword. It allows for swifter, more precise threat detection and response, decreasing costs and increasing resilience. However, it also introduces novel risks, ranging from adversarial attacks to ethical dilemmas. The key lies in how we deploy it. Responsible application of AI, guided by visibility, human leadership, and continuous improvement, can empower security teams and help organisations stay ahead in an increasingly intricate risk landscape.

While AI undeniably has remarkable potential to fortify cyber safeguards, we cannot ignore its constraints nor overlook the subtle ways it may undermine security. With any technology, benefits and drawbacks demand prudent evaluation; we must acknowledge what is known and remain vigilant toward what is not. Developed and applied judiciously, with sensitivity to unintended outcomes, AI could bolster protection in ways otherwise impossible. However, reckless or unchecked use risks opening unforeseen holes that compromise everything it aimed to safeguard. Only through diligence, moderation and ongoing scrutiny can we optimise AI's promise and contain its pitfalls, making technology a guardian of resilience, not a harbinger of harm. Progress requires prudence.


Related Articles

HCLTech opens second delivery centre in Kerala

Time of India

24 minutes ago


IT firm HCLTech on Monday announced the opening of its delivery centre in Thiruvananthapuram, Kerala, to cater to projects across AI, GenAI, Cloud and emerging technologies. HCLTech aims to cultivate innovation-driven learning and offer career opportunities to aspiring IT professionals in the region, the company said in a regulatory filing. This is the company's second delivery centre in the state. It established its first centre in Kochi in October 2024. Shares of HCLTech settled 0.68 per cent higher at Rs 1,648.5 apiece on the BSE on Monday. PTI

Beyond the fear: What global data says about AI supporting, not replacing jobs

Time of India

25 minutes ago


As AI and automation reshape the world of work, a quiet anxiety is sweeping across sectors, fuelled by fears of job loss and uncertainty about the future. With tightening pressure, employees, regardless of age or industry, are grappling with rising stress and mental fatigue. In such a moment of emotional vulnerability, upskilling and reskilling are essential, but without a clear roadmap they alone may not be enough to ease these fears. What's needed to sustain workforce trust isn't speculation but data-backed insights and credible research: evidence that shows AI isn't a threat to jobs, but a powerful tool for growth and evolution.

Building confidence with data-driven insights

To restore the confidence of today's demoralised workforce, many of whom fear being replaced by AI, we've gathered key global reports that present evidence-based insights. These findings show that AI isn't here to take over jobs, but to create new opportunities and reshape the future of work for the better.

#Insight 1: 38% rise in AI-exposed jobs, 56% wage premium for AI-skilled workers

The PwC 2025 Global AI Jobs Barometer, analysing nearly a billion job ads across six continents, reveals that AI is boosting worker value, productivity, and wage potential, with job numbers in AI-exposed roles rising by 38% between 2019 and 2024. According to the report, since the rise of GenAI in 2022, productivity in AI-exposed sectors has surged from 7% in 2018-2022 to 27% by 2024. It also found that AI-skilled workers earned an average wage premium of 56% in 2024, more than double the 25% premium of the previous year, defying widespread fears of job losses. This number is expected to keep growing in 2025.

#Insight 2: 1 in 4 jobs are highly exposed to AI, but not at risk of elimination

A recently released joint study titled 'Generative AI and jobs: A refined global index of occupational exposure' by the International Labour Organisation (ILO) and Poland's National Research Institute (NASK) reveals that one in four jobs worldwide is highly exposed to AI. Notably, the study frames this exposure as a transformation rather than a replacement of jobs. It highlights that complete automation in many sectors remains limited, with human involvement continuing to play a central role in maintaining efficiency.

#Insight 3: $4.4 trillion productivity potential, but only with people at the centre

McKinsey's recent insights on generative AI underscore that it could deliver as much as $4.4 trillion in global productivity gains annually. But beyond the numbers, a quieter revolution is underway, one that shifts the focus from machine dominance to human enablement. To realise this potential, many companies are investing in reskilling, change management, and employee support systems that are better positioned to unlock AI's full value.

#Insight 4: 72% of enterprises adopting AI, 40% jump in efficiency with strategic use

Deloitte Global's 2025 predictions report, 'Generative AI: Paving the Way for a Transformative Future', presents a forward-looking analysis of how AI is not just automating tasks but enhancing human decision-making, fostering new business models, and increasing efficiency in dynamic work environments. With 72% of global enterprises already embracing AI-led transformations, the report also points to the rise of new roles focused on AI governance, ethics, and collaboration between humans and machines. Companies that use AI strategically have seen up to a 40% jump in operational efficiency, showing that when done right, AI isn't just a tool but a foundation for workplace resilience.

AI is a partner, not a replacement

As change remains a constant force, it's important to recognise that AI has transformed conventional ways of working, bringing innovation and development to technology while retaining the essential role of human judgement and creativity. With the right training and increased awareness, the fear of job loss can shift into optimism about thriving in an AI-driven world.

Siddharth Pai: Can AI beat quantum computing at its own game?

Mint

43 minutes ago


For decades, quantum computing has been described as the 21st century's technological lodestar, its unfathomable computational power poised to solve problems beyond the ken of classical machines. Quantum computers promise to crack cryptographic codes, simulate the quantum dynamics of molecules in materials science, aid drug discovery and more. Yet, as the quantum race drags on, an unexpected challenger has emerged, not to dethrone it but to outpace it in precisely those domains where it was expected to shine brightest: AI.

To grasp the possibility of this disruption, begin with what quantum computing is. Unlike classical computers that encode information in binary bits, 0s or 1s, quantum computers use quantum bits, or qubits, which can exist in a superposition of states. Through entanglement and quantum interference, quantum computers can process a vast space of possibilities in parallel. This lets them model quantum systems naturally, making them ideal for simulating molecules, designing new materials and solving certain optimisation problems.

Among its most touted applications is its potential to transform materials science. Advances with high-temperature superconductors, catalytic surfaces or novel semiconductors often require modelling the interactions of strongly correlated electrons, systems where the behaviour of one particle is tightly linked to that of many others. Classical algorithms falter in such simulations because the complexity of the quantum state space rises exponentially with system size. A full-fledged quantum computer would handle all this with ease.

But the practical realisation of quantum computing remains vexed. Qubits, whether superconducting loops, trapped ions or topological states, are exquisitely fragile. They 'decohere' (lose their quantum state) within microseconds and must be kept at temperatures colder than deep space. Error correction remains an uphill battle.
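The superposition idea described above can be made concrete with a toy classical simulation: a single qubit represented as a pair of complex amplitudes. The Hadamard gate and Born rule used here are standard; everything else (function names, the specific example) is this sketch's own illustration, and a real quantum device is of course nothing like a Python tuple:

```python
import math

# Toy classical simulation of a single qubit, stored as a pair of complex
# amplitudes (alpha, beta) for the basis states |0> and |1>.

def hadamard(state):
    """Apply the Hadamard gate, which maps a basis state into an equal
    superposition of |0> and |1>."""
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1 (Born rule)."""
    alpha, beta = state
    return (abs(alpha) ** 2, abs(beta) ** 2)

qubit = (1 + 0j, 0 + 0j)   # start in the definite state |0>
qubit = hadamard(qubit)    # now in superposition: both outcomes possible
p0, p1 = probabilities(qubit)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

Simulating one qubit this way is trivial; the catch is that the amplitude vector doubles with every qubit added, which is exactly the exponential blow-up that makes classical simulation of large quantum systems intractable.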
Most of today's quantum machines can manage only a few hundred noisy qubits, far short of the millions needed for fault-tolerant computing.

In the meantime, artificial intelligence, particularly deep learning, has made remarkable incursions into the same spaces. A turning point came in 2017 with a paper in Science by Giuseppe Carleo and Matthias Troyer. Startling scientists, they found a neural-network-based variational method to approximate the wave function of quantum systems. The approach employed restricted Boltzmann machines to represent complex correlations among quantum particles, modelling the ground states of certain spin systems that had been hard to simulate classically. That paper didn't just introduce a new tool; it signalled a paradigm shift. Researchers have since extended the idea to deep convolutional and autoregressive networks, transformer architectures and even diffusion models to simulate quantum many-body systems. These neural networks run on classical hardware and do not require the brittle infrastructure of quantum machines.

It's not merely a question of catching up. AI is beginning to demonstrate capabilities in materials discovery and quantum simulation that, while not perfectly accurate at the quantum level, are good enough. Generative models have proposed new crystalline structures with desirable thermal or electronic properties, while graph neural networks have predicted materials' phase behaviour without recourse to first-principles calculations. Most strikingly, AI models have begun to assist in inferring effective Hamiltonians, the mathematical descriptions of physical systems, from experimental data, a tough task even for top-level experts. This acceleration has not gone unnoticed by major research labs. Google's DeepMind, for instance, has begun integrating machine learning tools directly into quantum chemistry workflows.
Startups in the quantum space often include AI-based pre-processing or error mitigation in their pipelines. A complementary field is fast becoming a competing one. AI will not make quantum computing irrelevant in the absolute sense, as there will always be quantum phenomena that only quantum devices can fully capture, but AI may take the lead in many practical problems before quantum hardware matures. If machine learning models can deliver 90% of the performance at 5% of the cost and infrastructure, industrial users may not wait for perfection.

Moreover, there's a subtler factor at play: a shift in intellectual capital. The more investment AI-based methods attract, the more resources will flow into neural modelling over quantum error correction. By the time quantum machines mature, many of the use cases originally envisioned for them may have been absorbed by AI tools that, ironically, use quantum data or theory. Quantum computing risks becoming a beautiful idea outpaced by a merely competent but deployable alternative.

There is an irony here that would not be lost on Schrödinger or Feynman: the classical world, once deemed too simplistic in the face of quantum reality, might be reasserting itself through the statistical abstractions of machine learning. We set out to build a machine that thinks like nature. Instead, we taught our machines to imitate nature well enough to move forward without grasping it fully. Quantum computing may still prove indispensable. But it will have to justify its place in a world where its promise is being appropriated by its upstart cousin, AI.

The author is co-founder of Siana Capital, a venture fund manager.
