AI in Cybersecurity: A game changer or a double-edged sword?

India Today | 02-05-2025
Artificial intelligence has transformed cybersecurity in remarkable and far-reaching ways. Its capabilities, including analysing immense datasets, detecting anomalies, and automating response, have propelled defensive tactics to unprecedented heights. Yet, like any transformative innovation, AI in cybersecurity presents both immense potential and significant peril. As organisations increasingly incorporate AI into their security ecosystems, the question surfaces: are we bolstering our defences, or building new vulnerabilities?
India Today spoke with Namrata Barpanda, Staff Security Engineer at ServiceNow, for more insights.

AI's transformative worth in cybersecurity is rooted in its ability to scale and adapt. Today's organisations generate enormous volumes of data, and traditional tools regularly fail to identify sophisticated threats concealed in that noise. AI, particularly machine learning models, can process vast amounts of data in real time, spotting patterns and anomalies that would otherwise go unnoticed. Unlike signature-based systems, which rely on known threats, AI models evolve, learning from new behaviours and staying one step ahead of zero-day vulnerabilities.

The use of AI in Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platforms has proven particularly powerful. These tools streamline log analysis, alert triage, and automated response, capabilities that are time-consuming and error-prone when handled manually. IBM's research shows that companies with AI and automation capabilities saved an average of $3.05 million in breach costs and reduced containment time by 74 days compared with those without these technologies.

While AI tools display extraordinary aptitude to automate defences and pinpoint novel risks faster than human investigation alone, ensuring such systems evolve responsibly, stay transparent, and address inherent biases is essential. AI is also capable of analysing server behaviour and usage trends; when it detects changes in system behaviour, it can initiate monitoring or mitigation measures. This is behavioural analytics, in which AI detects changes in performance or activity, enabling real-time response mechanisms to handle potential risks.

In strictly regulated sectors such as healthcare and finance, a lack of transparency in automated decision-making could severely undermine compliance and trust. While AI may flag incidents more rapidly than people, security teams need to understand why a model took a specific action in order to preserve accountability. Poorly designed systems also risk disproportionately impacting some groups if biases are not carefully audited and addressed.

For AI to truly augment rather than replace human intelligence, governance frameworks that ensure responsible development and ongoing testing are needed. While automation can streamline defences, completely removing the human element risks overlooking nuanced threats. A balanced, multipronged approach combining expert human judgment with intelligent tools offers the greatest promise for both security and ethical AI. With care and oversight, the integration of AI into cybersecurity need not come at the cost of human values.

The accelerating sophistication of both AI systems and cyberattacks has spawned a cybersecurity arms race. On one front, defenders employ AI to safeguard digital domains; on another, attackers leverage comparable technology to break through defences. Staying ahead in this contest requires a multilayered plan, one blending intelligent tools with well-prepared professionals, robust governance policies, and a culture of constant improvement. It also means preparing for novel risk vectors introduced by AI itself, from adversarial manipulation of models to synthetic-identity fraud.

AI's role in cybersecurity also highlights a growing need for collaboration between technical and non-technical teams. Security teams, meanwhile, must develop fluency in AI technologies, ensuring they can monitor, tune, and validate models effectively.
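The behavioural-analytics pattern described here, learning a baseline of normal server activity and flagging deviations so a response playbook can fire, can be sketched in a few lines. This is a minimal illustrative toy, not ServiceNow's or any vendor's implementation: the metric, sample values, and z-score threshold are all hypothetical, and production systems use far richer models than a simple statistical baseline.

```python
import statistics

def build_baseline(samples):
    """Learn the mean and standard deviation of a behavioural metric
    (e.g. requests per minute) from historical observations."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, z_threshold=3.0):
    """Flag readings more than z_threshold standard deviations from baseline."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > z_threshold

# Hypothetical history: requests per minute for one server on a quiet day.
history = [98, 102, 101, 99, 100, 97, 103, 100, 99, 101]
baseline = build_baseline(history)

# In a SOAR-style workflow, an anomalous reading would trigger
# monitoring or an automated containment step rather than a print.
for reading in (100, 104, 640):
    if is_anomalous(reading, baseline):
        print(f"ALERT: {reading} req/min deviates from baseline -> trigger response")
```

The z-score check stands in for the machine learning models the article describes; the structural point is the same: establish what "normal" looks like, then act in real time when behaviour departs from it.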
The convergence of cybersecurity and AI demands new skill sets, new structures, and a new mindset. Part of this is deeper integration with cutting-edge endpoint security products such as Endpoint Detection and Response (EDR). Innovation in this area is essential to fending off sophisticated threats and adapting security frameworks for AI-driven environments, and many researchers are actively developing next-generation EDR systems that could provide broader coverage than the current offerings from many vendors.

Ultimately, AI in cybersecurity is simultaneously a game changer and a double-edged sword. It allows for faster, more precise threat detection and response, decreasing costs and increasing resilience. However, it also introduces new risks, from adversarial attacks to ethical dilemmas. The key lies in how we deploy it. Responsible application of AI, guided by transparency, human oversight, and continuous improvement, can empower security teams and help organisations stay ahead in an increasingly complex threat landscape.

While AI undeniably has remarkable potential to fortify cyber defences, we cannot ignore its constraints nor overlook the subtle ways it may undermine security. As with any technology, benefits and drawbacks demand prudent evaluation; we must acknowledge what is known and remain vigilant toward what is not. Developed and applied judiciously, with sensitivity to unintended outcomes, AI could bolster protection in ways otherwise impossible. However, reckless or unchecked use risks unforeseen holes compromising all it aimed to safeguard. Only through diligence, moderation, and ongoing scrutiny can we maximise AI's promise and contain its pitfalls, making technology a guardian of resilience, not a harbinger of harm. Progress requires prudence.

Related Articles

Brutal CEO cut 80% of workers who rejected AI - 2 years later he says he would do it again

Time of India


When most leaders cautiously tested AI, Eric Vaughan, the CEO of IgniteTech, took a gamble that shocked the tech world. He replaced nearly 80% of his staff in a bold move to make artificial intelligence the company's foundation. His decision, controversial yet transformative, shows the brutal reality of adapting to disruption, and his story shows that businesses must change their culture, not just their technology, to thrive in the AI era.

In early 2023, Vaughan faced one of his toughest decisions. Convinced that artificial intelligence was not just a tool but an existential shift for every business, he dismantled his company's traditional structure.

Why did IgniteTech face resistance to AI?

When Vaughan first pushed the company to use AI, he spent heavily on training. Mondays became "AI Mondays," reserved for learning new skills, trying out new tools, and starting pilot projects. IgniteTech paid for employees to take AI-related courses and even brought in outside experts to help with adoption, as per a report by Fortune.

However, resistance emerged rapidly. Surprisingly, the most pushback came from technical employees, not sales or marketing. Many were doubtful about what AI could do, focusing on its limitations rather than its potential. Some openly refused to take part, while others worked against the projects. Vaughan said the resistance was so strong it was almost sabotage. His experience is backed up by research: according to a 2025 report on enterprise AI adoption, one in three workers said they were resisting or even sabotaging AI projects, usually out of fear of losing their jobs or frustration with tools that weren't fully developed, as per Fortune.
How did Vaughan rebuild the company?

Vaughan concluded that belief in AI was not up for debate. Instead of trying to change his current employees, he started hiring new people who shared his vision, calling these new hires "AI innovation specialists." The change affected every department, including sales and finance, as per Fortune.

Thibault Bridel-Bertomeu, IgniteTech's new chief AI officer, was a key hire. After he joined, Vaughan reorganized the company so that every division reported through the AI function. This centralization eliminated duplicated effort and made collaboration easier, a common stumbling block when companies adopt AI. The change was expensive, disruptive, and emotionally draining, but Vaughan says it had to be done: "It was harder to change minds than to add skills," as per Fortune.

What can other companies learn from this?

Despite the pain, IgniteTech reaped substantial benefits. By the end of 2024, it had released two AI solutions that were still in the patent process, one of them Eloquens AI, an email automation platform. Revenue remained in the nine-figure range, with profit margins near 75% EBITDA. Amid the upheaval, the company even made a major acquisition.

Vaughan's story teaches a crucial lesson: adopting AI is as much about culture as about technology. While companies like Ikea focus on augmenting workers instead of replacing them, Vaughan chose radical restructuring to ensure alignment. Both approaches show how hard it is for businesses to balance trust and innovation.

FAQs

Why did Eric Vaughan fire so many people at IgniteTech?
He believed that people who resisted AI would hurt the company's future, so he decided to rebuild with people who shared his vision.

What happened after IgniteTech's AI transformation?
The company introduced new AI products, set up a central AI division, and grew more profitable, even though the change was hard.

Future-proofing APAC: Building the skills for an AI-powered economy

Time of India


By John Lombard

Artificial Intelligence (AI) is no longer a distant vision; it is now the operational backbone of industries across the Asia-Pacific (APAC) region. From predictive analytics in manufacturing to generative AI (GenAI) in customer service, AI adoption is reshaping economic structures. Yet adoption is far from uniform. Advanced economies such as Japan, Singapore, South Korea, and China are deploying AI at scale, while others are still building foundational infrastructure and digital readiness. One of the most pressing challenges is not technological capability but human capability. Without a skilled workforce, AI adoption risks creating a gap between potential and performance: a high-speed engine without the tracks to run on.

The skills gap in an accelerating AI landscape

According to NTT DATA's recent Global GenAI Report, nearly 70% of organizations view AI as a game changer, and almost two-thirds plan to significantly invest in GenAI over the next two years. But investment alone is insufficient. The talent pool trained to design, deploy, and govern AI systems is lagging behind. In many organizations, innovation in AI is outpacing workforce readiness and governance frameworks. Employees often feel underprepared, not due to resistance, but because training, role-specific upskilling, and AI literacy have not kept pace with technological change. Cultural diversity, language barriers, and varied education systems across APAC further complicate the creation of a region-wide skilled AI workforce.

To bridge this divide, enterprises need structured AI and GenAI talent development frameworks that are scalable, measurable, and adaptable to evolving technologies. Such frameworks should:

• Provide foundational AI literacy for all employees, regardless of role.
• Offer role-based practical training for professionals in technical and non-technical functions.
• Develop certified specialists with deep domain expertise in AI deployment.
• Cultivate strategic leaders capable of driving AI innovation and governance at an enterprise scale.

This tiered approach allows organizations to embed AI capabilities at every operational layer, from frontline staff to the C-suite. Importantly, in-house trainers and industry-specific learning modules can make training more relevant and impactful.

In the enterprise technology context, AI skills cannot be siloed. They need to intersect with other capabilities such as:

• Cybersecurity – safeguarding AI models and data pipelines from vulnerabilities.
• Data Science & Machine Learning – building and refining predictive and generative models.
• Cloud Computing – enabling scalable AI deployments across geographies.
• Ethical AI Governance – ensuring accuracy, bias mitigation, and regulatory compliance.

These competencies will be essential as AI becomes integrated into everything from supply-chain systems to customer-engagement platforms.

Responsible AI adoption

Expanding AI use also means expanding accountability. Governance models must address risks such as bias in outputs, misinformation, intellectual-property concerns, and data leakage. Organizations should invest in transparent governance frameworks, clear audit trails, and training that empowers employees to identify and mitigate AI risks. Responsible AI adoption is not just about compliance; it is about building trust with employees, customers, and regulators. In a region as diverse as APAC, this trust will be a competitive differentiator.

The leadership imperative

The role of leadership in AI transformation extends beyond technology investment. Executives must actively participate in upskilling, signal commitment to continuous learning, and ensure inclusivity in training initiatives. By aligning AI innovation agendas with workforce development strategies, leaders can create sustainable adoption rather than short-term experimentation.
Ultimately, the future of APAC's AI economy will depend on how effectively the region matches technological advances with human capabilities. Leaders who act today, investing in both AI systems and the people who operate them, will define the competitive, ethical, and sustainable AI landscape of tomorrow.

The author is CEO, APAC, NTT DATA. The views expressed are solely the author's, and ETCIO does not necessarily subscribe to them. ETCIO shall not be responsible for any damage caused to any person or organization directly or indirectly.

IIT Bombay launches women-only certificate course on Generative AI for business

News18


Mumbai, Aug 18 (PTI): The Indian Institute of Technology Bombay on Monday said its Desai Sethi School of Entrepreneurship (DSSE) is rolling out a beginner-level course exclusively designed to empower women professionals, entrepreneurs, and managers with practical skills in generative AI. The special edition of 'GenAI for Business: A Hands-On Introduction' will be conducted online from September 11 to 13, and the last date of registration is September 9, IIT Bombay said in a statement. The course will give women participants an opportunity to explore the fast-evolving world of GenAI in a supportive, collaborative learning environment. Over the course of three days, participants will gain hands-on experience with tools like ChatGPT, Claude, Gemini, Copilot, DALL-E, Perplexity, Flux1, Grok, and NotebookLM through live demos, exercises, and real-world examples.

First Published: August 18, 2025, 22:45 IST
