What The Last Century Of Cybersecurity Can Teach Us About What Comes Next In The Age Of AI

Forbes | 18-07-2025
Mark Hughes is the Global Managing Partner of Cybersecurity Services at IBM.
New research from our company reveals that CISOs have just 36 months to adapt to AI-driven cybersecurity or face serious disruption. With only 30% of organizations ready to operate at that level, those not leading with AI risk falling behind as threats accelerate and competitors gain an advantage. Reassuringly, this isn't the first time security leaders have had to rethink their approach in response to a tech shift. To understand what's at stake for your business and what's coming next, it helps to look at how we got here.
Cybersecurity didn't begin with multimillion-dollar platforms or high-tech SOCs. It began in research labs, where engineers noticed users on shared systems could access files they weren't supposed to. Soon came the internet, and with it a new kind of threat. Attackers no longer had to be inside the building. Security moved from a system administrator's side job to a dedicated practice, and teams raced to keep up with threats growing faster than traditional IT teams were built to handle.
As cloud, mobile workforces and connected devices scaled, the job changed again. Now, with more data, devices and decisions than human teams can reasonably handle, we're entering a new phase—one where AI isn't just supporting operations but making split-second decisions that can save or cost millions in revenue and reputation.
The New 'First Responder' Is AI
As businesses moved operations online, networks expanded, creating complexity that was far harder to manage. Security teams needed structure. In most security operations, the Tier One analyst was introduced as the first line of defense, responsible for reviewing alerts and passing along anything that looked serious.
Now, with the introduction of AI systems trained on years of real-world data, many of those tasks can be automated at scale—in most cases, with greater speed and consistency than a human working alone. The business impact is immediate and measurable.
To use AI effectively in frontline defense, it must do more than process data. It has to understand how your organization assesses risk and learn to make decisions that protect both security and business continuity. We're seeing that this is especially valuable for clients with high customer activity, where security teams are flooded with alerts that demand fast, accurate decisions to maintain service levels.
In retail environments, these stakes are particularly high as even a small delay in triage can disrupt customer experience and impact revenue. AI is beginning to handle that first layer of alerts by analyzing user behavior and flagging only credible threats. This frees entry-level analysts to focus on higher-value work, like identifying root causes and strengthening future defenses. This isn't just better security—it's a direct competitive advantage that translates to revenue protection and customer retention.
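To make that first triage layer concrete, here is a minimal sketch of the idea. The alert fields, weights and threshold are hypothetical; a real system would learn them from historical incident data rather than hard-code them.

```python
# Toy first-pass alert triage: score each alert and escalate only credible threats.
# Field names, weights and the 0.7 threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    failed_logins: int     # failed attempts in the past hour
    known_bad_ip: bool     # source appears on a threat-intel feed
    off_hours: bool        # activity outside the user's normal schedule

def credibility(alert: Alert) -> float:
    """Combine simple behavioral signals into a 0-1 credibility score."""
    score = 0.5 if alert.known_bad_ip else 0.0
    score += min(alert.failed_logins / 20, 0.3)  # cap the login-failure signal
    score += 0.2 if alert.off_hours else 0.0
    return min(score, 1.0)

def triage(alerts: list[Alert], threshold: float = 0.7) -> list[Alert]:
    """Return only the alerts worth a human analyst's attention."""
    return [a for a in alerts if credibility(a) >= threshold]
```

The point of the sketch is the division of labor: everything below the threshold is handled or closed automatically, and only the short list reaches the entry-level analyst.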
A New Wave Of Human Responsibilities
When intrusion detection systems (IDS) were introduced, they gave security teams something they never had before: real-time visibility into suspicious activity. But visibility brought volume. Alerts poured in, and most didn't point to real threats. Analysts were left with a new challenge and a human task: fine-tuning rules to cut noise and surface genuine threats.
Every leap in cybersecurity tooling has come with its own wave of new human responsibilities. Recent advances in AI, including the emergence of agentic systems that can act without human sign-off, now make it possible to detect and contain threats before a human ever sees the alert.
This capability is powerful, but it demands new forms of oversight. The new human task will be more strategic—to retrace the AI's decision path, check for missed context and determine whether the response was justified. Now that AI can act, analysts will be responsible for making sure those actions are appropriate.
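Retracing a decision path presupposes that every autonomous action is logged with its inputs and rationale. Here is a minimal sketch of what such a record might look like; the field names are hypothetical, not taken from any particular product.

```python
# Hypothetical audit record for an autonomous containment action, so an analyst
# can retrace the decision path and judge whether the response was justified.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIActionRecord:
    alert_id: str
    action: str                   # e.g. "isolate_host" or "revoke_token"
    model_rationale: str          # the signals the model reports weighing
    context_consulted: list[str]  # data sources the model actually read
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    analyst_verdict: str = "pending"  # becomes "justified" or "unjustified"

def review(record: AIActionRecord, justified: bool) -> None:
    """Analyst sign-off: the human judgment that remains the analyst's job."""
    record.analyst_verdict = "justified" if justified else "unjustified"
```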
Turn Insight Into Action
If the last century taught us anything, it's that technology alone doesn't solve problems or create business value. How organizations adapt to it does. The companies that will dominate the next decade aren't just adopting AI tools; they're rebuilding their entire security strategy around AI capabilities. To avoid another cycle of overwhelm and catch-up, organizations need to focus on three strategic moves grounded in what history has shown us works:
1. Treat AI as a team member. Just as early detection systems once overwhelmed teams with unclear alerts, AI can do the same without defined roles and clear integration into business processes. Analyze workflows to find where automation can improve speed or consistency, like reviewing large volumes of log data or spotting known threat patterns. Once you've identified the right opportunities, assign AI clear responsibilities and document them in playbooks (a minimal sketch of such a playbook follows this list). By explicitly defining AI's job, you reduce ambiguity, streamline execution and ensure it's used where most effective.
2. Train analysts to become AI supervisors. As AI takes on more routine security work, organizations must identify where human expertise adds the most strategic value and which skills matter most. Start by tracking when and why analysts intervene in AI-driven processes. Are they spotting misclassified patterns? Interpreting alerts based on business priorities? Coordinating with legal or communications teams to guide a broader response? Turn those insights into roles, build training around them and revisit regularly as both threats and technology evolve.
3. Connect AI actions to business outcomes. AI security tools shouldn't operate in isolation from business strategy. Map each AI-driven action to the business risks it helps mitigate, like preventing fraud or minimizing operational disruption. Incorporate business impact into response workflows so threats are prioritized based on what matters most to the organization. Use metrics that translate technical alerts into business language to better measure the effectiveness of AI-driven security initiatives.
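The three moves above can share a single artifact: a playbook that names the AI's responsibilities (move one), a log of analyst interventions (move two) and a mapping from each action to the business risk it mitigates (move three). A minimal sketch follows, in which every alert type, responsibility and weight is an illustrative assumption.

```python
# One artifact serving all three moves: assign the AI clear responsibilities,
# log analyst interventions for training, and map each action to business risk.
# All names, alert types and weights below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PlaybookEntry:
    alert_type: str          # e.g. "credential_stuffing"
    ai_responsibility: str   # what the AI may do on its own
    escalate_to_human: bool  # whether a person must approve the action
    business_risk: str       # the outcome this protects, in business language
    revenue_weight: float    # relative business impact used for prioritization

PLAYBOOK = [
    PlaybookEntry("credential_stuffing", "block source IPs, force password reset",
                  escalate_to_human=False, business_risk="account-takeover fraud",
                  revenue_weight=0.9),
    PlaybookEntry("unusual_data_export", "suspend the session, snapshot activity",
                  escalate_to_human=True, business_risk="customer-data breach",
                  revenue_weight=1.0),
]

@dataclass
class Intervention:
    alert_type: str
    reason: str  # e.g. "misclassified pattern" or "business context missing"

INTERVENTION_LOG: list[Intervention] = []

def record_intervention(alert_type: str, reason: str) -> None:
    """Track when and why analysts override the AI."""
    INTERVENTION_LOG.append(Intervention(alert_type, reason))

def prioritize(entries: list[PlaybookEntry]) -> list[PlaybookEntry]:
    """Order work by business impact rather than raw alert volume."""
    return sorted(entries, key=lambda e: e.revenue_weight, reverse=True)
```

Reviewing the intervention log on a regular cadence is one way to turn those entries into the role design and training the second move calls for.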
Looking back at when I started in this space, the biggest transformation has been the role's scope and direct business impact. What was once a specialized technical function has become a central part of how organizations compete, operate and grow. As AI accelerates what's possible in security, the challenge for leadership is making choices that are as thoughtful as they are fast—and that position their organizations to lead rather than follow in an AI-driven marketplace.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?

Related Articles

ChatGPT-5 just got 4 new personalities — here's how to use them (and why you should)

Tom's Guide | an hour ago

With the launch of OpenAI's newest model, the company has introduced four distinct personality modes for ChatGPT-5. As the company's most advanced large language model to date, it delivers major upgrades in reasoning, memory and multimodal capabilities. It's better at sustaining complex conversations, understanding context across chats and producing more accurate and creative responses. Now, during text-based conversations, users can customize how the chatbot sounds and responds. It's similar to custom GPTs, but these modes were created by OpenAI to give users a new way to tailor the assistant's tone and demeanor with a single click: users can turn the chatbot into a dry-witted Cynic, an efficient Robot, a warm Listener or an enthusiastic Nerd. These personality styles are available to users running GPT-5 in text mode and can be changed at any time during a chat. Each one is distinct yet helpful. Here's how each mode behaves:

Cynic: Dry, sarcastic, and unfiltered. Offers practical advice with a side of side-eye. Great for humor or cutting through fluff.

Robot: Emotionless, direct, and to the point. No extra words — just clean, accurate responses.

Listener: Calm, warm and empathetic. Reflects your thoughts gently and adds emotional nuance to its replies.

Nerd: Curious, excitable, and deeply knowledgeable. Explains things thoroughly and loves sharing fun facts.

Each personality mode brings a different tone to your conversations and can even shift how you brainstorm, write or learn with AI. Activating a mode is easy: the personality applies to your current chat, but you can switch it or turn it off at any time. Here's how each personality mode answers the same prompt, 'Why is the sky blue?':

Cynic: Oh, you mean besides the fact that Mother Nature likes to mess with us? Fine. Sunlight hits the atmosphere, the shorter blue wavelengths scatter more than the other colors, so the sky looks blue to us simple humans. You'll get your nice orange or red at sunset when the light's taking the scenic route. Happy now?

Robot: The sky appears blue due to Rayleigh scattering. When sunlight enters Earth's atmosphere, shorter wavelengths (blue) scatter more efficiently than longer wavelengths (red). This selective scattering causes the human eye to perceive the sky as blue during daylight hours.

Listener: That's such a great question! It's actually because sunlight is made up of many colors, and when it passes through our atmosphere, the tiny air particles scatter the shorter blue wavelengths more than the others. So when we look up, our eyes catch mostly that soft, blue light. It's kind of peaceful when you think about it, right?

Nerd: Oh, you've unlocked one of my favorite topics! The blue sky is thanks to Rayleigh scattering, which is inversely proportional to the fourth power of wavelength. That means shorter wavelengths like blue and violet scatter much more than red or yellow. Our eyes are more sensitive to blue than violet, and some violet is absorbed by the upper atmosphere, so we see a rich blue. Fun fact: the same physics makes Mars' sunsets blue and ours red!

Switching between ChatGPT's personalities is fun, but it can also seriously boost your productivity and creativity. Each mode offers a different lens for thinking and communicating. Use Cynic when you want to cut through the noise. It's good for brainstorming hot takes or injecting humor into dry topics. When you want to keep things efficient, such as for technical writing or troubleshooting, try Robot. It's also a useful companion when coding. Listener adds empathy, which can be helpful for personal writing or mental check-ins. It could also be useful for writing to customers if you run a business. Nerd is the personality to pick when you want to make learning fun. It explains complex topics in a far more engaging way, which makes it a good fit for kids. Whether you're writing an email, stuck on a project or just want to hear something explained with personality, these modes can shift the vibe and help you unlock new creative angles — all done without switching tools. These new personality styles give ChatGPT-5 a more human-like edge and give you more control. As the examples above show, each one responds differently. This is an opportunity to choose how your AI sounds, thinks and helps, instead of the one-size-fits-all assistant we got with GPT-4. Try them all. You might be surprised which one becomes your favorite.
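The modes above are a setting inside the ChatGPT app, but developers can roughly approximate the effect with a system prompt. Below is a minimal sketch using OpenAI's Python SDK; the persona wording is guesswork rather than OpenAI's actual prompts, and the model name is an assumption to replace with whatever model you have access to.

```python
# Approximating personality modes with system prompts via the OpenAI Python SDK.
# Persona texts are illustrative guesses, not OpenAI's published prompts.
from openai import OpenAI

PERSONAS = {
    "cynic": "You are dry, sarcastic and unfiltered. Give practical advice with a side of side-eye.",
    "robot": "You are emotionless, direct and to the point. No extra words, just accurate answers.",
    "listener": "You are calm, warm and empathetic. Reflect the user's thoughts gently.",
    "nerd": "You are curious, excitable and deeply knowledgeable. Explain thoroughly and share fun facts.",
}

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(persona: str, prompt: str) -> str:
    """Send one question with the chosen persona as the system prompt."""
    response = client.chat.completions.create(
        model="gpt-5",  # assumption; substitute any chat model available to you
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(ask("nerd", "Why is the sky blue?"))
```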

5 secrets for breaking through the entry-level job 'glass floor'

Fast Company | an hour ago

Between on-again, off-again tariffs, economic uncertainty, and layoffs, fresh graduates face one of the toughest job markets in recent history. More than half do not have a job lined up by the time they graduate, and the unemployment rate for young degree holders is the highest it's been in 12 years, not counting the pandemic. Technological advancement is making the situation even harder, as artificial intelligence (AI) has wormed its way into the workforce, cannibalizing the number of entry-level jobs available. What's a young grad to do? I interviewed hiring managers, career advisers, and college students, and in this piece you'll learn:

What out-of-work new grads need to be doing right now in their 'limbo'
How to identify industries that are hiring that you may never have thought of
The right approach to developing AI literacy to stand out

1. Use limbo productively

What several recent college grads refer to as 'limbo,' the time period between graduation and employment, is often regarded as an excruciating phase of uncertainty. Experts recommend using this time as an opportunity to gain experience outside of traditional corporate work.

ChatGPT As Your Bedside Companion: Can It Deliver Compassion, Commitment, And Care?

Forbes | an hour ago

During the GPT-5 launch this week, Sam Altman, CEO of OpenAI, invited a cancer patient and her husband to the stage. She shared how, after receiving her biopsy report, she turned to ChatGPT for help. The AI instantly decoded the dense medical terminology, interpreted the findings, and outlined possible next steps. That moment of clarity gave her a renewed sense of control over her care. Altman noted that health is one of the top reasons consumers use ChatGPT, saying it 'empowers you to be more in control of your healthcare journey.'

Around the world, patients are turning to AI chatbots like ChatGPT and Claude to better understand their diagnoses and take a more active role in managing their health. In hospitals, both patients and clinicians sometimes use these AI tools informally to verify information. At medical conferences, some healthcare professionals admit to carrying a 'second phone' dedicated solely to AI queries. Without accessing any private patient data, they use it to validate their assessments, much like patients seeking a digital 'second opinion' alongside their physician's advice. Even during leisure activities like hiking or camping, parents often rely on AI chatbots such as ChatGPT or Claude for quick guidance on everyday concerns such as treating insect bites or skin reactions in their children. This raises an important question:

Can AI Companions Like ChatGPT, Claude, and Others Offer the Same Promise, Comfort, Commitment, and Care as Some Humans?

As AI tools become more integrated into patient management, their potential to provide emotional support alongside clinical care is rapidly evolving. These chatbots can be especially helpful in alleviating anxiety caused by uncertainty, whether it's about a diagnosis, a prognosis, or simply reassurance about potential next steps in medical or personal decisions. Given the ongoing stress that the burden of disease management places on patients, advanced AI companions like ChatGPT and Claude can play an important role by providing timely, 24/7 reassurance, clear guidance, and emotional support. Notably, some studies suggest that AI responses can be perceived as even more compassionate and reassuring than those from humans.

Loneliness is another pervasive issue in healthcare. Emerging research suggests that social chatbots can reduce loneliness and social anxiety, underscoring their potential as complementary tools in mental health care. These advanced AI models help bridge gaps in information access, emotional reassurance, and patient engagement, offering clear answers, confidence, comfort, and a digital second opinion, which is particularly valuable when human resources are limited.

Mustafa Suleyman, CEO of Microsoft AI, has articulated a vision for AI companions that evolve over time and transform our lives by providing calm and comfort. He describes an AI 'companion that sees what you see online and hears what you hear, personalized to you. Imagine the overload you carry quietly, subtly diminishing. Imagine clarity. Imagine calm.' While there are many reasons AI is increasingly used in healthcare, a key question remains:

Why Are Healthcare Stakeholders Increasingly Turning to AI?

Healthcare providers are increasingly adopting AI companions because they fill critical gaps in care delivery. Their constant availability and scalability enhance patient experience and outcomes by offering emotional support, cognitive clarity, and trusted advice whenever patients need it most.
While AI companions are not new, today's technology delivers measurable benefits in patient care. For example, Woebot, an AI mental health chatbot, demonstrated reductions in anxiety and depression symptoms within just two weeks. OpenAI's current investment in HealthBench to promote health and well-being further demonstrates its promise, commitment, and potential to help even more patients. These advances illustrate how AI tools can effectively complement traditional healthcare by improving patient well-being through consistent reassurance and engagement. So, what's holding back wider reliance on chatbots?

The Hindrance: Why We Can't Fully Rely on AI Chatbot Companions

Despite rapid advancements, AI companions are far from flawless, especially in healthcare, where the margin for error is razor thin. Large language models (LLMs) like ChatGPT and Claude are trained on vast datasets that may harbor hidden biases, potentially misleading vulnerable patient populations. Even with impressive capabilities, ChatGPT can still hallucinate or provide factually incorrect information, posing real risks if patients substitute AI guidance for professional medical advice. While future versions may improve reliability, current models are not suited for unsupervised clinical use.

Sometimes, AI-generated recommendations may conflict with physicians' advice, which can undermine trust and disrupt the patient-clinician relationship. There is also a risk of patients forming deep emotional bonds with AI, leading to over-dependence and blurred boundaries between digital and human interaction. As LinkedIn cofounder Reid Hoffman put it in Business Insider, 'I don't think any AI tool today is capable of being a friend. And I think if it's pretending to be a friend, you're actually harming the person in so doing.' For now, AI companions should be regarded as valuable complements to human expertise, empathy, and accountability, not replacements.

A Balanced, Safe Framework: Maximizing Benefit, Minimizing Risk

To harness AI companions' full potential while minimizing risks, a robust framework is essential. This begins with data transparency and governance: models must be trained on inclusive, high-quality datasets designed to reduce demographic bias and errors. Clinical alignment is critical; AI systems should be trained on evidence-based protocols and guidelines, with a clear distinction between educational information and personalized medical advice. Reliability and ethical safeguards are vital, including break prompts during extended interactions, guidance directing users to seek human support when needed, and transparent communication about AI's limitations. Above all, AI should complement human clinicians, acting as a navigator or translator to encourage and facilitate open dialogue between patients and their healthcare providers.

Executive Call to Action

In today's digital age, patients inevitably turn to the internet, and increasingly to AI chatbots like ChatGPT and Claude, for answers and reassurance. Attempts to restrict this behavior are neither practical nor beneficial. Executive physician advisors and healthcare leaders are therefore responsible for embracing this reality by providing structured, transparent, and integrated pathways that guide patients in using these powerful tools wisely. It is critical that healthcare systems are equipped with frameworks ensuring AI complements clinical care rather than confuses or replaces it.
Where AI capabilities fall short, these gaps must be bridged with human expertise and ethical oversight. Innovation should never come at the expense of patient safety, trust, or quality of care. By proactively shaping AI deployment in healthcare, stakeholders can empower patients with reliable information, foster meaningful clinician-patient dialogue, and ultimately improve outcomes in this new era of AI-driven medicine.
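Some of the safeguards the framework calls for, such as break prompts during extended interactions and nudges toward human support, are straightforward to prototype. Below is a minimal sketch, assuming a generic ask_model callable stands in for whatever LLM backend is in use; the turn threshold and keyword list are illustrative assumptions, not clinical guidance.

```python
# Minimal guardrail wrapper illustrating "break prompts" and human-escalation nudges.
# ask_model() is a placeholder for any LLM call; thresholds and keywords are
# illustrative assumptions, not a vetted clinical triage list.
from typing import Callable

ESCALATION_KEYWORDS = {"chest pain", "suicidal", "overdose", "can't breathe"}
BREAK_AFTER_TURNS = 20  # assumption: nudge users to pause after long sessions

DISCLAIMER = ("This is general information, not medical advice. "
              "Please discuss decisions with your clinician.")

class CompanionSession:
    def __init__(self, ask_model: Callable[[str], str]):
        self.ask_model = ask_model
        self.turns = 0

    def ask(self, user_message: str) -> str:
        self.turns += 1
        # Hard escalation: route urgent language to human help before answering.
        lowered = user_message.lower()
        if any(keyword in lowered for keyword in ESCALATION_KEYWORDS):
            return ("This sounds urgent. Please contact emergency services or "
                    "your care team right away rather than relying on a chatbot.")
        answer = self.ask_model(user_message)
        # Break prompt: encourage a pause and human contact in long sessions.
        if self.turns >= BREAK_AFTER_TURNS:
            answer += ("\n\nWe've been talking for a while. Consider taking a break "
                       "and sharing these questions with your healthcare provider.")
        return f"{answer}\n\n{DISCLAIMER}"
```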
