
Stop Sleeping On AI: Why Security Teams Should Embrace The Technology
Ron Williams is the CEO and founder of Kindo.Ai.
Artificial intelligence (AI) is no longer a futuristic tool for cybersecurity. It's gone mainstream. Threat actors have integrated AI into their operations with alarming success, using tools like WormGPT, GhostGPT and even legitimate platforms like Google's Gemini AI to scale their attacks.
Google's Threat Intelligence Group recently detailed how state-sponsored actors have been abusing Gemini AI to enhance reconnaissance, scripting and privilege escalation. These developments point to a harsh reality: The AI asymmetry between attackers and defenders is growing, and security teams are falling behind.
If defenders don't start using AI to automate workflows, mitigate threats and improve incident response, they risk being perpetually outpaced by modern attackers. The time to act is now, not after attackers have perfected the use of AI in their operations.
ChatGPT democratized consumer access to AI, revolutionizing a whole range of industries. However, cybercriminals quickly recognized its potential for malicious use, and within a year of its launch, discussions about exploiting AI exploded across cybercrime networks, fueling a rise in AI-based attack strategies.
Hundreds of thousands of ChatGPT accounts were being bought and sold on underground markets, and by mid-2023, WormGPT, a malicious chatbot designed to enhance business email compromise attacks and spear-phishing campaigns, sent shockwaves through the industry.
WormGPT was marketed as an AI tool specifically trained on malicious datasets to improve cybercrime operations, prompting headlines warning of AI-powered cybercrime on the rise. But WormGPT was just the beginning. Variants like FraudGPT, DarkBERT (not to be confused with DarkBART) and GhostGPT followed.
Fast-forwarding to today, cybercriminals have found multiple ways to weaponize AI for their operations:
• Bypassing ethical constraints: Mainstream AI models like ChatGPT and Claude refuse to generate phishing emails. However, attackers discovered ways to manipulate them into compliance using prompt engineering.
• Repackaging jailbroken chatbots as malicious tools: Some cybercriminals have wrapped jailbroken instances of legitimate AI models in custom interfaces, branded them as their own evil variants and sold access to others.
• Training AI models on malicious datasets: Rather than relying on trickery, some groups have trained their own AI models, fine-tuning them with cybercrime-related data to generate more accurate attack strategies. This is essentially how WormGPT and similar tools evolved within months.

Why Security Teams Are Hesitant
Despite clear evidence of AI's role in advancing cybercrime, many security teams remain hesitant to embrace AI defenses. This reluctance sometimes stems from three key concerns: lack of trust in AI, implementation complexity and job security fears.

Lack Of Trust In AI
Many cybersecurity professionals view AI as a 'black box' technology and are concerned that it's difficult to predict how AI will behave in a live security environment. Security teams worry that if something goes wrong, they won't be able to remediate the issue due to their lack of understanding of the model's decision-making process.
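One practical way to address the black-box concern is to make every AI interaction auditable: record each prompt and response so analysts can reconstruct why a model produced a given answer. Below is a minimal sketch of such an audit wrapper; the `AuditedModel` class and the `stub_model` callable are hypothetical illustrations, not any vendor's API.

```python
import json
import time

class AuditedModel:
    """Wraps any prompt -> response callable and records an audit trail."""

    def __init__(self, model_fn):
        self.model_fn = model_fn  # hypothetical callable, e.g. an LLM client
        self.audit_log = []

    def ask(self, prompt):
        response = self.model_fn(prompt)
        # Record enough context to reconstruct the model's decision trail.
        self.audit_log.append({
            "timestamp": time.time(),
            "prompt": prompt,
            "response": response,
        })
        return response

    def export_log(self):
        """Serialize the audit trail for review or compliance storage."""
        return json.dumps(self.audit_log, indent=2)

# Stub standing in for a real model API call.
def stub_model(prompt):
    return f"Analyzed: {prompt}"

model = AuditedModel(stub_model)
answer = model.ask("Summarize alert #1234")
```

The point is not the wrapper itself but the pattern: when every inference is logged, "we can't see what the model did" stops being a blocker.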
However, while these concerns are valid, they can be addressed. Many AI-based workflows are built on well-documented APIs that offer transparency and allow customization. If security teams take the time to understand how AI-powered tools function in practical applications, much of their skepticism could be alleviated.

Implementation Complexity
Another major roadblock is the perceived difficulty of integrating AI into legacy security infrastructure. A lot of organizations assume that AI adoption requires a fundamental overhaul of existing systems, which is daunting and expensive.
However, security teams can start small by identifying repetitive, time-consuming tasks that AI can automate. Take vulnerability management, for instance. Consultants spend a lot of time triaging vulnerabilities, mapping them to affected assets and prioritizing remediation efforts. AI can optimize this by automatically correlating vulnerabilities with exploitability data, assessing business impact and recommending remediation priorities.
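As an illustration of the triage logic described above, here is a minimal sketch of risk-based vulnerability prioritization. The scoring weights and the `cvss`, `known_exploited` and `asset_criticality` fields are illustrative assumptions, not a standard formula:

```python
def priority_score(vuln):
    """Combine severity, exploitability and business impact into one score."""
    score = vuln["cvss"]                 # base severity, 0-10
    if vuln["known_exploited"]:          # e.g. exploited in the wild
        score += 5                       # active exploitation bumps priority
    score *= vuln["asset_criticality"]   # 1 = low-value asset, 3 = crown jewel
    return score

def prioritize(vulns):
    """Return vulnerabilities ordered by remediation priority, highest first."""
    return sorted(vulns, key=priority_score, reverse=True)

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "known_exploited": False, "asset_criticality": 1},
    {"id": "CVE-B", "cvss": 7.5, "known_exploited": True, "asset_criticality": 3},
]
ranked = prioritize(vulns)
```

Note how the ranking diverges from raw CVSS: the lower-severity CVE-B outranks CVE-A because it is actively exploited and sits on a critical asset, which is exactly the kind of context-weighing a consultant does manually.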
A simple exercise to test AI's effectiveness is to take a common, repetitive security task and design an AI-assisted workflow to replace it. Even partial automation can yield a large return on investment in saved time and improved accuracy.

Job Displacement
Some security professionals fear that widespread AI adoption could automate them out of a job. While talk of AI replacing analysts outright is common in the industry, AI is better viewed as an augmentation tool than a replacement, and leaders should actively promote that perspective. Organizations that upskill their employees to work alongside AI will build a stronger, more efficient security team.
The bigger point here is that AI won't eliminate security teams; it will empower them. By automating time-consuming and mundane tasks, AI frees security analysts to focus on higher-value work, like investigating complex threats, threat hunting and incident response.

How AI Helps Security Teams
Whether operating within a security operations center (SOC) or following a more agile approach, all security teams encounter repetitive tasks that can be automated. AI-powered security solutions can assist by:
• Automating repetitive alert investigations, reducing analyst burnout and improving response times.
• Improving detection capabilities by identifying patterns in large datasets faster than human analysts.
Consider a typical security analyst's workflow: They receive an alert, analyze it, extract indicators of compromise (IOCs), query threat intelligence databases, determine whether it's a genuine threat, document the findings and respond accordingly. AI can automate much of this process, alleviating manual operational burdens.
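As a concrete example of the extraction step in that workflow, a pipeline can pull IOCs out of raw alert text before querying threat intelligence. A minimal sketch using simple regular expressions (real pipelines would use hardened parsers, full TLD lists and defanged-IOC handling):

```python
import re

def extract_iocs(alert_text):
    """Pull common indicator types out of free-form alert text."""
    return {
        # IPv4 addresses (naive pattern; does not validate octet ranges)
        "ips": re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", alert_text),
        # 32-char hex strings as a stand-in for MD5 file hashes
        "hashes": re.findall(r"\b[a-fA-F0-9]{32}\b", alert_text),
        # Bare domains (naive; limited to a few common TLDs for illustration)
        "domains": re.findall(r"\b[a-z0-9-]+\.(?:com|net|org|io)\b", alert_text),
    }

alert = ("Outbound connection to 203.0.113.45 resolved from evil-c2.net; "
         "dropped file hash d41d8cd98f00b204e9800998ecf8427e")
iocs = extract_iocs(alert)
```

Each extracted indicator can then be fed to enrichment lookups automatically, so the analyst sees a pre-investigated alert instead of raw text.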
The benefits of AI and autonomous agents extend beyond the SOC; AI can also improve web application security, agile security in software development lifecycles, penetration testing and threat intelligence gathering. Security teams don't need to overhaul their entire infrastructure overnight. Incremental AI adoption can have immediate benefits.

The Cost Of Inaction
AI is not a passing trend—it's the present and future of cybersecurity. Attackers are not waiting for defenders to catch up. They are actively refining AI-augmented attack methods, making their operations faster, more scalable and more effective. Security teams must recognize that the only way to counter AI-based cyber threats is to fight fire with fire.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.