
Latest news with #PerryCarpenter

Five AI-Powered Threats Senior Leaders Should Be Aware Of

Forbes

22-05-2025

  • Business
  • Forbes


Perry Carpenter is Chief Human Risk Management Strategist for KnowBe4, a cybersecurity platform that addresses human risk management.

We're all too familiar with warnings about phishing scams, and they remain a security issue we need to be aware of. But a wide range of other concerns, beyond phishing, should have your attention, and you should share them with colleagues so they can collaborate with you to protect your company and its assets. We're moving into what I call the 'Exploitation Zone': a widening gap between technological advancement and human adaptability. It is, admittedly, tough to keep up unless, like me, you're singularly focused on data security and on the increasingly sophisticated ploys bad actors use to exploit human nature. Here are five AI-powered threats you need to understand and take steps to respond to.

1. Voice cloning and vishing

It's not just email we have to worry about these days; today's attackers can spoof more than email addresses. One quickly emerging scam is voice phishing, or vishing. According to CrowdStrike, vishing attacks increased 442% between the first and second halves of 2024. Using publicly available voice snippets from earnings calls, podcasts, video calls or media interviews, cybercriminals can create hard-to-detect voice clones. The result can be a frantic call from a 'grandchild' asking a grandparent for money to get out of a jam, or a demanding call from a 'CEO' ordering funds released through a bank transfer.

Suggestion: Put steps in place to verify any requests for financial transactions, especially those received via calls or voice messages, and consider using authentication questions that only legitimate business representatives would know.

2. Deepfakes in virtual meetings

Since the pandemic, many types of meetings, including board meetings, routinely take place in a virtual environment.
When board members participate virtually, there's an opening for manipulation by bad actors. That's not just the stuff of science fiction: deepfakes have already been used to influence critical business decisions and access sensitive information, and a U.S. judicial panel has even considered how deepfakes could disrupt legal trials. Chances are that images and video clips of your board members and senior leaders exist. All cybercriminals need is a few seconds of a voice recording, a video, or sometimes even a single image, plus generative AI tools, to create audio and video that most people won't be able to tell from the real thing. Think I'm exaggerating? You can see me demoing the tools and tactics here.

Suggestion: Use authentication to protect the security of any video calls. Implement multifactor authentication and establish verification procedures that span different communication channels. And, similar to the suggestion for No. 1, consider creating safe words or a verbal challenge/response procedure.

3. Synthetic media and market manipulation

Synthetic media can move markets, too. In 2023, a fake, likely AI-generated photo of an alleged explosion near the Pentagon briefly caused the S&P 500 to drop.

Suggestion: Develop crisis response plans that address the potential for synthetic media attacks, including rapid verification channels that can be used with targeted news outlets and financial partners.

4. AI-generated reputational attacks

Imagine a disgruntled employee using AI voice cloning to generate a fake audio recording of their CEO making discriminatory remarks. Or picture an AI-generated video showing a senior-level official involved in questionable activities. Both are all too possible now that AI-generated content is literally at the fingertips of anyone with an axe to grind. Even when these attempts are proven false, the damage remains. It used to be true that 'seeing is believing.' It still is, but what we're seeing may no longer be believable.
Suggestion: Aggressively monitor digital channels for synthetic content related to your organization and your key executives, board members and other representatives. Have rapid response plans in place to address any incidents that occur, and be prepared to provide evidence of manipulation.

5. LLM-powered social engineering

Large language models (LLMs) are the foundational technology behind many generative AI tools. While LLMs themselves don't access real-time information, threat actors can leverage them, often in combination with publicly available data about your organization, to craft hyper-personalized phishing campaigns and social engineering attacks. These messages can closely mimic the tone and style of internal communications, making it increasingly difficult for recipients to distinguish legitimate from malicious content. In one now widely reported incident, attackers likely used a combination of voice cloning and video deepfakes to convince an employee at a multinational firm in Hong Kong to pay out $25 million. After participating in what turned out to be a fake, multi-person video conference call, and despite some initial misgivings, the employee did as requested.

Suggestion: Train staff members to recognize the warning signs of AI-enabled impersonation, such as limited interaction or refusal to answer unexpected questions. Encourage them to trust their gut: if something feels off, it probably is, and they should pursue additional verification options. Repeated exposure to examples of the many ways bad actors attempt to infiltrate and influence organizations and employees helps keep these threats top of mind and minimizes the chances of falling prey to such attacks.
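The out-of-band verification policy recommended above (suggestions No. 1 and No. 2) can be sketched in code. This is a minimal, hypothetical illustration: the class names, channels, and required steps are assumptions for the sake of the example, not a real product API.

```python
# Hypothetical sketch of an out-of-band verification policy: a funds-transfer
# request arriving over a spoofable channel (voice, video, email) is approved
# only after confirmation via independently initiated second channels.
# All names and channel labels here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class TransferRequest:
    requester: str                      # claimed identity, e.g. "CEO"
    amount: float
    channel: str                        # channel the request arrived on
    confirmations: set = field(default_factory=set)


# Channels that AI voice/video cloning can convincingly spoof.
HIGH_RISK_CHANNELS = {"voice call", "video call", "email"}

# Out-of-band steps that must ALL be completed before release of funds.
REQUIRED_STEPS = {"callback_to_known_number", "verbal_challenge_response"}


def record_confirmation(req: TransferRequest, step: str) -> None:
    """Log a completed out-of-band confirmation step."""
    req.confirmations.add(step)


def approve(req: TransferRequest) -> bool:
    """Approve only when every required out-of-band step is done
    for requests that arrived over a spoofable channel."""
    if req.channel in HIGH_RISK_CHANNELS:
        return REQUIRED_STEPS.issubset(req.confirmations)
    return True


req = TransferRequest("CEO", 250_000.0, "voice call")
print(approve(req))                     # not yet verified: False
record_confirmation(req, "callback_to_known_number")
record_confirmation(req, "verbal_challenge_response")
print(approve(req))                     # both checks done: True
```

The point of the design is that no single channel, however convincing the voice or face on it, is sufficient on its own to move money.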

CNN correspondent walks through aftermath of deadly tornado

CNN

18-05-2025

  • CNN

Deepfake detectors fooled by expert

With AI technology creating more and more realistic deepfakes, detectors are not up to the challenge of telling what is real from what is fake, according to an industry expert. CNN's Isabel Rosales looks at how this technology can be bypassed and what you can do to protect yourself. An earlier version of this video gave an incorrect title for Perry Carpenter. He is the Chief Human Risk Management Strategist at KnowBe4.

Rare dust storm blankets Chicago

CNN

18-05-2025

  • CNN


