
Latest news with #vishing

Five AI-Powered Threats Senior Leaders Should Be Aware Of

Forbes

22-05-2025

  • Business
  • Forbes


Perry Carpenter is Chief Human Risk Management Strategist for KnowBe4, a cybersecurity platform that addresses human risk management.

We're all too familiar with warnings about phishing scams, and they're still a security issue we need to be aware of. But a wide range of other concerns, beyond phishing, should have your attention, and you should be sharing them with colleagues so they can collaborate with you to protect your company and assets. We're moving into what I call the "Exploitation Zone": a widening gap between technological advancement and human adaptability. It is, admittedly, tough to keep up unless, like me, you're singularly focused on data security and staying on top of increasingly sophisticated ploys by bad actors to exploit your human nature. Here are five AI-powered threats you need to understand and take steps to respond to.

1. Voice Phishing (Vishing)

It's not just email we have to worry about these days; today's hackers can spoof more than email addresses. One of the fastest-emerging scams is voice phishing, or vishing. According to CrowdStrike, vishing attacks increased 442% between the first and second half of 2024. Using publicly available voice snippets from earnings calls, podcasts, video calls or media interviews, cybercriminals can create hard-to-detect voice clones. This can take the form of a frantic call from a "grandchild" asking a grandparent for money to get them out of a jam, or a demanding call from a "CEO" ordering funds released through a bank transfer.

Suggestion: Put steps in place to verify any requests for financial transactions, especially those received via calls or voice messages; consider using authentication questions that only legitimate business representatives would know.

2. Deepfakes In Virtual Meetings

Since the pandemic, it's not unusual for many types of meetings to take place in a virtual environment, including board meetings. When your board members participate virtually, there's a chance for manipulation by bad actors. That's not just the stuff of science fiction: deepfakes have already been used to influence critical business decisions or access sensitive information, and a U.S. judicial panel has even considered how deepfakes could disrupt legal trials. Chances are that images and video clips of your board members and senior leaders exist. All cybercriminals need is access to a few seconds of a voice recording, a video, or sometimes even a single image, plus generative AI tools, to create audio and video that most people won't be able to distinguish from the real thing. Think I'm exaggerating? You can see me demoing the tools and tactics here.

Suggestion: Make sure you're using authentication to protect the security of any video calls. Implement multifactor authentication and establish verification procedures that involve different communication channels. And, similar to the suggestion for No. 1, consider creating safe words or a verbal challenge/response procedure.

3. Synthetic Media And Market Manipulation

In 2023, a fake, likely AI-generated photo of an alleged explosion near the Pentagon briefly caused the S&P 500 to drop.

Suggestion: Develop crisis response plans to address the potential for synthetic media attacks, including rapid verification channels that can be used with targeted news outlets and financial partners.

4. Weaponized AI-Generated Content

Imagine a disgruntled employee using AI voice cloning to generate a fake audio recording of their CEO making discriminatory remarks. Or picture an AI-generated video showing a senior-level official involved in questionable activities. It's all too possible now that AI-generated content is literally at the fingertips of anyone with an axe to grind. Even when these attempts are proven false, the damage remains. It used to be that "seeing is believing." People still instinctively believe what they see, but what they're seeing may no longer be real.

Suggestion: Aggressively monitor digital channels for synthetic content related to your organization and your key executives, board members and other representatives. Have rapid response plans in place to address any incidents that occur, and be prepared to provide evidence of manipulation.

5. LLM-Enabled Social Engineering

Large language models (LLMs) are the foundational technology behind many generative AI tools. While LLMs themselves don't access real-time information, threat actors can leverage them, often in combination with publicly available data about your organization, to craft hyper-personalized phishing campaigns and social engineering attacks. These messages can closely mimic the tone and style of internal communications, making it increasingly difficult for recipients to distinguish between legitimate and malicious content. In a now widely reported incident, what was likely a combination of voice cloning and video deepfakes was used to convince an employee at a multinational firm in Hong Kong to pay out $25 million. After participating in what turned out to be a fake, multi-person video conference call, and despite some initial misgivings, the employee did as requested.

Suggestion: Train staff members to recognize the warning signs of AI-enabled impersonation, such as limited interaction or refusal to answer unexpected questions. And encourage them to trust their gut: if something feels off, it probably is, and they should pursue additional verification options.

Repeated exposure to examples of the many ways bad actors attempt to infiltrate and influence organizations and employees can help keep these threats top of mind and minimize the chances of falling prey to an attack.

Don't be a victim! FBI warns iOS, Android users about the latest scam

Phone Arena

17-05-2025

  • Phone Arena


The latest warning from the FBI for iOS and Android users comes in the form of a Public Service Announcement. It says that since last month, malicious actors have been impersonating senior US officials to target individuals, many of them current or former senior US federal or state government officials themselves. The FBI suggests that if you receive a message from someone claiming to be a senior US official, "do not assume that it is authentic."

These attacks have come in the form of fake texts, a practice known as "smishing," and fake AI-generated phone calls, a practice known as "vishing." The texts and calls claim to come from senior US officials and attempt to build rapport with their targets. Earning that trust, the FBI notes, goes a long way toward convincing victims to hand over personal data, including the credentials they use to sign in to banking apps, securities apps, crypto wallets, and other highly sensitive accounts accessible via their mobile devices.

Using smishing, vishing, and spear phishing (malicious emails designed to trick the victim into revealing personal data), the threat actor delivers malware or includes hyperlinks that send the victim to an attacker-controlled site that steals usernames and passwords. In smishing attacks, the attacker typically generates the phone numbers used to text or call, then pretends to be a business associate or a relative to engage the target and collect login credentials.

As noted above, the FBI is concerned with the latest smishing and vishing attacks, in which victims receive texts and AI-generated voice messages that claim to be from senior US officials. The FBI suggests that the first thing you should do if you receive one of these calls or texts is verify the person and organization that allegedly contacted you: "Before responding, research the originating number, organization, and/or person purporting to contact you. Then independently identify a phone number for the person and call to verify their authenticity."

The FBI also says, "Carefully examine the email address; messaging contact information, including phone numbers; URLs; and spelling used in any correspondence or communications. Scammers often use slight differences to deceive you and gain your trust. For instance, actors can incorporate publicly available photographs in text messages, use minor alterations in names and contact information, or use AI-generated voices to masquerade as a known contact."

Take a long look at any images or videos sent to you for "subtle imperfections." Hands or feet may be distorted in AI-generated images, and you might catch irregular facial features, unrealistic accessories such as glasses or jewelry, fake-looking shadows, or unnatural movements in videos, including lag between mouth movements and the words being spoken. Listen closely to try to distinguish a real call from an AI-generated one. If you can't judge the authenticity of a message from someone trying to reach you, you can call the FBI for help.

In addition, the FBI says you should not share sensitive information or an associate's contact number with people you've only met online or over the phone. The same applies to sending cash, gift cards or cryptocurrency: do not send these items to people you've only met online or on the phone. Do not click on links in texts or emails you receive. Additionally, "Never open an email attachment, click on links in messages, or download applications at the request of or from someone you have not verified." Finally, set up two-factor authentication on all apps that allow it; never disable it, and never disclose the code to anyone.
