Latest news with #badactors


Forbes
2 days ago
- Business
- Forbes
Microsoft Confirms Password Deletion—Now Just 8 Weeks Away
Here's when passwords will be deleted. Microsoft wants to delete passwords for its billion-plus users, declaring that 'the password era is ending' against a backdrop of hundreds of millions of stolen email addresses and passwords. 'Bad actors know' passwords are finished, Microsoft says, 'which is why they're desperately accelerating password-related attacks while they still can.' All of which amplifies the risk for anyone yet to upgrade their account security.

In parallel, Microsoft is making another headline change: deleting passwords for millions of users just eight weeks from now. Anyone using Microsoft Authenticator is being warned that 'from August 2025, your saved passwords will no longer be accessible and any generated passwords not saved will be deleted.' You must act now. Here are your deadlines:

- After July 2025, 'any payment information stored in Authenticator will be deleted from your device.'
- After August 2025, 'your saved passwords will no longer be accessible in Authenticator and any generated passwords not saved will be deleted.'

The company's solution is to move autofill first, and then any form of password management, to Edge. 'Your saved passwords (but not your generated password history) and addresses are securely synced to your Microsoft account, and you can continue to access them and enjoy seamless autofill functionality with Microsoft Edge.'

Passwords are ending in Authenticator

Microsoft has added an Authenticator splash screen with a 'Turn on Edge' button as its ongoing campaign to switch users to its own browser continues. It's not just passwords, of course: there are endless warnings and nags within Windows, and even pointers within security advisories, to switch to Edge for safety and security. Microsoft says that 'to continue to use generated passwords, save them from Generator history (via or from the Password tab) into your saved passwords.'
Ironically, Microsoft's Authenticator will continue to support passkeys, and that's what all users should be adopting now. Forget old-school passwords and two-factor authentication (2FA): all critical accounts should have passkeys added where available, especially your Microsoft and Google accounts. Once that's done, Microsoft wants users to delete their passwords so that no legacy vulnerability remains, although Google has not gone quite that far yet. At a minimum, you should replace SMS 2FA with an app- or key-based code. FIDO's latest research reports that 'over 35% of people had at least one of their accounts compromised due to password vulnerabilities… This is significant for passkey adoption, as 54% of people familiar with passkeys consider them to be more convenient than passwords, and 53% believe they offer greater security.'


Forbes
22-05-2025
- Business
- Forbes
Five AI-Powered Threats Senior Leaders Should Be Aware Of
Perry Carpenter is Chief Human Risk Management Strategist for KnowBe4, a cybersecurity platform that addresses human risk management.

We're all too familiar with warnings about phishing scams, and they're still a security issue we need to be aware of. But a wide range of other concerns, beyond phishing, should have your attention, and you should be sharing them with colleagues so they can collaborate with you to protect your company and its assets. We're moving into what I call the 'Exploitation Zone': a widening gap between technological advancement and human adaptability. It is, admittedly, tough to keep up unless, like me, you're singularly focused on data security and on staying on top of increasingly sophisticated ploys by bad actors to exploit human nature. Here are five AI-powered threats you need to understand and take steps to respond to.

It's not just emails we have to worry about these days; today's hackers can spoof more than email addresses. One of the fastest-emerging scams is voice phishing, or vishing. According to CrowdStrike, vishing attacks increased 442% between the first and second halves of 2024. Using publicly available voice snippets from earnings calls, podcasts, video calls or media interviews, cybercriminals can create hard-to-detect voice clones. This can take the form of a frantic call from a 'grandchild' to a grandparent asking for money to get them out of a jam, or a demanding call from a 'CEO' to release funds through a bank transfer.

Suggestion: Put steps in place to verify any requests for financial transactions, especially those received via calls or voice messages; consider using authentication questions that only legitimate business representatives would know.

Since the pandemic, it's not unusual for many types of meetings to take place in a virtual environment. That includes board meetings.
When your board members participate virtually, there's a chance for manipulation by bad actors. That's not just the stuff of science fiction: deepfakes have already been used to influence critical business decisions and to access sensitive information, and a U.S. judicial panel has even considered how deepfakes could disrupt legal trials. Chances are that images and video clips of your board members and senior leaders already exist. All cybercriminals need is a few seconds of a voice recording, a video, or sometimes even a single image, plus generative AI tools, to create audio and video that most people won't be able to distinguish from the real thing. Think I'm exaggerating? You can see me demoing the tools and tactics here.

Suggestion: Make sure you're using authentication to protect the security of any video calls. Implement multifactor authentication and establish verification procedures that involve different communication channels. And, similar to the suggestion for No. 1, consider creating safe words or a verbal challenge/response procedure.

In 2023, a fake, likely AI-generated photo of an alleged explosion near the Pentagon briefly caused the S&P 500 to drop.

Suggestion: Develop crisis response plans to address the potential for synthetic media attacks, including rapid verification channels that can be used with targeted news outlets and financial partners.

Imagine a disgruntled employee using AI voice cloning to generate a fake audio recording of their CEO making discriminatory remarks. Or picture an AI-generated video showing a senior-level official involved in questionable activities. It's all too possible with AI-generated content now literally at the fingertips of anyone with an axe to grind. Even when these attempts are proven false, the damage remains. It used to be true that 'seeing is believing.' We still believe what we see, but what we're seeing may not actually be real.
Suggestion: Be aggressive in monitoring digital channels for synthetic content related to your organization and your key executives, board members and other representatives. Have rapid response plans in place to address any incidents that occur, and be prepared to provide evidence of manipulation.

Large language models (LLMs) are the foundational technology behind many generative AI tools. While LLMs themselves don't access real-time information, threat actors can leverage these tools, often in combination with publicly available data about your organization, to craft hyper-personalized phishing campaigns and social engineering attacks. These messages can closely mimic the tone and style of internal communications, making it increasingly difficult for recipients to distinguish legitimate content from malicious content.

In a now widely reported incident, what was likely a combination of voice cloning and video deepfakes was used to convince an employee at a multinational firm in Hong Kong to pay out $25 million. After participating in what turned out to be a fake, multi-person video conference call, and despite some initial misgivings, the employee did as requested.

Suggestion: Train staff members to recognize the warning signs of AI-enabled impersonation, such as limited interaction or refusal to answer unexpected questions. And encourage them to trust their gut: if something feels off, it probably is, and they should pursue additional verification.

Repeated exposure to information and examples of the many ways bad actors attempt to infiltrate and influence organizations and employees can help keep these threats top of mind and minimize the chances of falling prey to an attack.
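The challenge/response verification suggested above can be as simple as deriving a short code from a pre-shared secret and a one-time challenge sent over a second channel. The following is a hypothetical sketch, not a standard protocol; the function names and the 8-character response length are illustrative choices:

```python
# Hypothetical out-of-band challenge/response sketch for verifying a caller's
# identity. Illustrative only; real deployments need secret management,
# expiry, and replay protection.
import hashlib
import hmac
import secrets


def issue_challenge() -> str:
    # Random one-time challenge, sent over a SECOND channel (e.g. a known
    # chat account), never over the channel where the request arrived.
    return secrets.token_hex(8)


def expected_response(shared_secret: bytes, challenge: str) -> str:
    # Both parties derive the same short code from the pre-shared secret,
    # so an attacker who only heard the challenge cannot answer it.
    digest = hmac.new(shared_secret, challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:8]


def verify(shared_secret: bytes, challenge: str, response: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected_response(shared_secret, challenge), response)
```

A deepfaked 'CEO' on a video call can imitate a face and a voice, but cannot compute the response without the secret, which is why this kind of check defeats impersonation that fools the eye and ear.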