Latest news with #ThreatIntelligenceGroup


Mint
3 days ago
- Mint
Google uncovers malware campaign by China-linked hackers using Calendar events in a sophisticated cyberattack
In a concerning revelation, Google's Threat Intelligence Group (GTIG) has uncovered that a group of hackers linked to China used Google Calendar as a tool to steal sensitive information from individuals. The group, known as APT41 or HOODOO, is believed to have ties to the Chinese government.

According to GTIG, the attack began with a spear phishing campaign. This method involves sending carefully crafted emails to specific targets. These emails included a link to a ZIP file hosted on a compromised government website. Once the victim opened the ZIP file, they would find a shortcut file disguised as a PDF and a folder with several images of insects and spiders. However, two of these image files were fake and actually contained malicious software. When the victim clicked the shortcut, it triggered the malware and even replaced itself with a fake PDF that appeared to be about species export regulations, likely to avoid suspicion.

The malware worked in three steps. First, it decrypted and ran a file named PLUSDROP in the computer's memory. Then, it used a known Windows process to secretly run harmful code. In the final stage, a program called TOUGHPROGRESS carried out commands and stole data.

What made this attack unusual was the use of Google Calendar as a communication tool. The malware created short, zero-minute events on specific dates. These events included encrypted data or instructions hidden in their description field. The malware regularly checked these calendar events for new commands from the hacker. After completing a task, it would create another event with the stolen information.

Google said the campaign was discovered in October 2024 after it found malware spreading from a compromised government website. The tech company has since shut down the calendar accounts used by the hackers and removed other parts of their online infrastructure.
To stop similar attacks in the future, Google has improved its malware detection systems and blocked the harmful websites involved. It also alerted organisations that may have been affected and shared technical details to help them respond and protect themselves.
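The C2 pattern GTIG describes, zero-minute events whose description fields carry encoded payloads, lends itself to a simple defensive heuristic. A minimal sketch in Python; the event field names, the 40-character threshold and the base64-alphabet test are illustrative assumptions, not Google's actual detection logic:

```python
import re
from datetime import datetime

def is_zero_minute(event):
    """True if the event starts and ends at the same instant."""
    start = datetime.fromisoformat(event["start"])
    end = datetime.fromisoformat(event["end"])
    return start == end

def looks_encoded(text):
    """Crude heuristic: a long, unbroken run of base64-alphabet characters."""
    return bool(re.fullmatch(r"[A-Za-z0-9+/=\s]{40,}", text or ""))

def flag_suspicious(events):
    """Return IDs of zero-minute events carrying opaque, payload-like descriptions."""
    return [e["id"] for e in events
            if is_zero_minute(e) and looks_encoded(e.get("description"))]
```

In practice the events would come from a calendar export or API listing; the point is only that the two traits the report names (zero duration, opaque description) are cheap to test for.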


Forbes
03-04-2025
- Business
- Forbes
Stop Sleeping On AI: Why Security Teams Should Embrace The Technology
Ron Williams is the CEO and founder of .

Artificial intelligence (AI) is no longer a futuristic tool for cybersecurity. It's gone mainstream. Threat actors have integrated AI into their operations with alarming success, using tools like WormGPT, GhostGPT and even legitimate platforms like Google's Gemini AI to scale their attacks. Google's Threat Intelligence Group recently detailed how state-sponsored actors have been abusing Gemini AI to enhance reconnaissance, scripting and privilege escalation. These factors lead to a harsh reality: The asymmetry of power in AI between cybersecurity and bad actors is growing, and security teams are falling behind. If defenders don't start using AI to automate workflows, mitigate threats and improve incident response, they risk being perpetually outpaced by modern attackers. The time to act is now, not after attackers have perfected the use of AI in their operations.

ChatGPT democratized consumer AI access, revolutionizing a whole range of industries. However, cybercriminals quickly recognized its potential for malicious usage, and just a year after its launch, discussions on cybercrime networks about exploiting AI exploded, leading to an increase in AI-based attack strategies. Hundreds of thousands of ChatGPT accounts were being bought and sold on underground markets, and by mid-2023, WormGPT, a malicious chatbot designed to enhance business email compromise attacks and spear-phishing campaigns, sent shockwaves through the industry. WormGPT was marketed as an AI tool specifically trained on malicious datasets to improve cybercrime operations, prompting headlines warning of AI-powered cybercrime on the rise. But WormGPT was just the beginning. Variants like FraudGPT, DarkBERT (not to be confused with DarkBART) and GhostGPT followed.
Fast-forwarding to today, cybercriminals have found multiple ways to weaponize AI for their operations:

• Bypassing ethical constraints: Mainstream AI models like ChatGPT and Claude refuse to generate phishing emails. However, attackers discovered ways to manipulate them into compliance using prompt engineering.

• Rebranding legitimate chatbots as malicious ones: Some cybercriminals have wrapped jailbroken AI instances within custom interfaces, branding them as their own evil variants and selling access to others.

• Training AI models on malicious datasets: Rather than relying on trickery, some groups have trained their own AI models, fine-tuning them with cybercrime-related data to generate more accurate attack strategies. This is essentially how WormGPT and similar tools evolved within months.

Why Security Teams Are Hesitant

Despite clear evidence of AI's role in advancing cybercrime, many security teams remain hesitant to embrace AI defenses. This reluctance sometimes stems from three key concerns: lack of trust in AI, implementation complexity and job security fears.

Lack Of Trust In AI

Many cybersecurity professionals view AI as a 'black box' technology and are concerned that it's difficult to predict how AI will behave in a live security environment. Security teams worry that if something goes wrong, they won't be able to remediate the issue due to their lack of understanding of the model's decision-making process. However, while these concerns are valid, they can be addressed. Many AI-based workflows are built on well-documented APIs that offer transparency and allow customization. If security teams take the time to understand how AI-powered tools function in practical applications, much of their skepticism could be alleviated.

Implementation Complexity

Another major roadblock is the perceived difficulty of integrating AI into legacy security infrastructure.
A lot of organizations assume that AI adoption requires a fundamental overhaul of existing systems, which is daunting and expensive. However, security teams can start small by identifying repetitive, time-consuming tasks that AI can automate. Take vulnerability management, for instance. Consultants spend a lot of time triaging vulnerabilities, mapping them to affected assets and prioritizing remediation efforts. AI can optimize this by automatically correlating vulnerabilities with exploitability data, assessing business impact and recommending remediation priorities. A simple exercise to test AI's effectiveness is to take a common, repetitive security task and design an AI-assisted workflow to replace it. Even partial automation can yield a large return on investment in saved time and improved accuracy.

Job Displacement

Some security professionals fear that widespread AI adoption could automate them out of a job. While discussions about AI replacing analysts entirely are common in the industry, AI should be viewed as an augmentation tool rather than a replacement. The focus should be on promoting this perspective. Organizations that upskill their employees to work alongside AI will develop a stronger, more efficient security team. The bigger point here is that AI won't eliminate security teams—it will empower them. By automating time-consuming and mundane tasks, security analysts can focus on higher-value work, like investigating more complex threats, threat hunting and incident response.

How AI Helps Security Teams

Whether operating within a security operations center (SOC) or following a more agile approach, all security teams encounter repetitive tasks that can be automated. AI-powered security solutions can assist with this by:

• Automating repetitive alert investigations, reducing analyst burnout and improving response times.

• Improving detection capabilities by identifying patterns in large datasets faster than human analysts.
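The vulnerability-triage workflow described earlier — correlating severity with exploitability data and business impact, then ranking remediation — can be sketched as a small scoring function. The weights, field names and thresholds below are illustrative assumptions, not an industry standard:

```python
def remediation_priority(vuln):
    """Score a vulnerability by severity, exploitability and business impact."""
    score = vuln["cvss"] / 10.0                   # normalize CVSS to 0..1
    if vuln.get("known_exploited"):               # e.g. appears on an exploited-in-the-wild list
        score += 0.5                              # weight is an assumption
    score *= vuln.get("asset_criticality", 1.0)   # 0.5 = low-value asset, 2.0 = crown jewel
    return score

def triage(vulns, top_n=3):
    """Return the top-N vulnerabilities to remediate first."""
    return sorted(vulns, key=remediation_priority, reverse=True)[:top_n]
```

Even this toy version captures the point of the exercise: a critical-but-unexploited finding on a low-value asset can rank below a moderate finding that is actively exploited on a crown-jewel system.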
Consider a typical security analyst's workflow: They receive an alert, analyze it, extract indicators of compromise, query threat intelligence databases, determine if it's a genuine threat, document the findings and respond accordingly. AI automates much of this process, alleviating manual operational burdens.

The benefits of AI and autonomous agents extend beyond the SOC; AI can also improve web application security, agile security in software development lifecycles, penetration testing and threat intelligence gathering. Security teams don't need to overhaul their entire infrastructure overnight. Incremental AI adoption can have immediate benefits.

The Cost Of Inaction

AI is not a passing trend—it's the present and future of cybersecurity. Attackers are not waiting for defenders to catch up. They are actively refining AI-augmented attack methods, making their operations faster, more scalable and more effective. Security teams must recognize that the only way to counter AI-based cyber threats is to fight fire with fire.
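The first steps of the analyst workflow described above — parsing an alert and pulling out indicators of compromise — are exactly the kind of repetitive extraction that automates well. A minimal sketch; the patterns are deliberately simplified and would need hardening before production use:

```python
import re

# Simplified IOC patterns; real extractors handle defanged IOCs, IPv6, URLs, etc.
IOC_PATTERNS = {
    "ipv4": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "sha256": r"\b[a-fA-F0-9]{64}\b",
    "domain": r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b",
}

def extract_iocs(alert_text):
    """Pull candidate indicators of compromise out of free-form alert text."""
    found = {}
    for name, pattern in IOC_PATTERNS.items():
        hits = set(re.findall(pattern, alert_text))
        if name == "domain":
            # the domain regex also matches dotted IPs; drop purely numeric hits
            hits = {h for h in hits if not re.fullmatch(r"[\d.]+", h)}
        if hits:
            found[name] = sorted(hits)
    return found
```

The extracted indicators can then be fed straight into threat-intelligence lookups and case documentation, which is the automation payoff the article describes.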


Forbes
31-03-2025
- Forbes
NSA Warning—Change Your iPhone And Android Message Settings
Do not make this dangerous messaging mistake

Update: Republished on March 31 with a new report into the dangers of secure messaging in the workplace and a twist on WhatsApp versus Signal.

The secure messaging apps on your phone are dangerous. Not because their own security measures are vulnerable to attack — although that does happen — but because their security is only as good as your behavior. And millions of iPhone and Android users don't realize that simple mistakes can open your phone to attack. That was the crux of the NSA's warning that has now been made public and which has been headlined as a Signal vulnerability in the wake of Trump officials inadvertently inviting a journalist onto a sensitive group chat. But it's not. It's a user vulnerability. The NSA notification is a warning to change messaging settings. Nothing more.

The NSA warning last month was prompted by Google's Threat Intelligence Group discovering Russia's GRU was tricking Ukrainian officials into opening access to their Signal accounts, allowing the Russians to listen in. This wasn't a Signal flaw — the app was working as intended. And it wasn't limited to Signal. Google warned 'this threat also extends to other popular messaging applications such as WhatsApp and Telegram.'

The two 'vulnerabilities' relate to features in both Signal and WhatsApp that make them easier to use: Linked Devices and Group Links. The first enables you to sync and access your secure messaging apps on all your eligible devices. The second provides a simple way for you to invite new members into a group chat by sending them a link, rather than adding them one-by-one from within the group.

The Group Link threat only extends to the group itself, and is easily mitigated. In Signal, disable the Group Link from within the group's settings. In WhatsApp you don't have that option, but do not use links for sensitive groups; you should also set sensitive groups in WhatsApp such that only Admins can add members.
The Linked Devices option is much more dangerous as it can establish a fully sync'd replica of your messaging app on someone else's device. But again this risk is easily mitigated. In both apps there is a clear settings menu entitled 'Linked Devices.' Go there now and unlink any device you don't 100% recognize as belonging to you. If in doubt, remove. You can always add it back later if you make a mistake. On both apps, your primary phone is the base and all other devices can be linked and unlinked there.

There is a twist to this. In the Russian attack, the Signal group invite link was hijacked to link a device instead, a vulnerability in the invite coding and mechanics, but not the app itself. But there is no way for someone to link a device without it showing in your settings per above. Regularly checking those links is key. It's also worth periodically unlinking browser 'web app' links (as opposed to apps) and relinking. The other advice is to not click group links unless they're expected and you can vouch for the sender.

The NSA's other messaging advice should be common sense. Set and regularly change your app PIN and enable the screen lock. Do not share contact or status info, certainly not outside your contacts. The DOD agency also recommends keeping phone and app contacts separate, albeit that's painful for everyday use.

The concept of secure messaging is widely misunderstood. End-to-end encryption is a transmission safeguard. Content is scrambled by your device and unscrambled when it reaches a recipient. Each end (the phones in a chat) is vulnerable to a compromise of that device, a user saving content, or the wrong person invited into a group. None of these apps are bulletproof if your other security is flawed or you make a mistake.

NSA is not alone in calling out Signal as the headline act when it comes to secure commercial messaging platforms used by politicians and other officials.
America's cyber defense agency did the same in the wake of China's Salt Typhoon hacks on U.S. networks. 'Use only end-to-end encrypted communications,' CISA said. 'Adopt a free messaging application for secure communications that guarantees end-to-end encryption, such as Signal or similar app.'

With interesting timing, WhatsApp — the most popular secure messenger worldwide, which uses the same encryption protocol as Signal itself — has just made that easier. iPhone users can now select WhatsApp as their default texting and calling app. The platform update that delivers this new capability is rolling out this weekend. In Settings — Apps, select 'Default Apps' and change the 'Messaging' and 'Calls' options. But again, that doesn't change the user/device vulnerability that will always leave secure messaging at risk.

'The biggest risk of eavesdropping on a Signal conversation comes from the individual phones that the app is running on,' says Foreign Policy. 'While it's largely unclear whether the U.S. officials involved had downloaded the app onto personal or government-issued phones… smartphones are consumer devices, not at all suitable for classified U.S. government conversations.' This is especially acute given that 'an entire industry of spyware companies sells capabilities to remotely hack smartphones for any country willing to pay.' These are the forensic exploits that have plagued iPhones and Androids this year. And so just as it's critical to apply the right messaging settings, it's also critical to keep your phone updated, to avoid risky apps, and to stop clicking on links or unexpected attachments.

While Signal has taken the bulk of the headlines given the U.S. incident, in reality it's WhatsApp that's the much bigger problem. 'It's a WhatsApp world at work now,' per the Financial Times, 'and that's not always a good thing.'
As the newspaper reports, gone are the days 'you could leave [work] apps to ping away all weekend, knowing the pingers were unlikely to be asking anything more taxing than what time to meet for coffee or whether there was milk in the fridge. Those days are gone. Some time before Covid, office colleagues and work contacts began to send messages over apps once confined to social life.' And WhatsApp is very much top of that list. Ironically, the only key market that has been a holdout against it has been the U.S., where iMessage has remained the dominant secure messaging platform. But even that is now changing, with Meta publicly celebrating WhatsApp passing 100 million U.S. users last summer.

'At some point,' the FT points out, 'it no longer seemed wrong to WhatsApp one's manager, and then add a thumbs up emoji. This seemed entirely sensible at this strange, disconnected time. A few years on though, it also feels as if a dividing line between work and social life has been breached.'

Meanwhile, Signalgate has prompted a gentle spat between WhatsApp and Signal as to which is the more secure app to trade and keep secrets. 'There are big differences between Signal and WhatsApp,' Signal boss Meredith Whittaker posted, after WhatsApp boss Will Cathcart pointed out both use the same core encryption and could therefore be seen in the same bracket, notwithstanding Meta's ownership. 'Signal is the gold standard in private comms,' Whittaker said. 'WhatsApp licenses Signal's cryptography to protect message contents for consumer WhatsApp,' albeit the same level of security doesn't apply to business comms. 'Don't misunderstand — we love that WhatsApp uses our tech to raise the privacy bar of their app. Part of Signal's mission is to set, and encourage the tech ecosystem to meet, this high privacy bar. But these are key differences when it comes to meaningful privacy and the public deserves to understand them, given the stakes. Not have them clouded in marketing.'
But it's WhatsApp we need to turn to for the purest irony in this whole story. Just a few days before The Atlantic published its shocking revelations as to its inadvertent eavesdropping on a government 'eyes only' Signal group chat, WhatsApp posted on X: 'As an admin, are you letting group members add other people to the chat?' Just that, nothing more. It's almost as if the entire furor could have been foretold. It doesn't matter whether whoever added reporter Jeffrey Goldberg was an admin; the point is that the risk of those group invites is out there and requires some attention.

The bottom line though is very simple. Whether WhatsApp or Signal, both are secure and recommended for use — if used properly. Set either of them up wrong, or neglect core phone updates, settings and secure usage, and both will fail. You can read the NSA's full advisory here. Take heed and make sure you keep your work plans, your party plans and even your war plans secret.


Yahoo
20-02-2025
- Yahoo
Russian hackers target Signal accounts in growing espionage effort
Google's Threat Intelligence Group (GTIG) has identified a rise in Russian state-backed hacking attempts aimed at compromising Signal messenger accounts. These attacks primarily target individuals of interest to Russia's intelligence services, including military personnel, government officials, journalists, and activists. While these efforts are currently tied to Russia's war in Ukraine, experts warn that similar tactics may soon be adopted by other threat actors worldwide.

The broader concern extends beyond Signal, as Russian-aligned groups have also been observed targeting messaging platforms like WhatsApp and Telegram using comparable methods, according to the group's latest report published on Feb. 19. Experts warn that these attacks signal a growing global trend in cyber espionage, where governments and hacking groups are increasingly seeking to infiltrate secure messaging apps.

The primary technique used in these attacks involves exploiting Signal's "linked devices" feature, which allows users to connect additional devices to their accounts. Hackers have crafted malicious QR codes that, when scanned, link a victim's Signal account to a hacker-controlled device. This enables them to intercept messages in real time without needing direct access to the victim's phone. Phishing campaigns distributing these malicious QR codes have been disguised as legitimate Signal security alerts, group invitations, or even official device-pairing instructions from the Signal website. In some cases, hackers have embedded these QR codes within fake applications designed to mimic software used by the Ukrainian military.

Beyond remote phishing, Russian cyber operatives have also deployed this tactic in battlefield scenarios. The group APT44—also known as Sandworm, a unit linked to Russia's military intelligence agency (GRU)—has reportedly used the method on captured devices.
Soldiers' Signal accounts are being linked to Russian-controlled infrastructure, allowing continued surveillance of sensitive conversations. This approach is difficult to detect because Signal does not have a centralized system for flagging new linked devices, meaning a successful breach could remain unnoticed for an extended period.

Signal, in collaboration with Google, has since strengthened its security measures to counter these phishing attempts. The latest updates for both Android and iOS include enhanced protections designed to prevent unauthorized device linking. Users are urged to update their apps to the newest version and remain cautious of suspicious QR codes or unexpected device-linking requests.
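The core of the phishing trick is that a device-pairing request can be dressed up as a harmless group invite, yet the two produce very different URIs when a QR code is decoded. A sketch of a pre-scan check; the URI shapes assumed here (an `sgnl://linkdevice` scheme for pairing, a `signal.group` host for invites) are based on publicly observed Signal links, not official documentation:

```python
from urllib.parse import urlparse

def classify_signal_uri(uri):
    """Distinguish a device-pairing URI from an ordinary group invite."""
    parsed = urlparse(uri)
    # Assumed shape: sgnl://linkdevice?... — scanning this pairs a NEW device
    # to your account, granting it a live copy of your conversations.
    if parsed.scheme == "sgnl" and parsed.netloc == "linkdevice":
        return "device-link"
    # Assumed shape: https://signal.group/#... — this only joins a group chat.
    if parsed.scheme == "https" and parsed.hostname == "signal.group":
        return "group-invite"
    return "unknown"
```

A QR decoded to a "device-link" URI that arrived disguised as a group invitation or a "security alert" is exactly the lure the report describes, and is a reason to stop.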