
Kaspersky discovers sophisticated Chrome zero-day exploit used in active attacks
Kaspersky has identified and helped patch a sophisticated zero-day vulnerability in Google Chrome (CVE-2025-2783) that allowed attackers to bypass the browser's sandbox protection. The exploit, discovered by Kaspersky's Global Research and Analysis Team (GReAT), required no user interaction beyond clicking a malicious link and demonstrated exceptional technical complexity.
In mid-March 2025, Kaspersky detected a wave of infections triggered when users clicked personalized phishing links delivered via email. After clicking, no additional action was needed to compromise their systems. Once Kaspersky's analysis confirmed that the exploit leveraged a previously unknown vulnerability in the latest version of Google Chrome, Kaspersky swiftly alerted Google's security team. A security patch for the vulnerability was released on March 25, 2025.
Kaspersky researchers dubbed the campaign 'Operation ForumTroll', as attackers sent personalized phishing emails inviting recipients to the 'Primakov Readings' forum. These lures targeted media outlets, educational institutions, and government organizations in Russia. The malicious links were extremely short-lived to evade detection, and once the exploit was taken down, they typically redirected visitors to the legitimate 'Primakov Readings' website.
The zero-day vulnerability in Chrome was only one link in a chain of at least two exploits: a still-unobtained remote code execution (RCE) exploit apparently initiated the attack, and the sandbox escape discovered by Kaspersky constituted the second stage. Analysis of the malware's functionality suggests the operation was designed primarily for espionage, and all evidence points to an Advanced Persistent Threat (APT) group.
'This vulnerability stands out among the dozens of zero-days we've discovered over the years,' said Boris Larin, principal security researcher at Kaspersky GReAT. 'The exploit bypassed Chrome's sandbox protection without performing any obviously malicious operations – it's as if the security boundary simply didn't exist. The technical sophistication displayed here indicates development by highly skilled actors with substantial resources. We strongly advise all users to update their Google Chrome and any Chromium-based browser to the latest version to protect against this vulnerability.'
Google has credited Kaspersky for uncovering and reporting the issue, reflecting the company's ongoing commitment to collaboration with the global cybersecurity community and ensuring user safety.
Kaspersky continues to investigate Operation ForumTroll. Further details, including a technical analysis of the exploits and malicious payload, will be released in a forthcoming report once Google Chrome user security is assured. Meanwhile, all Kaspersky products detect and protect against this exploit chain and associated malware, ensuring users are shielded from the threat.
This discovery follows Kaspersky GReAT's previous identification of another Chrome zero-day (CVE-2024-4947), which was exploited last year by the Lazarus APT group in a cryptocurrency theft campaign. In that case, Kaspersky researchers found a type confusion bug in Google's V8 JavaScript engine that enabled attackers to bypass the browser's security features via a fake crypto-game website.
To safeguard against sophisticated attacks like these, Kaspersky security experts recommend implementing these key protective measures:
Ensure timely software updates: Regularly patch your operating system and browsers—especially Google Chrome—so attackers cannot exploit newly discovered vulnerabilities.
Adopt a multi-layered security approach: Along with endpoint protection, consider solutions like Kaspersky Next XDR Expert that leverage AI/ML to correlate data from multiple sources and automate detection and response against advanced threats and APT campaigns.
Leverage threat intelligence services: Up-to-date, contextual information—such as Kaspersky Threat Intelligence—helps you stay informed about emerging zero-day exploits and the latest attacker techniques.
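Checking for timely updates can be automated. The sketch below is a minimal, illustrative Python helper that compares an installed Chrome version string against the first patched build. The build number used here reflects our reading of Google's March 25, 2025 stable-channel update for Windows and should be verified against Google's official advisory; the function names are hypothetical.

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted Chrome version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# First Chrome build carrying the CVE-2025-2783 fix (assumed from
# Google's March 25, 2025 stable-channel update; verify before relying on it).
PATCHED = parse_version("134.0.6998.177")

def is_patched(installed: str) -> bool:
    """True if the installed version is at or above the patched build.

    Python compares tuples element by element, so 134.0.6998.178
    correctly ranks above 134.0.6998.177.
    """
    return parse_version(installed) >= PATCHED
```

In practice, the installed version would come from the browser itself (for example, the output of `google-chrome --version` on Linux); the comparison logic above stays the same regardless of how the string is obtained.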
About Kaspersky
Kaspersky is a global cybersecurity and digital privacy company founded in 1997. With over a billion devices protected to date from emerging cyberthreats and targeted attacks, Kaspersky's deep threat intelligence and security expertise is constantly transforming into innovative solutions and services to protect businesses, critical infrastructure, governments and consumers around the globe. The company's comprehensive security portfolio includes leading endpoint protection, specialized security products and services, as well as Cyber Immune solutions to fight sophisticated and evolving digital threats. We help over 200,000 corporate clients protect what matters most to them.
Learn more at www.kaspersky.com.