Latest news with #HarmonicSecurity


Techday NZ
01-08-2025
- Business
- Techday NZ
Sensitive data exposure rises with employee use of GenAI tools
Harmonic Security has released its quarterly analysis finding that a significant proportion of data shared with Generative AI (GenAI) tools and AI-enabled SaaS applications by employees contains sensitive information. The analysis was conducted on a dataset comprising 1 million prompts and 20,000 files submitted to 300 GenAI tools and AI-enabled SaaS applications between April and June. According to the findings, 22% of files (4,400 in total) and 4.37% of prompts (43,700 in total) included sensitive data. The categories of sensitive data encompassed source code, access credentials, proprietary algorithms, merger and acquisition (M&A) documents, customer or employee records, and internal financial information.

Use of new GenAI tools
The data highlights that in the second quarter alone, organisations on average saw employees begin using 23 previously unreported GenAI tools. This expanding variety of tools increases the administrative load on security teams, who are required to vet each tool to ensure it meets security standards. A notable proportion of AI tool use occurs through personal accounts, which may be unsanctioned or lack sufficient safeguards. Almost half (47.42%) of sensitive uploads to Perplexity were made via standard, non-enterprise accounts. The numbers were lower for other platforms, with 26.3% of sensitive data entering ChatGPT through personal accounts, and just 15% for Google Gemini.

Data exposure by platform
Analysis of sensitive prompts identified ChatGPT as the most common origin point in Q2, accounting for 72.6%, followed by Microsoft Copilot with 13.7%, Google Gemini at 5.0%, Claude at 2.5%, Poe at 2.1%, and Perplexity at 1.8%. Code leakage represented the most prevalent form of sensitive data exposure, particularly within ChatGPT, Claude, DeepSeek, and Baidu Chat.

File uploads and risks
The report found that, on average, organisations uploaded 1.32GB of files in the second quarter, with PDFs making up approximately half of all uploads. Of these files, 21.86% contained sensitive data. The concentration of sensitive information was higher in files than in prompts. For example, files accounted for 79.7% of all stored credit card exposure incidents, 75.3% of customer profile leaks, and 68.8% of employee personally identifiable information (PII) incidents. Files also accounted for 52.6% of exposure volume related to financial projections.

Less visible sources of risk
GenAI risk does not only arise from well-known chatbots. Increasingly, regular SaaS tools that integrate large language models (LLMs) - often without clear labelling as GenAI - are becoming sources of risk as they access and process sensitive information. Canva was reportedly used for documents containing legal strategy, M&A planning, and client data. Replit was used for proprietary code and access keys, while Grammarly and Quillbot edited contracts, client emails, and internal legal content.

International exposure
Use of Chinese GenAI applications was cited as a concern. The study found that 7.95% of employees in the average enterprise engaged with a Chinese GenAI tool, leading to 535 distinct sensitive exposure incidents. Within these, 32.8% were related to source code, access credentials, or proprietary algorithms, 18.2% involved M&A documents and investment models, 17.8% exposed customer or employee PII, and 14.4% contained internal financial data.
Preventative measures
"The good news for Harmonic Security customers is that this sensitive customer data, personally identifiable information (PII), and proprietary file contents never actually left any customer tenant, it was prevented from doing so. But had organizations not had browser-based protection in place, sensitive information could have ended up training a model, or worse, in the hands of a foreign state. AI is now embedded in the very tools employees rely on every day and in many cases, employees have little knowledge they are exposing business data." Harmonic Security Chief Executive Officer and Co-founder Alastair Paterson made this statement, referencing the protections offered to their customers and the wider risks posed by the pervasive nature of embedded AI within workplace tools.

Harmonic Security advises enterprises to seek visibility into all tool usage – including tools available on free tiers and those with embedded AI – to monitor the types of data being entered into GenAI systems, and to enforce context-aware controls at the data level. The recent analysis utilised the Harmonic Security Browser Extension, which records usage across SaaS and GenAI platforms and sanitises the information for aggregate study. Only anonymised and aggregated data from customer environments was used in the analysis.
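To make the kind of prompt-level monitoring described above more concrete, the sketch below shows one simple way a client-side check could flag sensitive content before a prompt leaves the organisation. It is an illustrative example only, not Harmonic Security's detection logic: the regular expressions, category names, and the Luhn check are assumptions chosen for demonstration, and a real deployment would need much broader coverage (source code, M&A language, customer records) and far better false-positive handling.

```python
# Illustrative sketch only: a minimal pre-submission prompt scanner.
# NOT Harmonic Security's detection logic; patterns, category names, and
# the Luhn check are assumptions chosen for demonstration purposes.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def luhn_ok(candidate: str) -> bool:
    """Luhn checksum to cut false positives on credit-card-like numbers."""
    digits = [int(c) for c in candidate if c.isdigit()]
    if not 13 <= len(digits) <= 16:
        return False
    total = 0
    for i, n in enumerate(reversed(digits)):
        if i % 2 == 1:
            n *= 2
            if n > 9:
                n -= 9
        total += n
    return total % 10 == 0

def scan_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data categories detected in a prompt."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(prompt):
            # Only count card-like numbers that pass the checksum.
            if label == "credit_card" and not luhn_ok(match.group()):
                continue
            hits.append(label)
            break
    return hits

if __name__ == "__main__":
    sample = "Customer 4111 1111 1111 1111 reported a failed charge, key AKIAABCDEFGHIJKLMNOP"
    print(scan_prompt(sample))  # ['credit_card', 'aws_access_key']
```

Running the example prints both categories, illustrating how a single prompt can carry several kinds of sensitive data at once; a browser-based control would make its allow-or-block decision on the basis of such detections.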


Axios
31-07-2025
- Business
- Axios
Workers are spilling secrets to chatbots
Sensitive corporate data appeared in more than 4% of generative AI prompts and over 20% of uploaded files in the second quarter of this year, according to new research from Harmonic Security, released Thursday.

The big picture: The problem isn't new, but as workplace genAI use increases, many employers still lack or don't enforce AI policies, causing employees to use bots in secret or without proper training.

By the numbers: Harmonic Security sampled a million prompts and 20,000 files submitted to 300 genAI tools and AI-enabled SaaS applications between April and June. 43,700 of the prompts (4.4%) and 4,400 of the uploaded files (22%) contained sensitive information.

Between the lines: Personal and free chatbot accounts make up a large share of corporate data exposure. Nearly half (47.42%) of sensitive uploads to Perplexity were from users with standard (non-enterprise) accounts. About a quarter of the prompts with sensitive information came through the free version of ChatGPT, and another 15% of sensitive prompts were submitted via free versions of Google Gemini accounts.

Zoom in: Overall, including free and paid tiers, ChatGPT was by far the biggest source of prompt-based information exposure, followed by Microsoft Copilot and Google Gemini. Code was the most common type of sensitive data sent to chatbots. Harmonic says code was "especially prevalent in ChatGPT, Claude, DeepSeek and Baidu Chat." The number of prompts containing proprietary code was disproportionately high in Claude, which is often regarded as the best AI tool for coders. Sensitive prompts to ChatGPT involved M&A planning, financial modeling, and investor communications.

The intrigue: Tools that feel safe — like document editors or design platforms — may now include genAI features trained on user data, creating exposure risk that bypasses traditional controls, Harmonic says. Harmonic found that Canva, Replit, Grammarly, and other tools with LLMs embedded inside them were used for legal strategy, internal emails, client data, and code. These uses were often not flagged as AI tools by corporate systems.


Business Wire
17-07-2025
- Business
- Business Wire
1 in 12 Employees Use at Least One Chinese GenAI Tool at Work, Reveals New Analysis of 14,000 End Users
LONDON & SAN FRANCISCO--(BUSINESS WIRE)--Harmonic Security has today released new research revealing widespread use of Chinese-developed generative AI (GenAI) applications within the workplace. The behavioral analysis, conducted over 30 days across a sample of approximately 14,000 end users in the United States and United Kingdom, finds that 7.95%, or nearly one in 12 employees, used at least one Chinese GenAI tool.

Among the 1,059 users who engaged with Chinese GenAI tools, Harmonic Security detected 535 incidents of sensitive data exposure. The majority of exposure occurred via DeepSeek, which accounted for roughly 85% of the total, followed by Moonshot Kimi, Qwen, Baidu Chat and Manus. In terms of what sensitive data was exposed, code and development artifacts represented the largest category, making up 32.8% of the total. This included proprietary code, access keys, and internal logic. This was followed by mergers & acquisitions data (18.2%), personally identifiable information (PII) (17.8%), financial information (14.4%), customer data (12.0%), and legal documents (4.9%). Engineering-heavy organizations were found to be particularly exposed, as developers increasingly turn to GenAI for coding assistance, potentially without realizing the implications of submitting internal source code, API keys, or system architecture into foreign-hosted models.

Alastair Paterson, CEO and co-founder of Harmonic Security, comments: 'All data submitted to these platforms should be considered property of the Chinese Communist Party given a total lack of transparency around data retention, input reuse, and model training policies, exposing organizations to potentially serious legal and compliance liabilities. But these apps are extremely powerful with many outperforming their US counterparts, depending on the task. This is why employees will continue to use them but they're effectively blind spots for most enterprise security teams.'

Paterson continues: 'Blocking alone is rarely effective and often misaligned with business priorities. Even in companies willing to take a hardline stance, users frequently circumvent controls. A more effective approach is to focus on education and train employees on the risks of using unsanctioned GenAI tools, especially Chinese-hosted platforms. We also recommend providing alternatives via approved GenAI tools that meet developer and business needs. Finally, enforce policies that prevent sensitive data, particularly source code, from being uploaded to unauthorized apps. Organizations that avoid blanket blocking and instead implement light-touch guardrails and nudges see up to a 72% reduction in sensitive data exposure, while increasing AI adoption by as much as 300%.'

The data for this analysis was collected using insights from Harmonic Security Protect, which monitors user behavior around SaaS-based GenAI apps. All data was anonymized and sanitized prior to analysis. The dataset included file upload volumes, app usage frequency, and prompt-level detections of sensitive content exposure. To read the full report, please go to:

About Harmonic
Harmonic Security lets your teams adopt AI tools safely by protecting sensitive data in real time with minimal effort. It gives you full control and stops leaks so your teams can innovate confidently. For more information, visit
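Paterson's recommendation in the release above (approved alternatives, light-touch guardrails and nudges rather than blanket blocking, and hard stops only for the riskiest data such as source code) can be pictured as a small policy function. The sketch below is a hypothetical illustration, not Harmonic Security Protect's actual policy engine: the endpoint names, category labels, and tiering are assumptions made purely for demonstration.

```python
# Illustrative sketch only: a light-touch guardrail policy of the kind the
# report describes (nudge first, block only the highest-risk data).
# Tool lists, category names, and tiers are assumptions for demonstration,
# not Harmonic Security's actual policy engine.
from dataclasses import dataclass

# Hypothetical approved enterprise endpoints.
SANCTIONED_TOOLS = {"chatgpt-enterprise.example.com", "copilot.example.com"}
# Categories treated as too risky to send to unsanctioned tools.
HIGH_RISK_CATEGORIES = {"source_code", "access_credentials", "mna_documents"}

@dataclass
class Decision:
    action: str   # "allow", "nudge", or "block"
    reason: str

def evaluate(destination: str, categories: set[str]) -> Decision:
    """Decide how to handle a prompt or file upload bound for a GenAI tool."""
    if destination in SANCTIONED_TOOLS:
        return Decision("allow", "approved enterprise tool")
    high_risk = categories & HIGH_RISK_CATEGORIES
    if high_risk:
        return Decision("block", f"high-risk data ({', '.join(sorted(high_risk))}) bound for an unsanctioned tool")
    if categories:
        return Decision("nudge", "remind the user of the approved alternative before sending")
    return Decision("allow", "no sensitive data detected")

if __name__ == "__main__":
    # Hypothetical destinations and detected categories.
    print(evaluate("deepseek.example.cn", {"source_code"}))
    print(evaluate("deepseek.example.cn", {"customer_records"}))
    print(evaluate("chatgpt-enterprise.example.com", {"source_code"}))
```

The three-tier outcome (allow, nudge, block) mirrors the report's point that steering users toward sanctioned tools can cut sensitive data exposure without suppressing AI adoption outright.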


Zawya
09-06-2025
- Business
- Zawya
How unsanctioned staff AI use exposes firms to data breaches
As chatbots continue to grow in prominence across the globe and grab the attention of billions of people, a silent problem of privacy breaches is brewing, putting at risk companies that process scores of personal data. Cybersecurity firm Harmonic Security analysed over 176,000 prompts input by about 8,000 users into popular generative (gen) AI platforms like ChatGPT, Google's Gemini, Perplexity AI, and Microsoft's Copilot, and found that troves of sensitive information make their way into the platforms through the prompts.

In the quarter to March 2025, about 6.7 percent of the prompts tracked contained sensitive information including customer personal data, employee data, company confidential legal and finance details, or even sensitive code. About 30 percent of the sensitive data were legal and finance data on companies' planned mergers or acquisitions, investment portfolios, legal discourse, billing and payment, sales pipelines, or even financial projections. Customer data like credit card numbers, transactions, or profiles also made their way to these platforms through the prompts, as did employee information like payroll details and employment profiles. Developers seeking to improve or perfect their code using genAI tools also inadvertently passed copyrighted or intellectual property material, security keys, and network information into the bots, exposing their companies to fraudsters.

Asked about the safety of such information, chatbots like ChatGPT always say the information is safe and is not shared with third parties. Even their terms of service say as much, but experts have a warning. While the information may seem secure within the bots and pose no threat of breach, the experts say it is time companies start checking and restricting what information their employees feed into these platforms, or risk massive data breaches.

'One of the privacy risks when using AI platforms is unintentional data leakage,' warns Anna Collard, senior vice president for content strategy at cybersecurity firm KnowBe4 Africa. 'Many people don't realise just how much sensitive information they're inputting.' 'Cyber hygiene now includes AI hygiene. This should include restricting access to genAI tools without oversight or only allowing those approved by the company.'

While a majority of companies around the globe now acknowledge the importance of AI in their operations and are beginning to adopt it, only a few organisations have policies or checks for AI output. According to McKinsey's latest State of AI survey, which interviewed business leaders across the globe, only 27 percent of companies fully review content generated by AI. Forty-three percent of companies check less than 40 percent of such content.

But AI use is growing by the minute. Large language models (LLMs) like ChatGPT have overtaken social media apps, which have long been digital magnets for user visits and hours of daily interaction. Multiple studies, including the one by McKinsey, show that today nearly three in four employees use genAI to complete simple tasks like writing a speech, proofreading a write-up, writing an email, analysing a document, generating a quotation, or even writing computer programmes. The rapid proliferation of Chinese-based LLMs like DeepSeek is also seen as increasing the threat of data breaches to companies.
Over the past year, there has been an avalanche of new Chinese chatbots, including Baidu Chat, Ernie Bot, Qwen Chat, Manus, and Kimi Moonshot, among others. 'The Chinese government can likely just request access to this data, and data shared with them should be considered property of the Chinese Communist Party,' notes Harmonic in a recent report.


Arabian Post
02-06-2025
- Business
- Arabian Post
Generative AI Tools Expose Corporate Secrets Through User Prompts
A significant portion of employee interactions with generative AI tools is inadvertently leaking sensitive corporate data, posing serious security and compliance risks for organisations worldwide. A comprehensive analysis by Harmonic Security, involving tens of thousands of prompts submitted to platforms such as ChatGPT, Copilot, Claude, Gemini, and Perplexity, revealed that 8.5% of these interactions contained sensitive information. Notably, 45.77% of the compromised data pertained to customer information, including billing details and authentication credentials. Employee-related data, such as payroll records and personal identifiers, constituted 26.68%, while legal and financial documents accounted for 14.95%. Security-related information, including access keys and internal protocols, made up 6.88%, and proprietary source code comprised 5.64% of the sensitive data identified.

The prevalence of free-tier usage among employees exacerbates the risk. In 2024, 63.8% of ChatGPT users operated on the free tier, with 53.5% of sensitive prompts entered through these accounts. Similar patterns were observed across other platforms, with 58.62% of Gemini users, 75% of Claude users, and 50.48% of Perplexity users utilizing free versions. These free tiers often lack robust security features, increasing the likelihood of data exposure.

Anna Collard, Senior Vice President of Content Strategy & Evangelist at KnowBe4 Africa, highlighted the unintentional nature of these data leaks. She noted that users often underestimate the sensitivity of the information they input into AI platforms, leading to inadvertent disclosures. Collard emphasized that the casual and conversational nature of generative AI tools can lower users' guards, resulting in the sharing of confidential information that, when aggregated, can be exploited by malicious actors for targeted attacks.

The issue is compounded by the lack of comprehensive governance policies within organizations. A study by Dimensional Research and SailPoint found that while 96% of IT professionals acknowledge the security threats posed by autonomous AI agents, only 54% have full visibility into AI agent activities, and a mere 44% have established governance policies. Furthermore, 23% of IT professionals reported instances where AI agents were manipulated into revealing access credentials, and 80% observed unintended actions by these agents, such as accessing unauthorized systems or sharing inappropriate data.

The rapid adoption of generative AI tools, driven by their potential to enhance productivity and innovation, has outpaced the development of adequate security measures. Organizations are now grappling with the challenge of balancing the benefits of AI integration with the imperative to protect sensitive data. Experts advocate for the implementation of stringent oversight mechanisms, including robust access controls and comprehensive user education programs, to mitigate the risks associated with generative AI usage.