Latest news with #HarmonicSecurity

Generative AI Tools Expose Corporate Secrets Through User Prompts

Arabian Post

4 days ago

A significant portion of employee interactions with generative AI tools is inadvertently leaking sensitive corporate data, posing serious security and compliance risks for organizations worldwide. A comprehensive analysis by Harmonic Security, covering tens of thousands of prompts submitted to platforms such as ChatGPT, Copilot, Claude, Gemini, and Perplexity, revealed that 8.5% of these interactions contained sensitive information.

Notably, 45.77% of the compromised data pertained to customer information, including billing details and authentication credentials. Employee-related data, such as payroll records and personal identifiers, constituted 26.68%, while legal and financial documents accounted for 14.95%. Security-related information, including access keys and internal protocols, made up 6.88%, and proprietary source code comprised 5.64% of the sensitive data identified.

The prevalence of free-tier usage among employees exacerbates the risk. In 2024, 63.8% of ChatGPT users operated on the free tier, and 53.5% of sensitive prompts were entered through these accounts. Similar patterns were observed across other platforms: 58.62% of Gemini users, 75% of Claude users, and 50.48% of Perplexity users relied on free versions. These free tiers often lack robust security features, increasing the likelihood of data exposure.

Anna Collard, Senior Vice President of Content Strategy & Evangelist at KnowBe4 Africa, highlighted the unintentional nature of these data leaks. She noted that users often underestimate the sensitivity of the information they input into AI platforms, leading to inadvertent disclosures. Collard emphasized that the casual, conversational nature of generative AI tools can lower users' guard, resulting in the sharing of confidential information that, when aggregated, can be exploited by malicious actors for targeted attacks.

The issue is compounded by the lack of comprehensive governance policies within organizations. A study by Dimensional Research and SailPoint found that while 96% of IT professionals acknowledge the security threats posed by autonomous AI agents, only 54% have full visibility into AI agent activities, and a mere 44% have established governance policies. Furthermore, 23% of IT professionals reported instances where AI agents were manipulated into revealing access credentials, and 80% observed unintended actions by these agents, such as accessing unauthorized systems or sharing inappropriate data.

The rapid adoption of generative AI tools, driven by their potential to enhance productivity and innovation, has outpaced the development of adequate security measures. Organizations are now grappling with the challenge of balancing the benefits of AI integration with the imperative to protect sensitive data. Experts advocate for stringent oversight mechanisms, including robust access controls and comprehensive user education programs, to mitigate the risks associated with generative AI usage.
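The oversight mechanisms experts point to usually begin with screening prompts for sensitive patterns before they reach an external AI service. As a minimal illustrative sketch (the patterns, categories, and function names below are assumptions for demonstration, not Harmonic Security's detection method), a regex-based pre-submission check might look like this:

```python
import re

# Illustrative patterns only; real data-loss-prevention tools use far more
# robust detection (entity recognition, checksums, context scoring) than regexes.
SENSITIVE_PATTERNS = {
    "credential": re.compile(r"(?i)\b(api[_-]?key|password|secret)\b\s*[:=]\s*\S+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_prompt(prompt: str) -> None:
    """Block prompts that match a sensitive-data category; otherwise forward them."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked: prompt contains {', '.join(findings)}")
    else:
        print("Forwarding prompt to the GenAI provider...")  # stand-in for the API call

if __name__ == "__main__":
    submit_prompt("Summarize this: api_key = sk-12345, customer john@example.com")
```

A production deployment would pair such screening with enterprise-tier AI accounts, audit logging, and user training; the bare regexes above serve only to make the gating step concrete.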

Arthur Open-Sources First Real-Time AI Evaluation Engine

Associated Press

31-03-2025

Build. Experiment. Scale. Now With Open-Source AI Evaluation.

NEW YORK, March 31, 2025 /PRNewswire/ -- AI is evolving fast, but making it work at scale remains a challenge. Today, Arthur is launching the Arthur Engine, the first open-source, real-time AI evaluation engine designed to help teams monitor, debug, and improve generative AI and traditional ML models. No black-box monitoring. No third-party dependencies. No data privacy risks. All for free.

Why Real-Time AI Evaluation Matters in 2025

As AI adoption grows, so do its risks. Without real-time evaluation, organizations face:

  • Data leaks – 8.5% of employee prompts contain sensitive data (Harmonic Security).
  • Model degradation – AI models drift over time without ongoing monitoring.
  • Debugging nightmares – Slow iteration cycles lead to poor model performance.

The Arthur Engine solves these challenges by providing instant visibility, real-time guardrails, and on-the-fly model optimization, right inside your own environment.

"AI is moving fast, and we need to ensure it moves in the right direction. Open-sourcing the Arthur Engine puts powerful AI evaluation tools into the hands of developers, researchers, and builders worldwide." — Ashley Nader, Lead AI PM at Arthur

What Makes Arthur Engine Different?

Unlike traditional AI monitoring tools, Arthur Engine runs locally, preserving data sovereignty and eliminating compliance risks.

  • Real-Time AI Evaluation – Instantly detect failures before they impact production.
  • Active Guardrails – Intervene in real time to prevent hallucinations and bad outputs.
  • Customizable Metrics – Tailor evaluations to your specific AI use case.
  • Privacy-Preserving & Secure – Keep all data inside your infrastructure.
  • Works Across All Models – Supports GPT, Claude, Gemini, open-weights models, and traditional ML.

"By open-sourcing Arthur Engine, we're making AI trust and safety accessible to all developers, allowing them to safeguard AI systems with fully customizable, high-performance monitoring tools." — Cherie Xu, Technical Lead, Machine Learning at Arthur

AI Evaluation, Built for the Future

The Arthur Engine is part of Arthur's broader AI performance monitoring suite, designed to help organizations:

  • Validate AI outputs in real time.
  • Detect performance shifts before they become problems.
  • Ensure regulatory compliance and explainability.

This open-source release marks a new standard in AI transparency, security, and performance monitoring. AI is reshaping the world; let's make sure it performs the way it should.

About Arthur

Arthur is the leading AI performance company, empowering organizations to monitor, measure, and improve machine learning and generative AI models at scale. Designed for trust, accuracy, and efficiency, Arthur helps organizations optimize AI performance with real-time insights, proactive model monitoring, and cutting-edge guardrails. Backed by a research-led approach, Arthur delivers exclusive capabilities that enable teams to build, deploy, and scale AI with confidence. Founded in 2019, Arthur has raised over $60M in venture funding from Index Ventures, Acrew Capital, Greycroft, Work-Bench, and other top investors.
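As a rough sketch of the real-time guardrail pattern the release describes (the names and checks below are illustrative assumptions, not Arthur Engine's actual API), an evaluation layer typically wraps each model call and validates the output before it reaches the user:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical guardrail layer; Arthur Engine's real interface may differ.
@dataclass
class EvalResult:
    passed: bool
    reason: str = ""

def no_empty_output(output: str) -> EvalResult:
    """Fail if the model returned nothing usable."""
    return EvalResult(bool(output.strip()), "empty output")

def max_length(limit: int) -> Callable[[str], EvalResult]:
    """Build a check that fails when the output exceeds a length budget."""
    def check(output: str) -> EvalResult:
        return EvalResult(len(output) <= limit, f"output exceeds {limit} chars")
    return check

def guarded_generate(model: Callable[[str], str], prompt: str,
                     checks: list[Callable[[str], EvalResult]]) -> str:
    """Run the model, then apply each guardrail check before returning the output."""
    output = model(prompt)
    for check in checks:
        result = check(output)
        if not result.passed:
            # A real engine might retry, rewrite, or log here instead of raising.
            raise ValueError(f"Guardrail failed: {result.reason}")
    return output

if __name__ == "__main__":
    fake_model = lambda p: "The capital of France is Paris."  # stand-in for an LLM call
    print(guarded_generate(fake_model, "What is the capital of France?",
                           [no_empty_output, max_length(200)]))
```

The design point is that evaluation runs inline rather than as after-the-fact log analysis, which is what allows a failing output to be blocked, retried, or rewritten before it causes harm.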

Balancing AI and Security: A Jordan Imperative

Ammon

28-01-2025

Ammon News - In its push to digitize government services, Jordan is moving expeditiously to be at the forefront of digital transformation in the Middle East, with initiatives like Jordan Vision 2025 and the National ICT Strategy aimed at leveraging technology to drive economic growth and improve public services. However, the risks associated with data leakage in AI tools could undermine these efforts. If sensitive government data is exposed, it could stall progress in key sectors such as e-governance, taxation, healthcare, and education, where AI is increasingly being deployed.

A new report by Harmonic Security reveals alarming trends in data leakage through generative AI (GenAI) tools. The report, titled "From Payrolls to Patents: The Spectrum of Data Leaked into GenAI," highlights how sensitive information, ranging from customer data to proprietary code, is being inadvertently shared with AI platforms like ChatGPT, Copilot, and Gemini. For Jordanian government organizations leveraging AI tools, these findings underscore significant legal, security, and competitive risks. For example, in e-governance, AI tools are used to streamline citizen services, such as processing applications for permits, licenses, and social benefits. If sensitive citizen data is leaked, it could lead to identity theft, fraud, and other forms of cybercrime, eroding public confidence in digital services. Similarly, in healthcare, where AI is used for patient data analysis and diagnostics, a data breach could compromise patient privacy and violate medical confidentiality laws.

According to the report, 8.5% of prompts entered into GenAI tools contain sensitive data. This includes customer data (45.77%), such as billing information and authentication credentials; employee data (26.83%), like payroll and personally identifiable information (PII); and even legal and financial data (14.88%), including mergers and acquisitions details. Alarmingly, 63.8% of ChatGPT users relied on the free tier, which often lacks robust security features and may use input data to train AI models.

For Jordanian government entities, which increasingly rely on AI for tasks like document summarization, translation, and data analysis, these findings are particularly concerning. The inadvertent exposure of sensitive data highlights the need for AI usage guidelines, or regulation governing the implementation of AI tools, because such exposure could lead to breaches of confidentiality, regulatory violations, and loss of public trust. For instance, if sensitive citizen data or internal government communications are leaked, it could result in legal repercussions under Jordan's data protection laws and damage the government's credibility.

One of the key challenges in addressing data leakage is the lack of awareness among employees about the risks of sharing sensitive information with AI tools. The Harmonic report emphasizes the importance of user education as a critical component of AI governance. For Jordanian government organizations, this means implementing comprehensive training programs that cover:

  • What constitutes sensitive data.
  • How to use AI tools responsibly, including the importance of using enterprise-grade versions and avoiding free-tier tools that may lack adequate security controls.
  • How to identify and report attempts to manipulate employees into sharing sensitive data with unauthorized parties.

Beyond training, the rapid adoption of AI tools necessitates a more robust regulatory framework. The government could consider:

  • Mandating data protection impact assessments (DPIAs): requiring organizations to conduct DPIAs before deploying AI tools, to identify and mitigate potential risks.
  • Promoting transparency in AI usage: requiring organizations to disclose how they use AI tools and what measures are in place to protect data.
  • Collaborating with technology companies to develop secure AI solutions tailored to the needs of government organizations.

The adoption of AI tools offers immense potential for Jordanian government organizations to enhance efficiency, improve service delivery, and drive innovation. However, as the Harmonic report highlights, this potential comes with significant risks. Data leakage through AI tools could lead to legal liabilities, security breaches, and reputational damage, undermining Jordan's digital transformation efforts. To address these challenges, Jordan must strike a balance between embracing AI and safeguarding sensitive data. This requires a multi-faceted approach that includes stronger regulatory frameworks, robust security measures, and comprehensive employee training. By taking proactive steps to mitigate the risks of data leakage, Jordan can continue to lead the region in digital innovation while protecting the privacy and security of its citizens and organizations.
