
Latest news with #dataSecurity

Govt may introduce law to make MyDigital ID mandatory, says minister

Free Malaysia Today

21-07-2025



Federal territories minister Dr Zaliha Mustafa said the MyDigital ID system uses biometric and cryptographic technologies to ensure security and prevent data breaches.

KUALA LUMPUR: The government is considering introducing a law to regulate and boost the MyDigital ID system to tackle the scepticism surrounding the initiative, the Dewan Rakyat was told today.

Federal territories minister Dr Zaliha Mustafa said the current voluntary registration model for the digital ID system was a limitation that the government was looking to address. 'Right now the government is looking at the possibility of formulating an Act for MyDigital ID, to potentially make it mandatory for people to sign up,' she said.

Zaliha was responding to a supplementary question from Ronald Kiandee (PN-Beluran), who had raised concerns about public confidence in the initiative. Kiandee cited the recent disruption of the autogate system at the Johor customs, immigration and quarantine (CIQ) complex, which he said was linked to integration issues, as an example of what could erode public trust.

'Three days ago, we were informed about a disruption to the autogate system at the Johor CIQ, which was said to have been caused by the integration system.

'Actually, this is a concern for Malaysians and the public regarding the initiative. There are concerns about privacy and security, the reliability of digital infrastructure and the implementing agencies, and the potential for misuse,' he said.

Kiandee also noted that only 2.8 million Malaysians had registered for MyDigital ID as of the second quarter of 2025 – less than the number of those who had signed up for other government initiatives like PADU and Budi.

MyDigital ID is a national digital identification initiative developed in 2016, aimed at providing a secure and authenticated method for verifying identities online. The system is intended for use across both public and private sectors to verify user identities during online transactions.

Zaliha said the government did not store users' personal data, and that the MyDigital ID system used biometric and cryptographic technologies to ensure security and prevent data breaches. She also said the government was working with stakeholders to boost adoption of the platform. 'We are encouraging cooperation with all parties, including the private sector,' she said.

Earlier, Zaliha said that the number of government and non-government systems integrated with MyDigital ID had nearly doubled to 82 since March. She said the platform would continue expanding its use across both public and private sectors, including the financial industry, where six banks have completed sandbox testing under Bank Negara Malaysia.

Czech government bans DeepSeek usage in public administration

Reuters

09-07-2025



PRAGUE, July 9 (Reuters) - The Czech government has banned the country's public administration from using any of the services of Chinese AI startup DeepSeek due to data security concerns, Prime Minister Petr Fiala said on Wednesday.

The move follows various restrictions on DeepSeek in other countries including Germany, Italy and the Netherlands, driven by concerns about data protection.

"The government decided on a ban on usage of AI products, applications, solutions, web pages and web services provided by DeepSeek within the Czech public administration," Fiala told a news conference shown live.

Fiala said that, as a Chinese company, DeepSeek was obliged to cooperate with Chinese government bodies, which could give Beijing access to data stored on DeepSeek's servers in China. DeepSeek and the Chinese embassy in Prague did not immediately respond to requests for comment.

DeepSeek shook the technology world in January with claims that it had developed an AI model to rival those from U.S. firms such as ChatGPT creator OpenAI at much lower cost. However, it has come under scrutiny in the United States and Europe for its data security policies. According to its own privacy policy, DeepSeek stores numerous pieces of personal data, such as requests to its AI programme or uploaded files, on computers in China.

Customers Are Already Demanding AI Security. Are You Listening?

Forbes

08-07-2025



There was a time when companies decided how data was collected and used, with little input from the people it came from. That time has passed. Customers are asking sharper questions, expecting accountability, and making choices based on how much they trust the organizations they engage with.

Cisco's 2024 Consumer Privacy Survey reflects this shift, with 75% of consumers saying they won't buy from companies they don't trust with their data. More than half have already changed providers because of privacy concerns. In addition, 78% expect AI to be used responsibly. These numbers reflect a change in how people evaluate businesses, and that change has a significant impact on a company's bottom line.

Recent findings from Prosper Insights & Analytics reinforce that sentiment. When asked about concerns related to AI, 39% of adults said the technology needs more human oversight. Another 32% pointed to a lack of transparency, and more than a quarter were concerned about AI making incorrect decisions. Respondents also cited fears around job displacement and algorithmic bias, highlighting that the demand for responsible AI is rooted in both practical fears and ethical expectations. People want systems they can understand, challenge and trust.

[Chart: Prosper - Concerns About Recent Developments in AI]

For organizations investing in AI, this change affects how technology decisions are made and how success is measured. AI systems increasingly play a role in customer-facing experiences, whether they're used to deliver product recommendations, support decisions or streamline transactions. These systems operate on personal data and are often judged by the quality of those interactions. That means trust, reliability and transparency have become just as important as accuracy or speed.

The security environment is evolving in parallel. New risks are emerging as AI systems become more advanced. Vulnerabilities like model inversion, adversarial prompts and data poisoning create entry points for attackers that didn't exist with traditional software. Appknox recently conducted security reviews of the AI-driven apps Perplexity and DeepSeek and found issues ranging from weak network configurations to lax authentication and insufficient privacy protections. These findings underscore how new technology introduces new exposure, and how security needs to evolve alongside capability.

Internally, IT teams are feeling this pressure as they weigh the risks of adoption against the demands of innovation. A ShareGate survey of 650 professionals across North America and Europe showed that 57% of those exploring or deploying Microsoft Copilot identified security and access management as top concerns. Another 57% flagged data retention and quality as areas that needed improvement. These responses suggest that building the right foundation for trust is more important than building models or writing policies.

That foundation can be difficult to establish when usage and understanding vary widely across the organization. According to a recent Prosper Insights & Analytics survey, 44% of executives already use generative AI tools like ChatGPT and Copilot, while only 27% of employees report the same. An additional 32% of employees said they've heard of these tools but don't understand them. This gap in experience and understanding introduces operational risk, especially when AI tools are adopted faster than organizations can educate and align their teams.

[Chart: Prosper - Heard of Generative AI]

Customers are paying attention to how companies approach this. Cisco's research shows that awareness of privacy laws has grown significantly in recent years. More than half of consumers say they are now familiar with their data rights. People are reviewing how their information is used, adjusting settings and opting out when they feel companies don't offer enough control or clarity.

This level of engagement shows that trust must be earned, not assumed. Prosper Insights & Analytics data further reinforces this, with 59% of respondents reporting that they are either extremely or very concerned about their privacy being violated by AI systems. These findings reflect a deep emotional undercurrent that companies must take seriously if they want customers to stay engaged and confident in their use of AI-enabled services.

[Chart: Prosper - How Concerned Are You About Privacy Being Violated by AI Using Your Data]

In healthcare, the importance of trust becomes even more pronounced. A recent Iris Telehealth survey found that 70% of respondents had concerns about how their mental health data would be protected when using AI-powered tools. When asked what would influence their trust, people pointed to clear explanations, strong encryption, collaboration with licensed professionals and systems that make it easy to shift from AI assistance to human care. Technology needs to be effective, but also understandable and respectful of user autonomy.

That expectation extends beyond healthcare. In any industry where AI interacts with customers, explainability matters. Business leaders are seeing that even well-functioning systems can lose credibility if their logic and purpose aren't communicated clearly. The case of Amazon's AI recruiting tool, which was found to disadvantage female applicants due to biased training data, remains a cautionary example. The company ultimately pulled the system, but the incident left a lasting impression of what happens when organizations overlook the importance of oversight and transparency.

Responsible AI should reflect how companies see their role in the broader ecosystem of data, ethics, and service. Customers are forming opinions based on whether companies appear to handle information responsibly, communicate honestly and design technology in ways that respect the people who use it. Even simple measures like minimizing how long personal data is stored can signal that a business takes privacy seriously.

Those efforts will soon be measured against evolving regulatory frameworks. The EU's AI Act introduces new requirements around transparency and risk management, especially for high-impact systems. In the US, emerging privacy laws are raising expectations across sectors. These legal changes reflect a growing belief that companies need to be more deliberate about how AI systems are developed and deployed.

'AI is evolving fast, but trust moves slower. Businesses need to meet regulatory expectations today while building systems flexible enough to meet tomorrow's. That means aligning with GDPR and the AI Act now, but also investing in explainability, continuous monitoring and ethical review processes. That's how you stay compliant and competitive,' said Bill Hastings, CISO, Language I/O.

Many businesses are acting now rather than waiting for regulation. Some are embedding privacy-by-design principles into their development cycles. Others are producing clear AI usage policies and making transparency reports available to customers. Internal education is becoming more common too, with teams working to ensure employees understand how AI tools work and how to use them responsibly.

'Securing AI starts with visibility,' added Hastings. 'You can't protect what you don't fully understand, so begin by mapping where AI is being used, what data it touches and how decisions are made. From there, build in access controls, auditing and explainability features from day one. Trust grows when systems are designed to be clear, not just clever.'

Doing this well often requires cross-functional coordination. Security, legal, product and compliance teams must work together from the start, not just at review points. Vendor evaluation processes need to include questions about AI ethics and security posture. Technical audits should examine how models behave under real-world conditions, including how they handle edge cases or unexpected inputs.

This level of care goes beyond basic risk avoidance to shaping how customers perceive the entire relationship. Businesses that take the time to explain what their AI systems do, how decisions are made and how information is protected are showing customers they deserve their trust. These are the companies that build deeper loyalty and differentiate themselves in markets where products and services can otherwise feel interchangeable.

Trust builds slowly through a pattern of responsible choices, clear communication, and consistent follow-through. AI is a powerful tool, but it works best in the hands of teams that treat security and ethics as shared values, not as checklists. As the landscape continues to evolve, the companies that earn lasting trust will be the ones that take the time to build systems and relationships that are meant to last.
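Hastings' advice to start with visibility (map where AI is being used, what data it touches, and how long that data is kept) can be made concrete with even a very simple inventory. The sketch below is an illustrative assumption, not any vendor's tooling: the record fields, category names, and policy thresholds are all invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical inventory entry for one AI system: which personal-data
# categories it touches, how long records are kept, and when its use
# was last reviewed. All field names are illustrative.
@dataclass
class AISystemRecord:
    name: str
    data_categories: list
    retention_days: int
    last_reviewed: datetime

def flag_for_review(inventory, max_retention_days=90, review_window_days=180):
    """Flag systems that keep data longer than the policy ceiling, or that
    touch sensitive categories without a recent review. Thresholds are
    example policy values, not regulatory requirements."""
    now = datetime.now(timezone.utc)
    flagged = []
    for rec in inventory:
        over_retention = rec.retention_days > max_retention_days
        sensitive = bool({"health", "biometric"} & set(rec.data_categories))
        stale = (now - rec.last_reviewed) > timedelta(days=review_window_days)
        if over_retention or (sensitive and stale):
            flagged.append(rec.name)
    return flagged
```

A real program would populate this from an actual asset inventory, but the point stands: "you can't protect what you don't fully understand" can start with a small, auditable data structure rather than a large platform purchase.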

The Walls Within: Why Organizations Cling to Data Silos in the Age of AI: By Erica Andersen

Finextra

07-07-2025



The promise of Artificial Intelligence (AI) is tantalizing: smarter decisions, streamlined processes, and unprecedented insights. From predicting consumer behavior to automating complex tasks, AI offers a glimpse into a future of unprecedented efficiency and innovation. Yet, despite this allure, organizations are often hesitant to embrace the full power of AI across the entire enterprise. Instead, we see a persistent trend: the deliberate creation and maintenance of data silos, where information remains walled off and AI's access is carefully restricted.

This isn't necessarily a sign of technological backwardness or a lack of vision. Rather, it's a complex tapestry woven with threads of business strategy, legal compliance, technical limitations, and ingrained organizational culture. This article delves into the multifaceted reasons behind this phenomenon, exploring why organizations are choosing to keep their AI contained within the familiar confines of their data silos.

The Security Fortress: Protecting Data in a Vulnerable World

At the heart of this reluctance lies a deep-seated concern for data security and privacy. Organizations are acutely aware of the potential for catastrophic data breaches, and the implications are severe.

  • Protecting Sensitive Information: The risk of exposing sensitive information like Personally Identifiable Information (PII), financial records, trade secrets, and intellectual property is a constant threat. Restricting access is a fundamental strategy to minimize the "attack surface" and reduce the likelihood of a breach. This means protecting not only against malicious actors but also against accidental disclosures, which can have significant legal and reputational consequences.

  • Compliance is King: Regulations like GDPR (General Data Protection Regulation), CCPA (California Consumer Privacy Act), HIPAA (Health Insurance Portability and Accountability Act), LGPD (Lei Geral de Proteção de Dados, Brazil), and industry-specific mandates demand robust data privacy and security measures. Maintaining data silos is often seen as a practical way to simplify compliance by limiting the scope of data that needs to be protected.

  • Unauthorized Access: Data silos create physical and logical barriers, making it significantly harder for unauthorized individuals or external actors to access and potentially misuse sensitive data. This is reinforced with robust access controls, multi-factor authentication, and regular security audits.

  • Ethical Usage: Organizations want to ensure their data is used ethically and in accordance with their policies, and restricting access to AI models is a key mechanism for enforcing this control. Silos support bias detection and mitigation (AI models can perpetuate biases present in the training data, so careful curation matters), explainability and transparency (limiting the complexity of the data and the scope of the models makes explainable AI, or XAI, more tractable), and accountability (clear lines of responsibility for data usage and model performance).

The Competitive Edge: Data as a Strategic Weapon

Beyond security, the desire to protect competitive advantage and intellectual property is another driving force behind data silo maintenance.

  • Proprietary Data: Data can be a valuable asset, and organizations may want to keep their unique data private to maintain a competitive edge. AI models trained on distinctive datasets can be a significant differentiator, which requires careful consideration of data licensing, access controls, and the potential for reverse engineering of AI models.

  • Trade Secrets: The data used to train AI models can reveal valuable insights and trade secrets, offering competitors a roadmap to replicate innovations. Restricting access helps prevent reverse-engineering and exploitation, supported by strict non-disclosure agreements (NDAs) and protection of the intellectual property rights associated with the models and the underlying data.

  • Data Leakage: Data silos act as barriers against data leakage, preventing valuable proprietary information from falling into the hands of competitors or external parties. This includes implementing robust data loss prevention (DLP) measures and monitoring for suspicious data activity.

The Governance Imperative: Maintaining Control and Quality

Organizations also prioritize control and governance over their data, recognizing the crucial role these play in the success of AI initiatives.

  • Data Quality: Organizations want to maintain control over the quality of the data used for AI training. Silos allow for better data governance and quality control within each department or function, through data validation rules, data cleansing processes, and data governance frameworks.

  • Accuracy and Reliability: Data accuracy and reliability are critical for AI model performance. Silos can help ensure that training data is accurate and reliable, reducing the risk of biased or inaccurate results, supported by data quality metrics, data lineage tracking, and data auditing processes.

  • Responsible AI: Restricting access to data allows organizations to better manage the development, deployment, and monitoring of AI models, helping ensure they are used responsibly and ethically. This includes model monitoring (continuously tracking performance and catching issues such as drift or bias), model versioning (tracking different versions of models and the data used to train them), and model auditing (regularly verifying compliance with regulations and ethical guidelines).

The Technical Hurdles: Navigating the Complexities

Beyond the strategic and legal aspects, technical and practical considerations also contribute to the prevalence of data silos.

  • Integration Challenges: Integrating data from multiple sources can be incredibly complex and time-consuming. Organizations may lack the necessary infrastructure, skills, or resources to effectively integrate data across silos, with challenges around data format compatibility, data semantics, and data governance.

  • Data Standardization: Data from different sources may be in different formats or use different standards, making integration a formidable undertaking that calls for standardization processes, transformation tools, and governance frameworks.

  • Scalability and Performance: Integrating and processing large volumes of data can strain infrastructure and impact performance. Silos can help manage data volume, while addressing the problem properly requires scalable storage solutions, data processing frameworks, and optimization techniques.

  • Legacy Systems: Many organizations have legacy systems and infrastructure that are not designed for easy data sharing, adding another layer of complexity. Moving past this means modernizing legacy systems, implementing data integration solutions, and gradually migrating data to more modern platforms.

The Human Factor: Navigating Organizational Dynamics

Finally, organizational culture and politics play a significant role in the decision to maintain data silos.

  • Departmental Autonomy: Departments or business units may want to maintain their autonomy and control over their data, viewing it as a valuable resource. Countering this requires fostering a culture of collaboration, promoting data sharing best practices, and establishing clear data governance frameworks.

  • Fear of Misuse: Some individuals or teams may be hesitant to share their data due to concerns about how it will be used or the potential for negative consequences. Clear data usage policies, data access controls, and training on responsible AI practices help address this.

  • Lack of Trust: There may be a lack of trust between different departments or teams, making them unwilling to share data. Building that trust takes open communication, transparency, and collaborative projects.

  • AI Anxiety: A department might fear that sharing data will lead to a loss of control or power, or that AI will replace human workers. Addressing these concerns requires clear communication, training on AI technologies, and demonstrating the benefits of AI for both individuals and the organization as a whole. Highlighting how AI can augment human capabilities and improve job satisfaction is crucial.

In Summary: A Delicate Balance

The desire to maintain data silos in the context of AI adoption is a complex issue driven by a combination of factors, including data security, competitive advantage, regulatory compliance, technical challenges, and organizational culture. While data silos can offer benefits in terms of control and security, they can also hinder innovation and limit the potential of AI. Organizations must carefully weigh these competing considerations when developing their AI strategies, striving to find a balance that maximizes the benefits of AI while mitigating the risks. The future of AI adoption lies in finding innovative ways to navigate these complexities, fostering collaboration while safeguarding the valuable assets that organizations hold within their walls.
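Of the governance practices the article lists, data validation rules are among the most directly implementable. As a minimal sketch of per-silo validation before data is released for AI training (the field names and rules below are invented for illustration, not drawn from any specific organization):

```python
# Minimal sketch of the "data validation rules" idea: each silo checks
# rows against simple predicates before the data leaves the silo for
# AI training. Field names and rules are illustrative assumptions.
def validate_rows(rows, rules):
    """Split rows into (valid, rejected) using rules: field -> predicate.
    A row is valid only if every required field is present and passes."""
    valid, rejected = [], []
    for row in rows:
        ok = all(name in row and check(row[name]) for name, check in rules.items())
        (valid if ok else rejected).append(row)
    return valid, rejected

# Example rule set for a hypothetical customer table.
RULES = {
    "age": lambda v: isinstance(v, int) and 0 <= v <= 120,
    "email": lambda v: isinstance(v, str) and "@" in v,
}
```

In practice such rules would sit alongside cleansing, lineage tracking, and auditing, but even this shape makes the governance point concrete: the silo owner, who knows the data best, decides what counts as fit for training.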

Fasoo Strengthens Zero Trust Data Control and Posture Management

Associated Press

07-07-2025



SEOUL, SOUTH KOREA, July 7, 2025 -- Fasoo, the leader in data-centric security, is empowering organizations with its advanced data security platform and data security posture management (DSPM) solutions. Recognizing today's distributed and data-rich environments, Fasoo's Zero Trust data control solutions are uniquely positioned to implement the core 'never trust, always verify' principle.

'Traditional perimeter-based security can no longer keep up with the pace of cloud adoption, AI integration, and evolving cyber threats,' said Jason Sohn, Executive Managing Director at Fasoo. 'Fasoo's latest advanced data security platform solutions offer organizations a holistic approach, centered on zero trust and data-centric principles, to gain visibility, enforce consistent policies, and maintain control across the entire data lifecycle.'

Why Zero Trust and DSPM Are Essential Now

As organizations increasingly operate in cloud-centric, remote, and hybrid environments, robust data security measures are crucial. Adopting a Zero Trust data control approach ensures continuous, data-centric protection by verifying every access attempt, minimizing insider and external threats, and enabling secure operations across diverse environments.

DSPM is essential for safeguarding data in today's complex IT environments. As data moves across on-premises, multi-cloud, and hybrid systems, it is easy for sensitive information to become duplicated, exposed, or left unmanaged. DSPM helps organizations maintain visibility, assess risks, and enforce policies to protect their data and ensure compliance. Fasoo DSPM enhances this process with automated policy enforcement, real-time posture analysis, and seamless integration, providing businesses the clarity and control they need to stay secure and efficient.

Fasoo: Core to Your Zero Trust Strategy

Fasoo's solutions directly address Zero Trust principles:

  • Data-Centric Security by Design: Fasoo data security solutions protect data at the file level, enabling persistent encryption and access control wherever data resides, whether on devices, in the cloud, or in external environments.
  • Continuous Verification: Access is dynamically controlled based on user, device, location, and usage context, ensuring that every access attempt is verified, not assumed.
  • Least Privilege Enforcement: Granular policy controls limit user permissions to only what is necessary, with dynamic rights management for viewing, editing, printing, and sharing.
  • Visibility and Auditing: The solutions provide full traceability of data access, sharing, and modifications, enabling fast threat detection, incident response, and audit readiness.

Fasoo's Zero Trust data security platform includes:

  • Fasoo Enterprise DRM (FED): Applies persistent file encryption and dynamic access control throughout the data lifecycle.
  • Fasoo Data Radar (FDR): Discovers and classifies sensitive data automatically across databases, servers, and endpoint devices.
  • Fasoo DSPM: Delivers comprehensive visibility and assesses security vulnerabilities across internal repositories and cloud environments.
  • Fasoo Smart Print (FSP) & Fasoo Smart Screen (FSS): Apply dynamic watermarks and prevent, deter, and log unauthorized printing and screen capture.

By focusing on securing the data itself, Fasoo provides a vital and unique layer within a Zero Trust framework, delivering foundational protection even if networks or endpoints are compromised. This fundamental strength ensures that Fasoo offers an essential defense, significantly reducing the risk of data breaches at the file level.
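The "continuous verification" and "least privilege" principles described above can be illustrated with a deliberately tiny sketch of the general Zero Trust pattern: access is denied by default and granted only for explicitly allowed combinations of user, device, and action. This is a generic illustration under those assumptions, not Fasoo's actual API or policy model.

```python
# Deny-by-default access check: a request succeeds only if the exact
# (user, device, action) combination has been explicitly granted.
# The entries below are invented examples for illustration.
ALLOWED = {
    ("alice", "managed-laptop", "view"),
    ("alice", "managed-laptop", "edit"),
}

def authorize(user: str, device: str, action: str) -> bool:
    """Every access attempt is verified; nothing is trusted by default."""
    return (user, device, action) in ALLOWED
```

Real products evaluate far richer context (location, time, document sensitivity) and re-check on every use, but the default-deny shape is the core of 'never trust, always verify'.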
For more information on Fasoo's Zero Trust data security solutions, please visit the Fasoo website.

About Fasoo: Fasoo provides unstructured data security, privacy, and enterprise content platforms that securely protect, control, trace, analyze, and share critical business information while enhancing productivity. Fasoo's continuous focus on customer innovation and creativity provides market-leading solutions to the challenges faced by organizations of all sizes and industries.

Contact: Jungyeon Lim, Fasoo, [email protected]

Legal Disclaimer: EIN Presswire provides this news content 'as is' without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.
