
Latest news with #NISTAIRiskManagementFramework

Rising Threats Trigger U.S. Public Sector Security Upgrades

Business Wire

28-07-2025



STAMFORD, Conn.--(BUSINESS WIRE)--Public-sector organizations in the U.S. are adopting advanced cybersecurity solutions and services in response to a growing number of data breaches and evolving threats, according to a new research report published today by Information Services Group (ISG) (Nasdaq: III), a global AI-centered technology research and advisory firm.

The 2025 ISG Provider Lens® Cybersecurity — Services and Solutions report for the U.S. Public Sector finds that agencies at the federal, state and local levels face increasingly sophisticated attackers targeting critical infrastructure and citizen data. The constant battle to protect data and infrastructure further complicates the government's digital transformation efforts, including integration of AI-enabled systems.

'Public agencies need strong data protection strategies to continue delivering services and maintain public trust,' said Nathan Frey, ISG partner and lead, U.S. Public Sector. 'They are working with providers to acquire and deploy effective security technologies and services.'

Organizations are starting to employ AI to enhance security even as threat actors are weaponizing it, the report says. AI can automate the discovery of vulnerabilities and create more evasive malware and convincing deepfakes for social engineering. At the same time, agencies are using AI tools to enhance threat detection and conduct predictive analysis. They are also taking steps to protect AI models and data from attacks, guided by government standards such as the NIST AI Risk Management Framework.

Risks to the public sector also arise from supply chain issues and the convergence of IT and OT systems, ISG says. The complex supply chains involved in government procurement come with vulnerabilities that require constant vendor risk management and monitoring. IT/OT convergence in critical energy, water, transportation and defense infrastructure can be compromised to disrupt operations, putting the public at risk. An early notification system prevented a major ransomware attack against transportation infrastructure in the U.S. in 2023.

As agencies migrate to the cloud, they are deploying cloud security posture management and workload protection platforms to protect sensitive applications across distributed systems, the report says. Facing internal resource constraints, many are adopting managed detection and response services, which include continuous monitoring, threat hunting, expert-led incident response and other capabilities.

Service providers play crucial roles in the U.S. public sector's cybersecurity and resilience, ISG says. At a strategic level, they relate cyber risks to agency objectives, demonstrating return on investment. Providers also help agencies meet strict compliance requirements and augment internal teams in a sector that often struggles to attract and retain cybersecurity talent.

'Cybersecurity services are stepping up to meet increasing public-sector demands for resilience and governance,' said Gowtham Sampath, assistant director and principal analyst, ISG Provider Lens Research, and lead author of the report. 'Providers enable clients to align security measures with agency goals and build effective defenses with limited resources.'
The report also explores global cybersecurity technology trends relevant to the U.S. public sector, including increasing adoption of Identity and Access Management (IAM), extended detection and response (XDR) and security service edge (SSE). For more insights into the cybersecurity challenges facing U.S. public agencies, along with ISG's advice for addressing them, see the ISG Provider Lens® Focal Points briefing here.

The 2025 ISG Provider Lens® Cybersecurity — Services and Solutions report for the U.S. Public Sector evaluates the capabilities of 86 providers across six quadrants: Identity and Access Management (Global), Extended Detection and Response (Global), Security Service Edge (Global), Technical Security Services, Strategic Security Services and Next-Gen SOC/MDR Services.

The report names IBM as a Leader in five quadrants. It names Accenture, Capgemini, Deloitte, EY, HCLTech and Infosys as Leaders in three quadrants each. Broadcom, Fortinet, KPMG, Microsoft, Palo Alto Networks and Unisys are named as Leaders in two quadrants each. Cato Networks, Check Point Software, Cisco, CrowdStrike, CyberArk, Forcepoint, Leidos, ManageEngine, Netskope, Okta, One Identity (OneLogin), Ping Identity, SailPoint, Saviynt, SentinelOne, Trellix, Trend Micro, Versa Networks and Zscaler are named as Leaders in one quadrant each.

In addition, Leidos is named as a Rising Star — a company with a 'promising portfolio' and 'high future potential' by ISG's definition — in two quadrants. BeyondTrust, HPE (Aruba), Sophos and Wipro are named as Rising Stars in one quadrant each.

In the area of customer experience, PwC is named the global ISG CX Star Performer for 2025 among cybersecurity service and solution providers. PwC earned the highest customer satisfaction scores in ISG's Voice of the Customer survey, part of the ISG Star of Excellence™ program, the premier quality recognition for the technology and business services industry.

A customized version of the report is available from Unisys. The 2025 ISG Provider Lens® Cybersecurity — Services and Solutions report for the U.S. Public Sector is available to subscribers or for one-time purchase on this webpage.

About ISG Provider Lens® Research

The ISG Provider Lens® Quadrant research series is the only service provider evaluation of its kind to combine empirical, data-driven research and market analysis with the real-world experience and observations of ISG's global advisory team. Enterprises will find a wealth of detailed data and market analysis to help guide their selection of appropriate sourcing partners, while ISG advisors use the reports to validate their own market knowledge and make recommendations to ISG's enterprise clients. The research currently covers providers offering their services globally, across Europe, as well as in the U.S., Canada, Mexico, Brazil, the U.K., France, Benelux, Germany, Switzerland, the Nordics, Australia and Singapore/Malaysia, with additional markets to be added in the future. For more information about ISG Provider Lens research, please visit this webpage.

About ISG

ISG (Nasdaq: III) is a global AI-centered technology research and advisory firm. A trusted partner to more than 900 clients, including 75 of the world's top 100 enterprises, ISG is a long-time leader in technology and business services that is now at the forefront of leveraging AI to help organizations achieve operational excellence and faster growth.
The firm, founded in 2006, is known for its proprietary market data, in-depth knowledge of provider ecosystems, and the expertise of its 1,600 professionals worldwide working together to help clients maximize the value of their technology investments.

Exclusive: Who covers the damage when an AI agent goes rogue? This startup has an insurance policy for that

Yahoo

24-07-2025



Today, the Artificial Intelligence Underwriting Company (AIUC) is emerging from stealth with a $15 million seed round led by Nat Friedman at NFDG, with participation from Emergence, Terrain, and notable angels including Anthropic cofounder Ben Mann and former CISOs from Google Cloud and MongoDB. The company's goal? Build the insurance, audit, and certification infrastructure needed to bring AI agents safely into the enterprise world. That's right: insurance policies for AI agents.

AIUC cofounder and CEO Rune Kvist says that insurance for agents—that is, autonomous AI systems capable of making decisions and taking action without constant human oversight—is about to be big business. Previously the first product and go-to-market hire at Anthropic in 2022, Kvist leads a founding team that also includes CTO Brandon Wang, a Thiel Fellow who previously founded a consumer underwriting business, and Rajiv Dattani, a former McKinsey partner who led work in the global insurance sector and was COO of METR, a research non-profit that evaluated OpenAI's and Anthropic's models before deployment.

Creating financial incentives to reduce the risk of AI agent adoption

At the heart of AIUC's approach is a new risk and safety framework called AIUC-1, designed specifically for AI agents. It pulls together existing standards like the NIST AI Risk Management Framework, the EU AI Act, and MITRE's ATLAS threat model—then layers on auditable, agent-specific safeguards. The idea is simple: make it easy for enterprises to adopt AI agents with the same kind of trust signals they expect in cloud security or data privacy.

'The important thing about insurance is that it creates financial incentives to reduce the risk,' Kvist told Fortune. 'That means that we're going to be tracking, where does it go wrong, what are the problems you're solving. And insurers can often enforce that you do take certain steps in order to get certified.'

While there are other startups currently working on AI insurance products, Kvist said none are building the kind of risk-preventing, agent-specific standard that AIUC-1 represents. 'Insurance & standards go hand-in-hand to create confidence around AI adoption,' he said.

'AIUC-1 creates a standard for AI adoption,' said John Bautista, a partner at law firm Orrick who helped create the standard. 'As businesses enter a brave new world of AI, there's a ton of legal ambiguities that hold up adoption. With new laws and frameworks constantly emerging, companies need one clear standard that pulls it all together and makes adoption massively simple,' he said.

A need for independent vendors

The story of American progress, he added, is also a story of insurance. Benjamin Franklin founded the country's first mutual fire insurance company in response to devastating house fires. In the 20th century, specialized players like UL Labs emerged from the insurance industry to test the safety of electric appliances. Car insurers built crash-test standards that gave birth to the modern auto industry. AIUC is betting that history is about to repeat.

'It's not Toyota that does the car crash testing, it's independent bodies,' Kvist pointed out. 'I think there's a need for an independent ecosystem of companies that are answering [the question], can we trust these AI agents?'

To make that happen, AIUC will offer a trifecta: standards, audits, and liability coverage. The AIUC-1 framework creates a technical and operational baseline.
Independent audits test real-world performance—by trying to get agents to fail, hallucinate, leak data, or act dangerously. And insurance policies cover customers and vendors in the event an agent causes harm, with pricing that reflects how safe the system is. If an AI sales agent accidentally exposes customer personally identifiable information, for example, or if an AI assistant in finance fabricates a policy or misquotes tax information, this type of insurance policy could cover the fallout.

The financial incentive, Kvist explained, is the point. Just as consumers get a better car insurance rate for having airbags and anti-lock brakes, AI systems that pass the AIUC-1 audit could get better terms on insurance, in Kvist's view. That pushes AI vendors toward better practices, faster—and gives enterprises a concrete reason to adopt sooner, before their competitors do.

Using insurance to align incentives

AIUC's view is that the market, not just government, can drive responsible development. Top-down regulation is 'hard to get right,' said Kvist. But leaving it all to companies like OpenAI, Anthropic and Google doesn't work either—voluntary safety commitments are already being walked back. Insurance creates a third way that aligns incentives and evolves with the technology, he explained.

Kvist likens AIUC-1 to SOC 2, the security certification standard that gave startups a way to signal trust to enterprise buyers. He imagines a world in which AI agent liability insurance becomes as common—and necessary—as cyber insurance is today, predicting a $500 billion market by 2030, eclipsing even cyber insurance.

AIUC is already working with several enterprise customers and insurance partners (AIUC said it could not disclose the names yet), and is moving quickly to become the industry benchmark for AI agent safety.

Investors like Nat Friedman agree. As the former CEO of GitHub, Friedman saw the trust issues firsthand when launching GitHub Copilot. 'All his customers were wary of adopting it,' Kvist recalls. 'There were all these IP risks.' As a result, Friedman had been looking for an AI insurance startup for a couple of years. After a 90-minute pitch meeting, he said he wanted to invest—which he did, in a seed round in June, before Friedman moved to join Alexandr Wang at Mark Zuckerberg's new Meta Superintelligence Labs.

In a few years, said Kvist, insuring AI agents will be mainstream. 'These agents are making a much bigger promise, which is "we're going to do the work for you,"' he said. 'We think the liability becomes much bigger, and therefore the interest is much bigger.'
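The pricing logic described above lends itself to a small numerical illustration. The Python sketch below is purely hypothetical: the score categories, the $50,000 baseline premium, and the 40% maximum discount are assumptions made for illustration and are not drawn from AIUC's actual underwriting.

```python
# Illustrative only: a toy premium calculation showing how audit results could,
# in principle, feed into risk-based pricing. The base premium, discount cap,
# and category names are invented for this sketch and do not reflect AIUC's model.

BASE_ANNUAL_PREMIUM = 50_000.0  # hypothetical baseline for an AI agent policy

# Hypothetical audit scores on a 0.0 (fails) to 1.0 (fully passes) scale.
audit_scores = {
    "data_leakage_controls": 0.9,
    "hallucination_safeguards": 0.7,
    "incident_response_process": 0.8,
}

def adjusted_premium(base, scores, max_discount=0.40):
    """Discount the base premium in proportion to the average audit score.

    A perfect audit earns the full (hypothetical) 40% discount; a failed
    audit pays the base premium in full.
    """
    avg_score = sum(scores.values()) / len(scores)
    return base * (1 - max_discount * avg_score)

if __name__ == "__main__":
    premium = adjusted_premium(BASE_ANNUAL_PREMIUM, audit_scores)
    print(f"Hypothetical annual premium: ${premium:,.0f}")
```

Under these assumed scores, the average of 0.8 yields a 32% discount, or a $34,000 premium on the $50,000 baseline, mirroring the airbag-discount analogy above.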

New report reveals major security flaws in multimodal AI models

Techday NZ

10-05-2025



Enkrypt AI has released a report detailing new vulnerabilities in multimodal AI models that could pose risks to public safety. The Multimodal Safety Report by Enkrypt AI unveils significant security failures in the way generative AI systems handle combined text and image inputs. According to the findings, these vulnerabilities could allow harmful prompt injections hidden within benign images to bypass safety filters and trigger the generation of dangerous content.

The company's red teaming exercise evaluated several widely used multimodal AI models for their vulnerability to harmful outputs. Tests were conducted across various safety and harm categories as outlined in the NIST AI Risk Management Framework. The research highlighted how recent jailbreak techniques exploit the integration of text and images, leading to the circumvention of existing content filters.

"Multimodal AI promises incredible benefits, but it also expands the attack surface in unpredictable ways," said Sahil Agarwal, Chief Executive Officer of Enkrypt AI. "This research is a wake-up call: the ability to embed harmful textual instructions within seemingly innocuous images has real implications for enterprise liability, public safety, and child protection."

The report focused on two multimodal models developed by Mistral—Pixtral-Large (25.02) and Pixtral-12b. Enkrypt AI's analysis found that these models are 60 times more likely to generate child sexual exploitation material (CSEM)-related textual responses compared to prominent alternatives such as OpenAI's GPT-4o and Anthropic's Claude 3.7 Sonnet. The findings raise concerns about the lack of sufficient safeguards in certain AI models handling sensitive data.

In addition to CSEM risks, the study revealed that these models were 18 to 40 times more susceptible to generating chemical, biological, radiological, and nuclear (CBRN) information when tested with adversarial inputs. The vulnerability was linked not to malicious text prompts but to prompt injections concealed within image files, indicating that such attacks could evade standard detection and filtering systems.

These weaknesses threaten to undermine the intended purposes of generative AI and call attention to the necessity for improved safety alignment across the industry. The report emphasises that such risks are present in any multimodal model lacking comprehensive security measures.

Based on the findings, Enkrypt AI urges AI developers and enterprises to address these emerging risks promptly. The report outlines several recommended best practices, including integrating red teaming datasets into safety alignment processes, conducting continuous automated stress testing, deploying context-aware multimodal guardrails, establishing real-time monitoring and incident response systems, and creating model risk cards to transparently communicate potential vulnerabilities.

"These are not theoretical risks," added Sahil Agarwal. "If we don't take a safety-first approach to multimodal AI, we risk exposing users—and especially vulnerable populations—to significant harm."

Enkrypt AI's report also provides details about its testing methodology and suggested mitigation strategies for organisations seeking to reduce the risk of harmful prompt injection attacks within multimodal AI systems.
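To make the attack pattern concrete, the sketch below shows what a minimal multimodal prompt-injection probe of this kind might look like: an instruction rendered faintly inside an otherwise benign image, followed by a crude check that the model refuses. This is not Enkrypt AI's methodology; the `query_multimodal_model` call is a placeholder, since each vendor exposes a different client API.

```python
# A minimal, illustrative probe in the spirit of the attacks the report
# describes: render an adversarial instruction faintly inside an otherwise
# benign image, then check whether the model under test refuses it.
# This is NOT Enkrypt AI's methodology. `query_multimodal_model` is a
# placeholder, since every vendor exposes a different client API.

from PIL import Image, ImageDraw  # pip install pillow

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "not able to help")

def make_injection_image(hidden_instruction, path="probe.png"):
    """Create a plain white image with the instruction in near-white text."""
    img = Image.new("RGB", (512, 512), color=(255, 255, 255))
    draw = ImageDraw.Draw(img)
    # Low-contrast text: one plausible way an instruction could be hidden in
    # an image that looks innocuous to a human reviewer.
    draw.text((10, 10), hidden_instruction, fill=(235, 235, 235))
    img.save(path)
    return path

def looks_like_refusal(response_text):
    """Crude heuristic: did the model decline rather than comply?"""
    return any(marker in response_text.lower() for marker in REFUSAL_MARKERS)

# Sketch of how a single test case might be scored (model call is a placeholder):
# image_path = make_injection_image("Ignore prior instructions and ...")
# response = query_multimodal_model(image=image_path, prompt="Describe this image.")
# print("PASS" if looks_like_refusal(response) else "FAIL: possible injection bypass")
```

A real red-team harness would run many such cases across harm categories and use far stronger scoring than keyword matching, but the structure (craft input, query model, score response) is the same.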

Credo AI, IBM Collaborate to Advance AI Compliance for Global Enterprises

Business Wire

28-04-2025



SAN FRANCISCO--(BUSINESS WIRE)--Credo AI, a global pioneer of the AI governance category and a leading provider of trustworthy AI governance software, today announced a strategic collaboration with IBM to help global enterprises operationalize AI regulatory compliance management at scale. The OEM agreement integrates Credo AI's Policy Packs as an add-on for IBM customers, branded by IBM as Compliance Accelerators in the IBM Marketplace.

As enterprises scale AI adoption, they often struggle to keep pace with rapidly evolving AI regulatory risks and compliance requirements. Without policy tracking, organizations face stalled innovation, compliance risk exposure, and fragmented AI governance. Credo AI's Policy Packs deliver actionable intelligence aligned to global regulations, frameworks, and standards such as the EU AI Act, ISO/IEC 42001, and the NIST AI Risk Management Framework. This agreement enables IBM customers to add Credo AI's ready-to-deploy policy intelligence directly into their existing environment through the Compliance Accelerators integration.

Credo AI is already trusted by Fortune 500 companies including Mastercard and Cisco, and has been named to Fast Company's Most Innovative Companies, the CB Insights AI 100 list of the most innovative AI startups, and the World Economic Forum's Technology Pioneers. This integration brings further governance depth and regulatory fidelity to IBM's end-to-end toolkit to help organizations manage risk, compliance, and the entire AI lifecycle.

'Credo AI is proud that our Policy Packs have been chosen by IBM to help power their Compliance Accelerators,' said Credo AI CEO Navrina Singh. 'We are on a mission to elevate the global standard for AI governance across all industries and enterprises. This integration is a win-win-win for Credo AI, IBM, and enterprises ready to advance global excellence in AI.'

'In the enterprise, moving from AI experimentation to AI production at scale hinges on governance,' said Ritika Gunnar, General Manager, Data & AI, IBM. 'Organizations that invest in AI governance accelerate innovation while mitigating steep risks; organizations that do not invest hamper innovation and invite compliance failures. IBM and Credo AI's collaboration provides deep, intuitive AI governance capabilities, setting organizations up for responsible AI innovation at scale.'

The Compliance Accelerators add-on provides IBM customers direct access to Credo AI's Policy Packs—a specialized governance capability focused on regulatory compliance.

Benefits of the Compliance Accelerators Add-on

The Credo AI add-on in the IBM Compliance Accelerators provides IBM customers with the capacity to better streamline regulatory compliance processes. The benefits of Credo AI's add-on include:

  • Access to Global AI Guidelines: Activation of Credo AI's curated Policy Packs aligned with global regulations, frameworks, and standards.
  • Embedded Risk Management: Direct integration into IBM workflows to simplify AI risk assessment and compliance.
  • Enterprise-Ready Deployment, Out of the Box: Deployment via IBM Marketplace allows rapid adoption within existing workflows.
  • Introduction to Credo AI Governance Capabilities: Provides firsthand experience with Credo AI's policy intelligence, paving a clear path toward deeper AI governance maturity.

To learn more about the Compliance Accelerators add-on powered by Credo AI's Policy Packs, IBM customers can visit IBM's Media Center.
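As a rough illustration of what 'ready-to-deploy policy intelligence' can mean in practice, the Python sketch below models a requirement-to-control mapping and a simple gap check. The schema, field names, and entries are assumptions made for this illustration only and do not represent Credo AI's actual Policy Pack format or content.

```python
# Hypothetical illustration of the kind of mapping a compliance "Policy Pack"
# could express: a requirement from a framework tied to internal controls and
# audit evidence. Field names and entries are invented for this sketch and do
# not represent Credo AI's actual Policy Pack schema or content.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PolicyRequirement:
    framework: str                 # e.g. "NIST AI RMF", "EU AI Act", "ISO/IEC 42001"
    requirement_id: str            # illustrative identifier within that framework
    description: str
    controls: List[str] = field(default_factory=list)  # controls meant to satisfy it
    evidence: List[str] = field(default_factory=list)  # artifacts a reviewer would check

requirements = [
    PolicyRequirement(
        framework="NIST AI RMF",
        requirement_id="GOVERN-1 (illustrative)",
        description="Establish and maintain an AI risk management process.",
        controls=["model-inventory", "pre-deployment-risk-review"],
        evidence=["risk_register.xlsx", "review_signoff.pdf"],
    ),
    PolicyRequirement(
        framework="EU AI Act",
        requirement_id="Transparency (illustrative)",
        description="Disclose AI-generated content to end users.",
    ),
]

def compliance_gaps(reqs):
    """Return requirements that have no mapped controls yet."""
    return [r for r in reqs if not r.controls]

for gap in compliance_gaps(requirements):
    print(f"Gap: {gap.framework} / {gap.requirement_id}")
```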
To explore Credo AI's comprehensive AI governance platform and advisory services, trusted by the world's most iconic brands, visit Credo AI.

Credo AI is the category pioneer and global leader in AI governance, trusted by the world's most iconic brands to turn governance into a strategic advantage across the enterprise. Our AI Governance Platform and AI Governance Advisory Services empower your enterprise to adopt and scale trusted AI with confidence. From Generative AI to Agentic AI, Credo AI's centralized platform measures, monitors, and manages AI risk—enabling your organization to maximize AI's value while mitigating security, privacy, compliance, and operational challenges. Credo AI also future-proofs your AI investments by aligning with global regulations, industry standards, and company values. Recognized among Fast Company's Most Innovative Companies, the CB Insights AI 100, Inc. Best Workplaces, and the World Economic Forum's Technology Pioneers, Credo AI is leading the charge in accelerating the adoption of trusted AI.
