
How AI Can Transform Cybersecurity Compliance And Hardening Efforts
Organizations face an unprecedented challenge in 2025: balancing rapid technology adoption with increasingly complex cybersecurity compliance requirements. As regulations like the EU's Digital Operational Resilience Act (DORA) and updated NIST frameworks take effect, artificial intelligence presents a transformative solution that can significantly reduce compliance burdens while strengthening security resilience.
The Compliance Crisis
The cybersecurity landscape has become fragmented and overwhelming. According to KPMG research, 65% of organizations report low confidence in investing in new cyber technologies due to a lack of understanding or trust. Meanwhile, Zscaler ThreatLabz found that enterprises are blocking nearly 60% of AI/ML transactions, indicating that compliance concerns are causing overly restrictive approaches that hinder innovation.
Traditional compliance relies on manual processes, periodic audits and reactive remediation methods that are resource-intensive and inadequate for addressing dynamic cyber threats. According to Splunk, "While 42% of board members believe CISOs spend an extensive amount of time and effort on regulatory activities, only 29% of CISOs say that is the case." This reveals a perception gap that highlights how compliance obligations can divert security leaders from strategic initiatives, creating a cycle of reactive management that leaves organizations vulnerable.
AI As A Compliance Force Multiplier
AI offers a path toward efficient, proactive compliance management. Rather than replacing human oversight, AI serves as a force multiplier that automates routine tasks, identifies vulnerabilities before they become critical and provides real-time compliance insights across complex organizational structures.
Traditional audits occur quarterly or annually, leaving vulnerability gaps between assessments. AI-powered solutions monitor systems continuously, analyzing configurations, access patterns and data flows to identify compliance deviations in real time. Machine learning algorithms process vast amounts of log data and security metrics to detect patterns indicating potential violations, which is particularly valuable for organizations managing legacy systems alongside modern infrastructure.
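As a concrete illustration of this kind of continuous monitoring, the sketch below trains an unsupervised model on routine activity and flags new observations that deviate from it. The features, sample values and contamination rate are illustrative assumptions, not the configuration of any specific product.

```python
# A minimal sketch of continuous compliance monitoring: an unsupervised model
# learns what routine activity looks like and flags deviations for review.
# All feature names and values here are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features extracted from parsed log data, per account per hour:
# [failed_logins, distinct_hosts_accessed, share_of_activity_after_hours]
baseline_activity = np.array([
    [2, 3, 0.05],
    [1, 4, 0.10],
    [3, 2, 0.00],
    [2, 5, 0.08],
    [1, 3, 0.02],
    [2, 2, 0.06],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_activity)

# New observations streamed from the SIEM (invented values)
new_activity = np.array([
    [2, 4, 0.07],    # looks routine
    [40, 25, 0.90],  # burst of failed logins, mostly outside business hours
])

for features, verdict in zip(new_activity, model.predict(new_activity)):
    status = "DEVIATION - route to compliance review" if verdict == -1 else "within baseline"
    print(features, status)
```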
Organizations struggle with patch management because of the complexity of their IT environments. AI revolutionizes this by analyzing vulnerability data, threat intelligence and system criticality to prioritize patches automatically. Rather than relying solely on vendor severity ratings, AI weighs organizational context: based on active threat intelligence, it might prioritize a medium-severity patch for a public-facing service over a high-severity patch for an isolated internal system.
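A minimal risk-scoring sketch makes the idea concrete. The weights, fields and CVE identifiers below are assumptions chosen for illustration; a production system would pull CVSS scores, exploit intelligence and asset criticality from live feeds and an asset inventory.

```python
# A toy contextual risk score for patch prioritization. Weights and sample
# data are illustrative assumptions, not a standardized scoring model.
from dataclasses import dataclass

@dataclass
class PatchCandidate:
    cve_id: str
    vendor_severity: float      # e.g. CVSS base score, 0-10
    internet_facing: bool       # exposure of the affected asset
    actively_exploited: bool    # signal from threat intelligence feeds
    asset_criticality: float    # 0-1, business importance of the system

def contextual_risk(p: PatchCandidate) -> float:
    """Blend vendor severity with organizational context."""
    score = p.vendor_severity
    if p.internet_facing:
        score *= 1.5            # public exposure outweighs raw severity
    if p.actively_exploited:
        score *= 2.0            # known exploitation is the strongest signal
    return score * (0.5 + 0.5 * p.asset_criticality)

backlog = [
    PatchCandidate("CVE-2025-0001", 8.8, False, False, 0.3),  # high severity, isolated host
    PatchCandidate("CVE-2025-0002", 5.4, True, True, 0.9),    # medium severity, exposed and exploited
]

for p in sorted(backlog, key=contextual_risk, reverse=True):
    print(f"{p.cve_id}: contextual risk {contextual_risk(p):.1f}")
```

In this toy example the medium-severity, internet-facing, actively exploited flaw outranks the higher-severity issue on an isolated host, mirroring the prioritization described above.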
The regulatory landscape evolves rapidly. Recent policy updates require organizations to adapt security practices frequently. AI helps organizations stay current by automatically analyzing new requirements and mapping them to existing security controls. Natural language processing algorithms parse regulatory documents, identify specific requirements and compare them to current compliance postures, enabling proactive gap remediation.
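The sketch below shows the basic mechanic using simple text similarity: a new regulatory clause is compared against an existing control catalog to surface the most likely mapping. The requirement wording and control descriptions are invented for illustration; real systems would use richer language models and the organization's actual control library.

```python
# A minimal sketch of mapping new regulatory text to existing controls with
# TF-IDF similarity. Control IDs and texts below are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

controls = {
    "AC-2": "Account management: provision, review and disable user accounts.",
    "SI-2": "Flaw remediation: identify, report and correct system flaws via patching.",
    "AU-6": "Audit review: analyze audit records for indications of inappropriate activity.",
}

new_requirement = (
    "Entities shall apply security patches to critical systems without undue delay "
    "and document remediation timelines."
)

texts = list(controls.values()) + [new_requirement]
matrix = TfidfVectorizer(stop_words="english").fit_transform(texts)
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Rank existing controls by similarity to the new requirement
for (control_id, _), score in sorted(zip(controls.items(), scores),
                                     key=lambda pair: pair[1], reverse=True):
    print(f"{control_id}: similarity {score:.2f}")
```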
Implementation Strategies
Organizations should begin with high-impact, low-risk applications. Configuration management represents an ideal starting point because AI can verify system compliance with security baselines without accessing sensitive data or making autonomous changes. Security information and event management (SIEM) enhancement offers another entry point, improving threat detection accuracy while reducing false positives.
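A read-only baseline check of this kind can be as simple as comparing collected settings against an approved hardening baseline and reporting drift, as in the hypothetical sketch below; AI then adds value by prioritizing which deviations matter most. The setting names and values are assumptions for illustration, not a specific benchmark.

```python
# A minimal, read-only baseline drift check. Settings are illustrative; real
# checks would map to CIS Benchmark or STIG identifiers pulled from tooling.
approved_baseline = {
    "password_min_length": 14,
    "ssh_root_login": "disabled",
    "audit_logging": "enabled",
}

collected_config = {
    "password_min_length": 8,
    "ssh_root_login": "disabled",
    "audit_logging": "enabled",
}

# Keep only settings that deviate from the approved baseline
drift = {
    setting: (expected, collected_config.get(setting))
    for setting, expected in approved_baseline.items()
    if collected_config.get(setting) != expected
}

for setting, (expected, actual) in drift.items():
    print(f"Non-compliant: {setting} is {actual!r}, baseline requires {expected!r}")
```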
Rather than implementing comprehensive solutions immediately, build capabilities gradually through pilot projects that demonstrate value and develop internal expertise. Focus on areas where manual processes are most time-consuming and error-prone; these deliver the clearest ROI. Invest in training programs to develop both technical AI management skills and analytical capabilities for interpreting AI outputs.
Organizations must maintain transparency in AI implementations to satisfy oversight requirements. AI systems used for compliance should provide clear explanations for recommendations and maintain detailed decision logs. This transparency is essential for regulatory compliance and stakeholder trust.
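One lightweight way to support that transparency is to write every AI-generated recommendation to an append-only decision log that records the evidence used and a human-readable rationale. The sketch below is a minimal, hypothetical example; the field names are assumptions rather than any mandated schema.

```python
# A minimal sketch of an auditable decision log for AI-driven recommendations.
# Field names are illustrative; the point is that every automated
# recommendation carries its inputs, rationale and a reviewable timestamp.
import json
from datetime import datetime, timezone

def log_recommendation(finding: str, inputs: dict, rationale: str,
                       path: str = "ai_decision_log.jsonl") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "finding": finding,
        "inputs": inputs,          # the evidence the model actually used
        "rationale": rationale,    # human-readable explanation for reviewers
        "reviewed_by": None,       # filled in when a human validates the call
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_recommendation(
    finding="Prioritize CVE-2025-0002 ahead of higher-severity backlog items",
    inputs={"vendor_severity": 5.4, "internet_facing": True, "actively_exploited": True},
    rationale="Active exploitation of an internet-facing service outweighs raw CVSS score.",
)
```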
Addressing Key Challenges
AI effectiveness depends heavily on data quality and integration. Organizations often struggle with siloed systems and inconsistent data formats. Before implementing AI solutions, invest in data governance and integration capabilities to ensure AI systems have access to comprehensive, accurate information. Implement data quality standards and automated validation processes.
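Automated validation can start small. The sketch below checks incoming records against a required schema and a basic format rule before they feed an AI pipeline; the fields and rules are illustrative assumptions rather than a complete data-governance framework.

```python
# A minimal sketch of automated data-quality validation ahead of an AI
# compliance pipeline. Required fields and format rules are illustrative.
REQUIRED_FIELDS = {"asset_id", "timestamp", "source_system", "event_type"}

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality issues; an empty list means the record passes."""
    issues = [f"missing field: {field}" for field in REQUIRED_FIELDS - record.keys()]
    if "timestamp" in record and not str(record["timestamp"]).endswith("Z"):
        issues.append("timestamp not in UTC ISO-8601 format")
    return issues

records = [
    {"asset_id": "srv-01", "timestamp": "2025-08-04T10:00:00Z",
     "source_system": "siem", "event_type": "login"},
    {"asset_id": "srv-02", "timestamp": "08/04/2025 10:00",
     "source_system": "siem"},  # inconsistent format, missing event_type
]

for record in records:
    problems = validate_record(record)
    print(record["asset_id"], "OK" if not problems else problems)
```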
Successfully implementing AI for compliance also depends on people. IT and security teams need new skills to manage AI tooling and interpret its outputs, and some staff will resist the change; address that resistance through education, demonstrated value and gradual implementation that builds confidence over time.
Balance AI security benefits with deployment risks. CISA guidance emphasizes applying zero-trust principles to AI systems and implementing robust governance frameworks. Conduct thorough risk assessments and implement appropriate safeguards before production deployment. For third-party AI solutions, develop comprehensive vendor management processes addressing AI-specific risks and transparency requirements.
Measuring Success
Establish clear metrics for evaluating AI implementation success (a toy calculation of several of these follows the list):
• Efficiency Metrics: Time required for compliance assessments, the ratio of automated to manual checks and reduction in administrative burden
• Effectiveness Metrics: Percentage of violations detected proactively rather than reactively, time to remediation and measurable improvement in security posture
• Cost Metrics: Personnel cost reduction, decreased audit preparation time and avoided violation costs
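As referenced above, a toy calculation shows how a few of these metrics might be derived from tracking data; the sample figures are invented purely for illustration.

```python
# Toy metrics sketch with invented figures, purely for illustration.
checks = {"automated": 1840, "manual": 210}
violations = {"detected_proactively": 46, "detected_reactively": 9}
assessment_hours = {"before_ai": 320, "after_ai": 140}

automation_ratio = checks["automated"] / checks["manual"]
proactive_pct = 100 * violations["detected_proactively"] / sum(violations.values())
hours_saved_pct = 100 * (1 - assessment_hours["after_ai"] / assessment_hours["before_ai"])

print(f"Automated-to-manual check ratio: {automation_ratio:.1f}:1")
print(f"Violations caught proactively: {proactive_pct:.0f}%")
print(f"Reduction in assessment hours: {hours_saved_pct:.0f}%")
```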
The Path Forward
AI integration into cybersecurity compliance represents a fundamental shift toward proactive, efficient security management. As organizations face mounting pressure to protect data while managing complex regulatory requirements, AI offers a practical solution for achieving more with less.
Success requires thoughtful implementation, prioritizing transparency, maintaining human oversight and gradually building confidence in AI capabilities. Organizations beginning this journey now will be better positioned for the evolving threat landscape and increasingly complex regulatory environment.
The question isn't whether organizations can afford to implement AI for compliance; it's whether they can afford not to. In an environment where cyber threats evolve rapidly and regulatory requirements become more stringent, AI represents the most promising path toward sustainable cybersecurity resilience.
Leaders should view AI as a powerful amplifier of human cybersecurity capabilities rather than a replacement. By automating routine tasks, providing intelligent insights and enabling proactive risk management, AI helps organizations protect resources while serving stakeholders effectively.