Balancing AI innovation with security
Günter Bayer, chief information officer, at Stryve, said it was important to understand the dual nature of AI in security: its transformative potential in bolstering defences and the critical imperative of securely integrating AI into business operations.
The key insight: while AI revolutionises cyber defences, businesses must abandon no-cost tools that compromise data security and invest in enterprise-grade solutions to maintain control over their digital assets.
For years, AI has been an unsung hero in the security world, operating behind the scenes to enhance existing tools, Bayer said.
'AI has been around in the security space in the back end for a while, but it is still human-assisted. For example, we have used a filtering service for over ten years, and it has AI built into the backend,' he said.
This human-assisted AI has underpinned foundational security services like advanced filtering, where intelligent algorithms analyse vast amounts of data to identify and block threats. Such AI-powered services have become instrumental in protecting organisations from malicious content and unauthorised access for over a decade.
The quiet efficiency of these backend AI systems has allowed security professionals to operate more effectively, sifting through noise to pinpoint genuine threats.
This evolution signifies a shift from reactive, signature-based detection to proactive, predictive threat intelligence, where AI's ability to discern patterns and anomalies is paramount.
Agentic AI revolution
The current wave of AI integration is markedly different. We are now witnessing an explosion of so-called agentic AI applications, where AI models are increasingly interactive and directly accessible to users. This shift, while promising, introduces new complexities.
As AI models become more refined and adaptable, their potential to revolutionise security operations grows exponentially. AI-driven systems can analyse colossal datasets, identify emerging attack vectors, automate threat responses, and even predict potential vulnerabilities before they are exploited.
Imagine AI autonomously patching known exploits, detecting sophisticated phishing attempts by analysing behavioural anomalies, or orchestrating a comprehensive response to a cyberattack in real-time. This future, where AI functions as a hyper-efficient digital guardian, is rapidly approaching.
This transformative power comes with a significant caveat: the imperative to securely leverage AI. As Bayer states, 'in the cloud world, in relation to the people's data, it's always your responsibility; it doesn't matter who you are hosting with, AWS or Azure or whomever.'
This fundamental principle of data sovereignty and responsibility remains unchanged, even with the advent of advanced AI.
Hidden cost of free AI
The burgeoning trend of individuals and businesses leveraging 'free' AI services presents a particularly thorny security issue.
'What does free mean,' Bayer said. 'All it means is you don't pay with money. You do pay some other way, though, and that's not great. If you are in business, you should just pay.'
Recent incidents underscore this danger. One noted AI, despite robust security measures, recently faced a vulnerability where specific prompting could lead to data exfiltration. This highlights that even well-designed AI systems can be exploited if not used with extreme caution.
Programmers are one group susceptible to these risks, Bayer said. The temptation to input production code into free AI services for rapid debugging or code generation is high. However, this shortcut can inadvertently expose proprietary algorithms, trade secrets, or even critical vulnerabilities to third parties.
AI has been around in the security space in the back end for a while, but it is still human-assisted
'I know AI is helping programmers a lot but, many of them, they just want a quick fix, so they're putting production code into free services,' he said.
Furthermore, malicious actors are increasingly adept at 'bypassing the guardrails' of AI models, using clever prompts and techniques to extract sensitive information or generate harmful content. This 'prompt injection' and data mining by unauthorised parties represent a significant threat that organisations must actively mitigate.
Bayer's view on this is unequivocal: 'Don't use any free service. If you're a business, pay for it. If you want, say, Copilot, then pay for your own instance.'
This advice is not about stifling innovation but about exercising due diligence and prioritising robust security. For businesses, investing in enterprise-grade, paid AI solutions that offer dedicated instances, robust data privacy controls, and clear service level agreements is paramount.
This ensures that data remains within a controlled environment and that the responsibility for its security is clearly defined. While AI tools offer immense benefits in productivity and efficiency, their integration must be approached strategically: balancing innovation with unwavering security principles.
Today, AI's role in cybersecurity is at a crossroads. Its power to enhance defences, automate responses, and predict threats is undeniable. However, the secure adoption of AI across organisations is not merely a technical challenge but also a cultural and strategic one.
By embracing AI with a clear understanding of its risks and a commitment to secure implementation, businesses can harness its transformative power to build resilient and future-proof cyber defences.

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles

Business Post
9 hours ago
- Business Post
Balancing AI innovation with security
In the rapidly evolving landscape of cloud and cybersecurity, artificial intelligence (AI) is both a powerful enabler and a formidable challenge. While AI has long been a quiet force in the backend of security operations, its increasingly agentic and public-facing applications are poised to reshape how organisations defend their digital assets. Günter Bayer, chief information officer, at Stryve, said it was important to understand the dual nature of AI in security: its transformative potential in bolstering defences and the critical imperative of securely integrating AI into business operations. The key insight: while AI revolutionises cyber defences, businesses must abandon no-cost tools that compromise data security and invest in enterprise-grade solutions to maintain control over their digital assets. For years, AI has been an unsung hero in the security world, operating behind the scenes to enhance existing tools, Bayer said. 'AI has been around in the security space in the back end for a while, but it is still human-assisted. For example, we have used a filtering service for over ten years, and it has AI built into the backend,' he said. This human-assisted AI has underpinned foundational security services like advanced filtering, where intelligent algorithms analyse vast amounts of data to identify and block threats. Such AI-powered services have become instrumental in protecting organisations from malicious content and unauthorised access for over a decade. The quiet efficiency of these backend AI systems has allowed security professionals to operate more effectively, sifting through noise to pinpoint genuine threats. This evolution signifies a shift from reactive, signature-based detection to proactive, predictive threat intelligence, where AI's ability to discern patterns and anomalies is paramount. Agentic AI revolution The current wave of AI integration is markedly different. We are now witnessing an explosion of so-called agentic AI applications, where AI models are increasingly interactive and directly accessible to users. This shift, while promising, introduces new complexities. As AI models become more refined and adaptable, their potential to revolutionise security operations grows exponentially. AI-driven systems can analyse colossal datasets, identify emerging attack vectors, automate threat responses, and even predict potential vulnerabilities before they are exploited. Imagine AI autonomously patching known exploits, detecting sophisticated phishing attempts by analysing behavioural anomalies, or orchestrating a comprehensive response to a cyberattack in real-time. This future, where AI functions as a hyper-efficient digital guardian, is rapidly approaching. This transformative power comes with a significant caveat: the imperative to securely leverage AI. As Bayer states, 'in the cloud world, in relation to the people's data, it's always your responsibility; it doesn't matter who you are hosting with, AWS or Azure or whomever.' This fundamental principle of data sovereignty and responsibility remains unchanged, even with the advent of advanced AI. Hidden cost of free AI The burgeoning trend of individuals and businesses leveraging 'free' AI services presents a particularly thorny security issue. 'What does free mean,' Bayer said. 'All it means is you don't pay with money. You do pay some other way, though, and that's not great. If you are in business, you should just pay.' Recent incidents underscore this danger. One noted AI, despite robust security measures, recently faced a vulnerability where specific prompting could lead to data exfiltration. This highlights that even well-designed AI systems can be exploited if not used with extreme caution. Programmers are one group susceptible to these risks, Bayer said. The temptation to input production code into free AI services for rapid debugging or code generation is high. However, this shortcut can inadvertently expose proprietary algorithms, trade secrets, or even critical vulnerabilities to third parties. AI has been around in the security space in the back end for a while, but it is still human-assisted 'I know AI is helping programmers a lot but, many of them, they just want a quick fix, so they're putting production code into free services,' he said. Furthermore, malicious actors are increasingly adept at 'bypassing the guardrails' of AI models, using clever prompts and techniques to extract sensitive information or generate harmful content. This 'prompt injection' and data mining by unauthorised parties represent a significant threat that organisations must actively mitigate. Bayer's view on this is unequivocal: 'Don't use any free service. If you're a business, pay for it. If you want, say, Copilot, then pay for your own instance.' This advice is not about stifling innovation but about exercising due diligence and prioritising robust security. For businesses, investing in enterprise-grade, paid AI solutions that offer dedicated instances, robust data privacy controls, and clear service level agreements is paramount. This ensures that data remains within a controlled environment and that the responsibility for its security is clearly defined. While AI tools offer immense benefits in productivity and efficiency, their integration must be approached strategically: balancing innovation with unwavering security principles. Today, AI's role in cybersecurity is at a crossroads. Its power to enhance defences, automate responses, and predict threats is undeniable. However, the secure adoption of AI across organisations is not merely a technical challenge but also a cultural and strategic one. By embracing AI with a clear understanding of its risks and a commitment to secure implementation, businesses can harness its transformative power to build resilient and future-proof cyber defences.


Irish Examiner
16 hours ago
- Irish Examiner
AI screening of J1 students' social media 'prone to error', says Irish-American lobby group
A prominent Irish-American lobby group has hit out at the new requirements for J1 visas, and raised fears that artificial intelligence screening of social media accounts will be 'used extensively and is prone to error'. The Ancient Order of Hibernians said it had 'deep concerns' over the US Government directive requiring a review of up to five years of social media activity for every J1 visa applicant, which gives consular officers 'broad discretionary authority'. 'But without clear, objective criteria — and human oversight of AI screenings — we risk eroding the integrity of the J-1 Exchange Visitor Program, chilling participation among young applicants, depriving US government, businesses, educational and cultural institutions of essential summer interns, and weakening the longstanding ties that bind our nations,' its political education chair Neil F Cosgrove said. Irish students usually travel in large numbers to the United States each summer on J1 visas but, under the Trump administration, they now have to adjust the privacy settings on all their social media profiles to public. The US State Department recently announced it would now 'conduct a comprehensive and thorough vetting, including online presence, of all student and exchange visitor applicants' under the new guidance. The Ancient Order of Hibernians said given the way this vetting could be applied, this policy threatens one of America's most successful cultural and economic partnerships with Ireland. It said that officers may reach 'inconsistent decisions' as terms like 'hostility' and 'advocacy', which are meant to be flagged during screening, lack objective benchmarks. For students, the five-year look back on their social media will reach back to their early teens and could lead to treating youthful social media activity before maturity as some form of a security threat to the US, it said. It went on: 'Given the volume of applicants and the vast data set — five years of social media activity per person — we anticipate AI screening will be used extensively and is prone to error. 'In April 2025, PhD student Suguru Onda was wrongly flagged by an AI-driven criminal records check over a minor fishing limit citation in his home country of Japan, leading to an abrupt visa cancellation.' In the case of Mr Onda, his visa was reinstated but J1 students could find themselves already sent home before an error is corrected. The group added that the programme helps to fuel critical US sectors such as hospitality and tourism and businesses on the ground would feel the loss of Irish summer workers. It called on congress and policymakers to exclude any social media content from before someone turned 18, while requiring actual human checks of AI-generated flagged content along with a quarterly report on the use of AI screenings. Aontas na Mac Léinn in Éirinn (Amlé), formerly the Union of Students in Ireland, has also raised significant concerns about the measures and said they send a "damaging signal". Read More US visa crackdown is an assault on students' rights


Irish Independent
21 hours ago
- Irish Independent
Wicklow-Wexford chair of AI committee wants it to ‘lead public debate' on new tech
Chair of the new Oireachtas committee on artificial intelligence (AI), Deputy Malcolm Byrne, has said the committee aims to 'lead public debate' on the merits of the new technology and that it will explore the necessary 'guardrails' to ensure its safe use in the healthcare, education and energy sectors. Led by Wicklow-Wexford TD Malcolm Byrne, the AI committee – like other Oireachtas committees – will advise the Houses of the Oireachtas, receive submissions and presentations from members of the public, interest groups and government departments, while scrutinising government expenditure and debating proposed legislation. Encouraging individuals and interested organisations to submit views on issues they would like to see investigated, Deputy Byrne said the groundbreaking technology could make the delivery of public services more efficient, improve healthcare outcomes and 'speed up many of the mundane tasks in many jobs', but cautioned there will be challenges, and 'guardrails must be in place to ensure AI is designed and used safely'. 'The first public meeting of the committee took place last week, and the 14-member body will be examining issues in modular form from the autumn, looking at themes such as AI and healthcare, AI and education and AI and Energy,' the Fianna Fáil deputy said. 'I was honoured to be asked to Chair this Committee as how technology affects our lives really interests me, and I had been pushing to get such a committee established during the last Oireachtas 'There are a number of organisations and companies in Wicklow and Wexford already using AI in their processes, and it will soon be essential that everyone has some knowledge of this new tech.'