Latest news with #StateofIT


Techday NZ
12-06-2025
- Business
SMBs overestimate cyber readiness as tools & AI uptake lag
A new global survey shows a significant gap between small and medium-sized businesses' confidence in their cybersecurity readiness and the actual measures they have in place to defend against evolving threats. The "State of IT Security for SMBs in 2025" report, released by Devolutions, draws on responses from 445 IT, security, and executive professionals around the world. It finds that while 71% of SMBs say they feel confident in handling a major cybersecurity incident, only 22% report having an advanced cybersecurity posture. This disparity suggests that many organisations may be at greater risk than they believe.

PAM practices

The report highlights privileged access management (PAM) as a particular area of vulnerability. More than half of SMB respondents (52%) still depend on manual solutions, such as spreadsheets or shared digital vaults, to manage privileged credentials. This reliance on manual methods has actually increased since 2023, raising concerns about efficiency and security.

"Manual access management isn't just inefficient – it's dangerous," notes Maurice Côté, VP Product at Devolutions. "The human is often the weakest link – and spreadsheets don't make us stronger. SMBs need lightweight, easy-to-deploy PAM tools designed for their reality."

Despite the increasing risks, many SMBs have not adopted automated or fit-for-purpose tools to manage sensitive access rights, potentially exposing them to insider threats and credential misuse.

Slow uptake of AI

Artificial intelligence (AI) is being discussed widely as a potential game-changer for cybersecurity. The report finds that 71% of SMBs intend to increase their use of AI-driven tools, which can aid in threat detection, anomaly identification, and predictive analysis. However, only 25% of respondents are currently leveraging AI in their cybersecurity practices, and 40% say they have not started at all. The slower pace of adoption is partly attributed to concerns about cyber threats targeting AI systems themselves, issues of data privacy, and a shortage of in-house expertise to implement advanced technology.

"Artificial intelligence is a powerful advancement, but like fire, it must be handled with care," said Martin Lemay, CISO at Devolutions. "It's not without flaws, and its reliance on vast amounts of data makes strong governance and clear regulations essential to prevent misuse."

This highlights that while AI can offer efficiency and intelligence in defending digital assets, it introduces new challenges that SMBs must navigate carefully.

Budget issues

The report also notes a general trend of increased investment in cybersecurity, with 63% of SMBs boosting their security budgets. However, nearly a third still allocate less than 5% of their overall IT budgets to security-related spending. This raises questions about whether new investment is being targeted effectively toward the highest-priority areas.

"Budget increases are encouraging, but throwing more money at cybersecurity doesn't work if it's not aligned with real risks," said Simon Chalifoux, CIO at Devolutions. "SMBs need to spend with intention – on tools, processes and training that match their environment."

The survey findings indicate that organisations often spend in ways that do not correspond to their most significant security risks, leaving gaps that could be exploited by attackers.

From awareness to action

Across all key areas of PAM, AI adoption, and budgeting, the report identifies a pattern: increased awareness is not always translating into practical action.
While SMBs are more alert to cyber threats than in the past, many have not yet implemented measures that are widely considered best practice. "Cybersecurity isn't a checklist – it's a commitment," said David Hervieux, CEO of Devolutions. "It's not enough to feel secure; SMBs need to build the systems, habits and culture that make them secure. That means measuring their posture honestly – and investing like it truly matters. Because it does." As cyber threats become more sophisticated, organisations face growing pressure to close the gap between perceived preparedness and the reality of their cybersecurity defences. The report suggests that without updated tools, smarter spending, and a commitment to continuous improvement, SMBs risk remaining vulnerable as the threat landscape evolves.
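The contrast the report draws between spreadsheet-based credential handling and purpose-built, automated access management can be made concrete with a small sketch. The example below is illustrative only and is not tied to any Devolutions product: it assumes a HashiCorp Vault server with the KV v2 engine and the hvac Python client, and the secret path and environment variables are hypothetical. The point is that the credential is fetched, logged, and rotatable centrally rather than copied around in a spreadsheet.

```python
# Minimal sketch: fetching a privileged credential from a secrets vault at runtime
# instead of storing it in a shared spreadsheet. Assumes a HashiCorp Vault server
# with the KV v2 engine mounted at "secret" and the hvac library (pip install hvac).
import os
import hvac

def get_db_credentials():
    # Authenticate with a short-lived token supplied by the environment,
    # rather than a password hard-coded in a file or spreadsheet.
    client = hvac.Client(
        url=os.environ["VAULT_ADDR"],    # e.g. https://vault.example.com:8200
        token=os.environ["VAULT_TOKEN"],
    )
    # Read the current version of the secret; the access is recorded by Vault's audit log.
    secret = client.secrets.kv.v2.read_secret_version(path="databases/prod-admin")  # hypothetical path
    data = secret["data"]["data"]
    return data["username"], data["password"]

if __name__ == "__main__":
    user, _password = get_db_credentials()
    print(f"Retrieved credential for {user} (password not printed)")
```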


Techday NZ
10-06-2025
- Business
Agentic AI adoption rises in ANZ as firms boost security spend
New research from Salesforce has revealed that all surveyed IT security leaders in Australia and New Zealand (ANZ) believe that agentic artificial intelligence (AI) can help address at least one digital security concern within their organisations. According to the State of IT report, the deployment of AI agents in security operations is already underway, with 36 per cent of security teams in the region currently using agentic AI tools in daily activities, a figure projected to nearly double to 68 per cent over the next two years.

This surge in AI adoption is accompanied by rising investment, as 71 per cent of ANZ organisations plan to increase their security budgets in the coming year. While slightly lower than the global average (75 per cent), this signals a clear intent within the region to harness AI for strengthening cyber defences. AI agents are being relied upon for tasks ranging from faster threat detection and investigation to sophisticated auditing of AI model performance.

Alice Steinglass, Executive Vice President and General Manager of Salesforce's Platform, Integration, and Automation division, said, "Trusted AI agents are built on trusted data. IT security teams that prioritise data governance will be able to augment their security capabilities with agents while protecting data and staying compliant."

The report also highlights industry-wide optimism about AI's potential to improve security but notes hurdles in implementation. Globally, 75 per cent of surveyed leaders recognise their security practices need transformation, yet 58 per cent are concerned their organisation's data infrastructure is not yet capable of supporting AI agents to their full potential.

As both defenders and threat actors add AI to their arsenals, the risk landscape is evolving. Alongside well-known risks such as cloud security threats, malware, and phishing attacks, data poisoning has emerged as a new top concern. Data poisoning involves malicious actors corrupting AI training data sets to subvert AI model behaviour. This, together with insider threats and cloud risks, underscores the need for robust data governance and infrastructure.

Across the technology sector, the expanding use of AI agents is rapidly reshaping industry operations. Harsha Angeri, Vice President of Corporate Strategy and Head of AI Business at Subex, noted that AI agents equipped with large language models (LLMs) are already impacting fraud detection, business support systems (BSS), and operations support systems (OSS) in telecommunications. "We are seeing opportunities for fraud investigation using AI agents, with great interest from top telcos," Angeri commented, suggesting this development is altering longstanding approaches to software and systems architecture in the sector.

The potential of agentic AI extends beyond security and fraud prevention. Angeri highlighted the emergence of the "Intent-driven Network", in which user intent is seamlessly translated into desired actions by AI agents. In future mobile networks, customers might simply express their intentions, such as planning a family holiday, and rely on AI-driven networks to autonomously execute tasks, from booking arrangements to prioritising network resources for complex undertakings such as drone data transfers. Angeri refers to this approach as the "Intent-Net", promising hyper-personalisation and real-time orchestration of digital services.

The rapid penetration of AI chips in mobile devices also signals the mainstreaming of agentic AI.
Angeri stated that while only about 4 to 5 per cent of smartphones had AI chips in 2023, this figure has grown to roughly 16 per cent and is expected to reach 50 per cent by 2028, indicating widespread adoption of AI-driven mobile services.

However, industry experts caution that agentic AI comes with considerable technical and operational challenges. Yuriy Yuzifovich, Chief Technology Officer for AI at GlobalLogic, described how agentic AI systems, driven by large language models, differ fundamentally from classical automated systems. "Their stochastic behaviour, computational irreducibility, and lack of separation between code and data create unique obstacles that make designing resilient AI agents uniquely challenging," he said. Unlike traditional control systems, where outcomes can be rigorously modelled and predicted, AI agents require full execution to determine behaviour, often leading to unpredictable outputs.

Yuzifovich recommended that enterprises adopt several key strategies to address these challenges: using domain-specific languages to ensure reliable outputs, combining deterministic classical AI with generative approaches, ensuring human oversight for critical decisions, and designing with modularity and extensive observability for traceability and compliance. "By understanding the limitations and potentials of each approach, we can design agentic systems that are not only powerful but also safe, reliable, and aligned with human values," he added.

As businesses across sectors embrace agentic AI, the coming years will test the ability of enterprises and technology vendors to balance innovation with trust, resilience, and security. With rapid advancements in AI agent deployment, the industry faces both the opportunity to transform digital operations and the imperative to manage the associated risks responsibly.
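Two of the strategies Yuzifovich describes, constraining model outputs deterministically and keeping a human in the loop for critical decisions, can be illustrated with a small sketch. The example below is a generic illustration rather than anything prescribed in the report: it assumes an agent's reply is expected to be JSON matching a narrow schema, validates it with ordinary code, and escalates to a human when validation fails or the proposed action is high-risk. The action names and reply strings are hypothetical.

```python
# Minimal sketch: deterministic validation of an LLM agent's output plus
# human sign-off for high-risk actions. Action names are hypothetical examples.
import json

ALLOWED_ACTIONS = {"quarantine_host", "reset_password", "open_ticket"}
HIGH_RISK_ACTIONS = {"quarantine_host"}  # always requires human approval

def validate_agent_reply(reply_text: str) -> dict:
    """Parse and validate the agent's JSON reply; raise ValueError if malformed."""
    data = json.loads(reply_text)  # json.JSONDecodeError subclasses ValueError
    if data.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unsupported action: {data.get('action')!r}")
    if not isinstance(data.get("target"), str) or not data["target"]:
        raise ValueError("missing or invalid 'target'")
    return data

def execute_with_oversight(reply_text: str) -> None:
    try:
        plan = validate_agent_reply(reply_text)
    except ValueError as exc:
        print(f"Rejected agent output, escalating to an analyst: {exc}")
        return
    if plan["action"] in HIGH_RISK_ACTIONS:
        print(f"Human approval required before running {plan['action']} on {plan['target']}")
        return
    print(f"Executing {plan['action']} on {plan['target']}")

# A well-formed, low-risk plan is executed; free-text output is rejected.
execute_with_oversight('{"action": "open_ticket", "target": "host-42"}')
execute_with_oversight('I think we should probably reboot everything')
```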


Techday NZ
09-06-2025
- Business
AI agents to play key role in ANZ IT security, report finds
The latest Salesforce State of IT report indicates that IT security leaders in Australia and New Zealand anticipate AI agents will address at least one of their organisation's digital security issues. The survey reveals that all respondents see a role for AI agents in assisting with IT security, with 36 per cent of IT security teams in the region currently using such agents in their daily operations. The proportion of security teams using AI agents is expected to grow rapidly, with predictions it will reach 68 per cent within the next two years.

According to the findings, 71 per cent of organisations in Australia and New Zealand are planning to increase their security budgets during the year ahead, just below the global average of 75 per cent. AI agents were highlighted as being capable of supporting various tasks, including faster threat detection, more efficient investigations, and comprehensive auditing of AI model performance.

The global survey, which included more than 2,000 enterprise IT security leaders, with 100 respondents from Australia and New Zealand, also pointed to several challenges associated with adopting AI in security practices. Despite widespread recognition that practices need to evolve, with 75 per cent of respondents acknowledging the need for transformation, 58 per cent expressed concern that their organisations' data infrastructure was not yet ready to maximise the potential of AI agents.

"Trusted AI agents are built on trusted data," said Alice Steinglass, EVP & GM, Salesforce Platform, Integration, and Automation. "IT security teams that prioritise data governance will be able to augment their security capabilities with agents while protecting data and staying compliant."

The report noted that while both IT professionals and malicious actors are integrating AI into their operations, autonomous AI agents offer an opportunity for security teams to reduce manual workloads and focus on more complex challenges. However, deploying agentic AI successfully requires a strong foundation in data infrastructure and governance.

In addition to familiar threats such as cloud security vulnerabilities, malware, and phishing, the report found that IT leaders now also rank data poisoning within their top three concerns. Data poisoning involves the manipulation of AI training data sets by malicious actors. This concern is cited alongside cloud security threats and insider or internal threats.
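To make the data-poisoning threat described above concrete, the toy sketch below shows how silently relabelling part of a training set can subvert a model's behaviour. It is a generic illustration using scikit-learn's synthetic data utilities, not an example from the Salesforce report; the dataset, model, and poisoning rate are all assumptions chosen for clarity.

```python
# Toy illustration of data poisoning: an attacker relabels part of one class in the
# training data so the resulting model under-detects it. Requires numpy and scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary task standing in for "class 1 = malicious, class 0 = benign".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def train_and_report(labels, name):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    preds = model.predict(X_test)
    print(f"{name}: accuracy={accuracy_score(y_test, preds):.3f} "
          f"recall(class 1)={recall_score(y_test, preds):.3f}")

# Baseline trained on clean labels.
train_and_report(y_train, "clean   ")

# Poisoned run: 40% of class-1 training examples are quietly relabelled as class 0,
# biasing the model toward missing exactly the cases the attacker cares about.
poisoned = y_train.copy()
class1 = np.where(poisoned == 1)[0]
flipped = rng.choice(class1, size=int(0.4 * len(class1)), replace=False)
poisoned[flipped] = 0
train_and_report(poisoned, "poisoned")
```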


Khaleej Times
26-03-2025
- Business
86% of software development teams in UAE will use AI agents within two years
AI agents are set to revolutionise software development processes in the UAE, where 100 per cent of teams use, or expect to use, AI for code generation, a report showed. According to Salesforce's new State of IT survey of software development leaders, UAE leaders expect their teams will increasingly focus on working with business stakeholders, editing code written by AI, and architecting complex systems.

The large global study of more than 2,000 software development leaders, including 100 IT leaders in the UAE and a supplementary survey of 250 frontline developers in the United States, found that 86 per cent of teams in the UAE will use AI agents within two years and 70 per cent believe AI agents will be as essential as traditional development tools. The UAE data aligns with global trends, with more than 9 in 10 developers around the world excited about AI's impact on their careers, while an overwhelming 96 per cent expect it to change the developer experience for the better. The survey also revealed that more than four in five globally believe AI agents will become as essential to app development as traditional software tools.

The report highlights nearly unanimous excitement about agentic AI. Developers are not only looking to agents to unlock greater efficiency and productivity, but 92 per cent globally believe agentic AI will help them advance in their careers. Some developers, however, believe that they, as well as their organisations, need more training and resources to build and deploy a digital workforce of AI agents.

Developers have often been painted as wary of AI, but the new research reveals they are enthusiastic about the industry's shift to AI agents. The arrival of agentic AI provides developers with the opportunity to focus less on tasks like writing code and debugging and grow into more strategic, high-impact work. And with developers increasingly using agents powered by low-code/no-code tools, development is becoming faster, easier, and more efficient than ever, regardless of developers' coding abilities.

'This survey indicates a strong enthusiasm for AI among UAE developers. However, the results also reveal that organisations have significant work to retool for the agentic AI era. Salesforce looks forward to providing partners and customers across the UAE and the Middle East with the tools to deploy and manage agentic AI effectively, allowing them to supercharge their transformation plans while supporting ambitious government goals such as the UAE National Strategy for Artificial Intelligence 2031,' said Mohammed Alkhotani, Senior Vice President and General Manager, Salesforce Middle East.

'AI agents are revolutionising the way developers work, making software development faster, more efficient, and more enjoyable. This powerful digital workforce streamlines development by assisting with writing, reviewing, and optimising code — unlocking new levels of productivity. By automating tedious tasks like data cleaning, integration, and basic testing, AI agents free developers to shift their focus from manual coding to high-value problem-solving, architecture, and strategic decision-making,' said Alice Steinglass, EVP & GM, Platform, Integration and Automation.

Developers are enthusiastic about agentic AI and its impact on their careers

Respondents are excited for agents to take on simple, repetitive tasks, freeing them up to focus on high-impact projects that contribute to larger business goals.
96 per cent of developers globally are enthusiastic about AI agents' impact on the developer experience. Developers are most eager to use AI agents for debugging and error resolution, then for generating test cases and building repetitive code. The arrival of agentic AI comes at a time when 92 per cent of developers are looking to measure their productivity based on impact over output. With the help of AI agents, developers believe they'll focus more on high-impact projects like AI oversight and architecting complex systems.

With agents powered by low-code/no-code tools, developers of all levels can now build and deploy agents. Respondents believe these tools will help democratise and scale AI development for the better:

- 85 per cent of developers globally, and 68 per cent in the UAE, who are using agentic AI currently use low-code/no-code tools.
- 77 per cent of developers globally, and 78 per cent in the UAE, say that low-code/no-code tools can help democratise AI development.
- 78 per cent of developers globally say that the use of low-code/no-code app development tools can help scale AI development.

Developers say updated infrastructure, more testing capabilities, and skilling opportunities are critical as they transition to building and deploying AI agents:

- Infrastructure needs: Many developers (82 per cent globally, and 81 per cent in the UAE) believe their organisation needs to update its infrastructure to build and deploy AI agents. Over half (56 per cent) of developers say their data quality and accuracy isn't sufficient for the successful development and implementation of agentic AI.
- Testing capabilities: Nearly half (48 per cent globally and 46 per cent in the UAE) of developers say their testing processes aren't fully prepared to build and deploy AI agents.
- Skills and knowledge: More than 80 per cent of developers globally, and 74 per cent in the UAE, believe AI knowledge will soon be a baseline skill for their profession, but over half don't feel their skillsets are fully prepared for the agentic era. Meanwhile, 56 per cent of software development leaders in the UAE say they've introduced employee training on AI.
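The survey's finding that developers are most eager to point agents at debugging and test generation can be illustrated with a minimal sketch. The snippet below is a generic example, not Salesforce tooling: it assumes the OpenAI Python SDK with an OPENAI_API_KEY in the environment (any LLM provider with a chat API would work similarly), and the model name, prompt, and function under test are illustrative choices. The generated tests are a draft for a developer to review and edit, matching the report's emphasis on developers shifting toward oversight.

```python
# Minimal sketch: delegating test-case generation to an LLM "agent".
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set;
# the model name and the function under test are illustrative, not prescribed.
from openai import OpenAI

FUNCTION_UNDER_TEST = '''
def slugify(title: str) -> str:
    """Lowercase a title and replace runs of non-alphanumerics with single hyphens."""
    import re
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
'''

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do
    messages=[
        {"role": "system", "content": "You write concise pytest test functions."},
        {"role": "user", "content": "Write pytest tests covering normal input, "
                                    "punctuation, and an empty string for:\n"
                                    + FUNCTION_UNDER_TEST},
    ],
)

# Print the drafted tests for human review rather than running them blindly.
print(response.choices[0].message.content)
```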