
AI leaders urge ethics, data & trust to unlock value
Wade Weirman, Principal Data Lead at Rackspace Technology ANZ, underscores that the real engine behind artificial intelligence is not the algorithms or technologies themselves, but the data that feeds them. "AI only works when data is trusted, accessible, and embedded across the organisation," Weirman says. He warns that without strong data leadership, AI deployments risk misfiring or even generating biased results that could erode public trust. "Getting this right isn't optional; it's essential for long-term scalability and societal trust."
For David Lloyd, Chief AI Officer at Dayforce, the spotlight is now on HR teams to steer their organisations through the current wave of AI-driven change. Lloyd believes HR has a unique opportunity to lead business transformation, emphasising the adoption of AI agents that automate complex, high-volume tasks and foster a more adaptable workforce. "HR can become the driving force behind ethical and trustworthy AI adoption, shaping how organisations align technology with values."
From a technology infrastructure perspective, Justin Hurst, Chief Technology Officer APAC at Extreme Networks, advises businesses to approach AI with careful planning rather than hasty adoption. Hurst suggests that teams should be given room to experiment and learn, while training in data literacy and network automation must become strategic priorities. "The future of network engineering is not about replacing people with AI, but about enabling them to work smarter and more strategically," he notes.
The environmental impact of AI is also under scrutiny. Simon Wistow, Co-Founder of Fastly, points to the growing significance of energy efficiency in AI development. With nearly half of respondents in Fastly's recent survey indicating a preference for energy-efficient AI models if costs were tied to consumption, Wistow calls for transparency and optimisation of infrastructure. "We all need to start thinking about AI and the internet as something physical - because it is. It uses real resources, generates emissions, and has real-world consequences," he observes.
Matthew Hardman, APAC Chief Technology Officer at Hitachi Vantara, reinforces the importance of enterprise-wide data governance and infrastructure. His company's survey found that 43% of successful AI projects in Australia attributed their success to strong governance and project management, with 35% highlighting high-quality data. "By addressing these challenges and responding to increased scrutiny from boards and regulators, businesses can unlock AI's transformative potential," Hardman says.
Cybersecurity emerges as both a risk and opportunity in today's AI landscape. Les Williamson, Regional Director ANZ at Check Point Software Technologies, points to a recent surge in AI-powered cyberattacks and urges organisations to embed security measures from the design stage. "A well-governed AI can revolutionise cyber security, streamline auditing processes, and ensure regulatory compliance across industries," he says.
Patrick Harding, Chief Product Architect at Ping Identity, and Gareth Cox of Exabeam both highlight the evolving nature of identity management in an AI-driven world - a landscape complicated by autonomous agents and deepfake technologies. Harding stresses, "Building and maintaining trust in every digital interaction is more essential than ever," urging organisations to rethink traditional security and identity models.
On the workforce front, Shaun Leisegang of Tecala and Pieter Danhieux of Secure Code Warrior see AI as an enabler rather than a replacement, shifting the focus from automation to human augmentation. "AI is not a replacement for human potential, but rather a partner in unlocking it," says Leisegang. Danhieux echoes this sentiment, emphasising the necessity for human expertise in navigating AI's vulnerabilities and maximising its productivity gains.
Across industries, the message is clear: to unlock the vast opportunities presented by AI, organisations in Australia and New Zealand must balance rapid innovation with responsibility, sustainability, and a relentless focus on both data quality and human talent. As companies integrate AI deeper into their operations, leadership, governance, and transparency will be key to realising not just smarter machines, but smarter organisations able to thrive in an increasingly AI-powered world.

Related Articles


Techday NZ
23-07-2025
DDoS attacks hit new peak with over 250 billion requests in major June surge
Fastly's latest DDoS Weather Report for June 2025 has detailed a surge in sophisticated attacks, including a coordinated event targeting a major high technology provider with over 250 billion malicious requests. The company's findings are based on telemetry from its global edge network, which handles up to 427 terabits per second of traffic and 1.8 trillion requests each day. Fastly's systems detected trillions of attempted distributed denial-of-service (DDoS) attacks at network layers 3 and 4, but new trends point to more elusive and dangerous application-layer (layer 7) attacks.

According to the June report, the scale and duration of attacks hit new highs, with Fastly observing nearly two attacks per minute on average throughout the month. The month's figures were heavily skewed by two days of unprecedented activity on 6 and 7 June, which saw attack volumes twenty times greater than any other day in 2025.

Major incident details

On these two days, attackers focused their efforts on a single large enterprise customer in the high technology sector. Fastly reports that "over the course of just two days, bad actors launched two separate attacks reaching a cumulative 250+ billion requests." The initial attack started at 10 pm local time and lasted for over four hours, peaking at 1.6 million requests per second. The attack originated from numerous countries, including Germany, China, the United States, India, and especially the Netherlands. Fastly's systems identified and contained the attack within seconds, using identifiers such as hostname and TLS details to differentiate malicious from legitimate traffic. The first wave concluded at around 2:15 am, but less than thirty minutes later, a second barrage began and persisted for 19 more hours, peaking even higher at 1.7 million requests per second.
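Detecting a volumetric spike like this ultimately comes down to tracking request rates per target over short windows. The sketch below is purely illustrative and is not Fastly's detection logic (which operates on aggregated edge telemetry, not per-request loops); the class name and threshold are invented for the example:

```python
from collections import deque

class SlidingWindowRateMonitor:
    """Hypothetical sliding-window rate monitor: flags traffic once
    the request rate over a short window exceeds a threshold.
    Illustrative only, not a production DDoS detector."""

    def __init__(self, threshold_rps, window_seconds=1.0):
        self.threshold_rps = threshold_rps
        self.window = window_seconds
        self.timestamps = deque()

    def record_request(self, now):
        self.timestamps.append(now)
        # Drop events that have aged out of the window.
        while self.timestamps and self.timestamps[0] <= now - self.window:
            self.timestamps.popleft()

    def is_attack(self):
        # Average rate over the window exceeds the threshold?
        return len(self.timestamps) / self.window > self.threshold_rps

# Simulate a burst of 10 requests arriving at 20 requests per second.
monitor = SlidingWindowRateMonitor(threshold_rps=5)
for i in range(10):
    monitor.record_request(now=100.0 + i * 0.05)
print(monitor.is_attack())  # True: 10 requests inside a 1s window >> 5 rps
```

At Fastly's reported peak of 1.7 million requests per second, the same idea applies with the counting pushed down into distributed edge counters rather than a single in-memory deque.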
Describing the attack pattern, the report states, "Bringing data from both attacks together reveals that while the majority of the traffic came from the Netherlands, the United States, Germany, and Indonesia, each of the rules automatically created to mitigate the attack featured one additional country (France, China, or the United Kingdom). This appears to be a concerted effort by the attacker to hide their tracks."

Despite the massive scale, Fastly confirmed that "the customer experienced no downtime or latency impacts and our proprietary Attribute Unmasking technology still honed in on their attack characteristics."

Broader trends

Overall, Fastly counted 77,451 individual DDoS "events" in June, which is just eight fewer than the previous month. The company notes that "if we were to evenly distribute events in June, we'd have seen almost two attacks every minute." The report also highlights that while enterprises accounted for the largest volume of attack traffic due to the major incident, the majority of attack "events" targeted small and medium businesses, particularly those in the media and entertainment sector. Fastly's analysis suggests this industry remains a frequent target, "possibly because this industry is the most likely to gain the unwanted attention of attackers who disagree with content published on their sites."

Mitigation strategies

Fastly reviewed how its DDoS Protection rules were triggered, noting consistent patterns in the use of IP address and geolocation across recent months – with geolocation included in 67% of rules in May. The June report shifted focus to the use of JA4 signatures, a type of TLS client fingerprint. "While it isn't uncommon for JA4s to be shared amongst completely legitimate requests, when combined with other parameters, they create an effective lens through which we can identify an attacker," the report explained. Notably, one JA4 signature featured in 17% of all rules for June.
Analysis found this was linked to a botnet with significant distribution and a focus on customers in European news agencies and hyper-regional platforms. Based on its activity, Fastly referred to the likely perpetrator as the "Byline Banshee," explaining that "their attacks have been quite noisy, just as the wailing spirit the name comes from. We'll keep an eye on whether the Byline Banshee makes a resurgence in future months!"

Actionable guidance

"It's important to note that this report only represents one month of data and should be used with first-party insights from your observability tools and longer-term research to create a comprehensive view. However, from this data alone, there are a few key learnings you can integrate into your existing security efforts:

- Ensure your defence is robust enough to handle application DDoS attacks at the scale of 1 billion+ RPS. While in the past we've seen attacks of this size target the largest Enterprise customers on our platform, June's attack on an organisation of Commercial size makes it clear that just because those organisations make less revenue, they're no less likely to receive the unwanted attention of attackers.
- Consider leveraging signatures like JA4 to identify attackers (or leveraging products like Fastly DDoS Protection that automatically incorporate them in rules). While not a novel concept, this provides yet another lens to look at attacks through and accurately separate the traffic without impacting legitimate users.
- Be mindful of how you're leveraging geo-based decisioning if you're still manually creating rules or rate limits (or shift to automatic rule creation). As seen in the Byline Banshee's attacks this month, the vast majority of traffic came from countries that don't fit the nation-state definition.
- Automatically mitigate disruptive and distributed attacks."
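The report's point about combining a JA4 fingerprint with other parameters can be sketched in a few lines. The rule below is hypothetical (the fingerprint value and country list are invented, and real mitigation rules are generated automatically from live telemetry), but it shows why the combination matters: a JA4 shared with legitimate clients only becomes a useful block signal once paired with a second attribute such as geolocation:

```python
# Hypothetical mitigation rule pairing a JA4 TLS fingerprint with
# geolocation. Fingerprint and country codes are invented examples.
BLOCK_RULE = {
    "ja4": "t13d1516h2_8daaf6152771_e5627efa2ab1",
    "countries": {"NL", "US", "DE", "ID", "FR"},
}

def should_block(request, rule=BLOCK_RULE):
    """Block only when BOTH signals match: a shared JA4 alone can
    belong to legitimate clients, so it is never sufficient by itself."""
    return (
        request.get("ja4") == rule["ja4"]
        and request.get("country") in rule["countries"]
    )

# A request matching both signals is blocked; a legitimate client that
# happens to share the fingerprint but not the geolocation is not.
attack = {"ja4": "t13d1516h2_8daaf6152771_e5627efa2ab1", "country": "NL"}
legit = {"ja4": "t13d1516h2_8daaf6152771_e5627efa2ab1", "country": "NZ"}
print(should_block(attack), should_block(legit))  # True False
```

This is also why the report cautions against purely geo-based decisioning: either attribute alone over-blocks, while the intersection separates attack traffic without impacting legitimate users.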


Techday NZ
18-07-2025
AI revolution brings innovation & anxiety across global sectors
Artificial Intelligence (AI) is swiftly transforming the way businesses operate, heightening both innovation and complexity across sectors. As AI Appreciation Day brings global attention to these advances, industry leaders across the Asia-Pacific (APAC) region and beyond are urging organisations to not only recognise AI's achievements but also navigate its associated challenges with care, strategy, and inclusivity.

AI - not just a tool but a vital resource

Daniel Hein, Field CTO at Informatica Asia Pacific and Japan, highlights the rapid adoption of Generative AI in APAC, noting the region's ambition to stay ahead on the global stage. "AI goes beyond just a tool – it's a vital resource. It supports smarter adaptation measures, accelerates resilience planning, and fosters inclusive innovation that transcends industries and national boundaries," says Hein. He outlines the influence of Agentic AI in optimising decision-making for sectors such as energy and finance, while Generative AI is democratising problem-solving, empowering even non-technical users. However, Hein stresses that organisations must strengthen data foundations and prioritise continuous upskilling, especially in AI literacy. "Investing in strong data infrastructure and continuous upskilling is critical to fully harness AI's capabilities and future-proof organisations," he advises.

AI extends beyond productivity gains

AI's influence extends well beyond productivity gains. In cybersecurity, Bernard Montel, EMEA Technical Director and Security Strategist at Tenable, describes how AI is integral to modern defence strategies. Montel points to the technology's ability to analyse vast datasets, automate threat detection, and maintain secure systems. He emphasises that AI should be viewed as an augmenting tool, not a replacement for human talent.
"This means designing systems where AI handles repetitive tasks, freeing humans to focus on complex problem-solving, innovation, and ethical oversight," Montel explains. Yet, as AI grows in sophistication, so do the threats. The proliferation of AI-generated deepfakes and automated malware has heightened the need for robust cyber defence strategies, including exposure management and comprehensive employee training. Montel urges organisations to embrace ethical AI development and continued vigilance to unlock the technology's full potential while protecting digital infrastructure.

The benefit and risk of AI development

The software development sector has also felt AI's rapid impact. Nick Durkin, Field CTO at Harness, acknowledges the increased speed of code generation delivered by AI, but warns of a wider 'blast radius' of errors if flawed AI-generated code enters production. "Generating code is easy, but getting it safely into production is the hard part," Durkin notes. He advocates for embedding AI throughout the software delivery lifecycle, not just in code writing, to support higher-quality and safer software deployments.

Meanwhile, Michael Bachman, Head of Architecture and AI Strategy at Boomi Innovation Group, points to the emergence of the "Agent Economy," in which autonomous AI agents operate across platforms. With the increasing complexity and the risk of "agent sprawl," Bachman stresses the necessity for a centralised framework to govern AI agents and manage their interactions safely and effectively.

AI adoption grows, but anxiety remains

The legal field is no exception to AI's reach. Jennifer Poon, Legal Solutions Director at NetDocuments, observes that lawyers themselves are now leading technology adoption, recognising AI's potential to boost productivity by automating routine work.
She recommends integrating AI directly into the workflows and systems lawyers already use, ensuring the technology enhances precision and security without disrupting established protocols.

Model Context Protocol (MCP), described by Mehdi Goodarzi, Global Head – GenAI Consulting at Hexaware Technologies, offers a promising way to scale AI responsibly across enterprises. MCP enables large language models and agents to share context and interact "intelligently" across platforms, but Goodarzi highlights the need for ongoing development of governance and privacy standards as the technology matures.

Despite the enthusiasm among business leaders, research from WalkMe has uncovered significant employee anxiety about AI adoption. Vivek Behl, VP Strategy, warns that many workers, especially from younger generations, feel overwhelmed and left behind by the swift arrival of new tools. Survey data from WalkMe and Opinium indicates that 71% of UK office workers feel new AI tools are being introduced faster than they can learn to use them, while nearly half report feeling more worried than excited about AI at work. Behl calls on organisations to focus on employee experience and support, warning that "digital adoption isn't just a nice-to-have – it's now essential to realising AI's full value." Ensuring that employees are comfortable and proficient with new tools is crucial to embedding AI successfully and mitigating risks associated with rapid technological change.

Success isn't measured by innovation alone, but responsible integration

As AI matures, its appreciation day serves as a timely reminder: success is not measured by technological innovation alone, but by an organisation's ability to integrate AI responsibly, support its workforce, and safeguard against new risks. The coming years will likely see even more transformative opportunities - and challenges - as businesses seek to balance AI's promise with prudent governance and a focus on human-centred values.


Techday NZ
16-07-2025
AI in education & recruitment must support, not replace, humans
Leaders across education and recruitment are highlighting the importance of human engagement and ethical judgement in the age of artificial intelligence, coinciding with this year's AI Appreciation Day. Their remarks reflect an urgent global conversation about maintaining a balance between technological advancement and the core values of human agency within vital sectors such as education and employment.

Mr Siva Balakrishnan, Founder and CEO of Vserve, marked the occasion by addressing AI's transformative impact on education. He asserted, "On AI Appreciation Day 2025, we celebrate AI's power to transform education while safeguarding human agency. AI should elevate educators' creativity and judgment, not replace them." Mr Balakrishnan advocated for a harmonious integration, stating, "By blending smart automation with empathetic teaching, we can use AI to scale learning without diminishing our human essence. Our goal must be to empower people to direct technology, ensuring classrooms where technology amplifies human insight and values."

His comments reinforce a growing belief among education technologists and pedagogical experts that AI, while capable of automating administrative tasks and personalising learning, must be deployed so as to augment, rather than overshadow, the crucial role of teachers. The focus, Mr Balakrishnan emphasised, remains on cultivating environments where human insight, compassion, and ethical decision-making are amplified by technology rather than replaced by it.

Echoing these sentiments, Mr Praveen Joshi, Managing Director and Founding Member of RSK Business Solutions, underscored the imperative of protecting individual autonomy as automation expands its influence. "In an age where AI transforms education and automation becomes pervasive, safeguarding human agency is vital. Technology must support, not supplant, our ability to choose, create, and connect," he explained.
"As machines reshape learning, it is up to us to ensure they amplify our humanity rather than diminish it, keeping compassion, creativity, and ethical judgement at the forefront."

Industry analysts note that as education systems worldwide increasingly turn to artificial intelligence for adaptive learning and efficiency improvements, debates persist regarding transparency, accountability, and the preservation of essential human qualities. The comments by Mr Balakrishnan and Mr Joshi stress a need for critical policy development to ensure that schools and institutions use AI responsibly, setting boundaries that preserve educators' unique abilities and responsibilities.

On the recruitment front, Joel Delmaire, AI leader and Chief Product Officer at JobAdder, has gathered perspectives from leading Australian recruiters considering the disruptive role AI is playing in their industry. Delmaire's discussions with sector professionals sought to illuminate both the opportunities and challenges AI presents for talent acquisition and workforce management, as recruiters and employers increasingly turn to automation for candidate screening, shortlisting, and engagement.

Recruiters contacted for Delmaire's report were broadly optimistic about AI's role in streamlining repetitive tasks, such as parsing CVs, scheduling interviews, and improving the match between candidate skills and job requirements. However, several stressed that AI, if left unchecked, risks inadvertently introducing bias or missing out on the intangible qualities - such as emotional intelligence, adaptability, and teamwork - that human recruiters are best placed to detect. There is a growing call for ongoing human oversight and regular audits of AI-powered recruitment tools, to ensure fairness and uphold the principles of diversity and inclusion.
As sectors including education and recruitment undergo rapid evolution under AI's influence, the voices of industry leaders and practitioners emphasise the importance of keeping human values at the centre of technological progress. Their perspectives reinforce that AI's greatest potential may not lie in replacing human roles, but in serving as a powerful ally - supporting creativity, upholding ethical standards, and ensuring that core human skills continue to define excellence in both classrooms and workplaces.