Only 44% of CIOs seen as AI-savvy by CEOs, survey finds

Techday NZ, 06-05-2025

A recent survey by Gartner has revealed that only 44% of Chief Information Officers (CIOs) are viewed by their Chief Executive Officers (CEOs) as being "AI-savvy."
The Gartner CEO and Senior Business Executive Survey gathered responses from 456 CEOs and other senior executives around the world between June and November 2024, focusing on CEO perceptions of their organisation's executive teams regarding artificial intelligence (AI).
According to the survey, 77% of CEOs believe that AI is ushering in a new era of business. Despite this, a considerable number of CEOs do not believe that their technology leaders—particularly their CIOs, Chief Information Security Officers (CISOs), and Chief Data Officers (CDOs)—possess the necessary knowledge and capabilities to support, drive, or accelerate business outcomes amid rapid AI evolution.
David Furlonger, Distinguished Vice President Analyst and Gartner Fellow, commented on these findings, stating, "We have never seen such a disproportionate gap in CEOs' impressions about technological disruption. AI is not just an incremental change from digital business. AI is a step change in how business and society work. A significant implication is that, if savviness across the C-suite is not rapidly improved, competitiveness will suffer, and corporate survival will be at stake."
The survey noted that CEO concerns over the technology expertise of their executive teams are not new.
In the 2019–2020 iteration of the Gartner CEO survey, CEOs already considered their executive teams' technological knowledge insufficient for the demands of the digital era. The latest findings indicate that this perceived gap in executive preparedness is widening as AI takes on a more transformative role.
CEOs identified two primary barriers to effective AI deployment: the organisation's inability to hire enough skilled people, and the difficulty of calculating the value and outcomes of AI initiatives. These were seen as the top two factors limiting the advancement and use of AI in business contexts.
Jennifer Carter, Principal Analyst at Gartner, elaborated on how CEO perspectives are shifting in response to AI: "CEOs have shifted their view of AI from just a tool to a transformative way of working. This change has highlighted the importance of upskilling. As leaders recognize AI's potential and its impact on their organizations, they understand that success isn't just about hiring new talent. Instead, it's about equipping their current employees with the skills needed to seamlessly incorporate AI into everyday tasks."
This emphasis on upskilling is part of a broader strategic effort as organisations seek to adapt and thrive within the changing business environment. According to the survey, 66% of CEOs reported that their current business models are not fit for AI purposes, underscoring the necessity for improved AI understanding and capability across all mission-critical priorities.
The study also suggests that C-suite executives, beyond those in technology roles, require greater education and training in AI-related competencies as CEO expectations increase and AI becomes more integrated into overall business strategy.
The findings are expected to inform business leaders, human resources departments, and executive teams about the growing importance of AI proficiency and adaptability within their organisations, particularly with respect to technology adoption and implementation leadership.
These results are included in the broader Gartner research programme titled "2025 CEO Survey — The Year of Dynamic Capacity," which explores how senior executives are addressing challenges and opportunities arising from technological and societal changes.


Related Articles

‘I wanted to be a teacher not a cop': the reality of teaching in the world of AI

The Spinoff

2 hours ago



As AI becomes more ubiquitous, teachers across the board are struggling to adapt. Hera Lindsay Bird reports. *All names in this article have been changed. Julia*, a New Zealand teacher working at an American university, doesn't explicitly forbid her students from using AI. The top-ranked public university where she teaches academic writing has told her not to. Instead, the students in her classroom must abide by two simple rules. They must ask for permission. And they must provide the LLM prompts, transcripts, and a written reflection on how AI contributed to their work. Of her class of approximately 20 students, several have acknowledged using ChatGPT. It's possible others have managed to sneak AI past her undetected, but she's not so worried about that, because it means 'on some level they got the point of the class, which was about adapting writing to different audiences, and as the audience for their assignments I was an openly seething AI hater.' But there are two repeat offenders who continue to hand in essay after essay of LLM-generated writing. The essays are 'bloviating nonsense,' complete with fake quotes and made-up sources. The two students repeatedly dodge her requests for in-person meetings. They only reluctantly agree to meet through Zoom. When she questions them about their use of ChatGPT, they lie, saying they only ever used it as a brainstorming tool. Only when she manages to 'circle them in conversation' do they grudgingly give her a fuller account of how they produced the work. Julia is sympathetic to the pressures her students are under. She's patient with them. She painstakingly walks them through the assignment step by step, offering suggestions for how they might resubmit their work. 'What was really sad was the fact it didn't seem to stick,' she says.
'I got final AI assignments in the end, and had to again chase them for versions they had at least edited using their own language, righteously incandescent with frustration like Hades in Disney's Hercules.' She passes the students with low marks. She's not convinced the university would back her up if her students contested their failing grades. 'My main beef with AI is that it made me into a grown adult asshole who had 18-year-old enemies,' she says. 'I wanted to be a teacher not a cop.' In the last few years, LLMs and generative AI have graduated from a novelty tool that generates attractive women with more than the medically recommended number of thumbs, to something that can write a convincingly mediocre essay on the role of ambiguity in Henry James's The Turn of the Screw. Two weeks ago, James D. Walsh published a piece in New York Magazine called 'Everyone is Cheating Their Way Through College' about how students and teachers in the US college system are dealing with education in a post-LLM world. The article painted a grim picture of the ubiquity of students using generative AI to cheat on their assignments, claiming, as per the subtitle, 'ChatGPT has unraveled the entire academic project'. New Zealand universities are facing similar problems. In May, Victoria University of Wellington made the decision to enforce a handwritten exam policy for two third-year law papers in an attempt to prevent student cheating. Days after the announcement, the university backpedalled. The Post reported that, in response to student concerns, the university had settled on a compromise. Half of the exam would comprise a handwritten multi-choice test. Students would still be permitted to use their laptops for the second half of the exam. Use of AI was still strictly forbidden and would be 'enforced through active and enhanced invigilation.' Universities aren't the only ones struggling with the new reality.
Almost every educator I spoke with, from primary school teachers to those supervising postgraduate dissertations, raised serious concerns, with some teachers estimating that up to 80% of their students relied on ChatGPT to complete assignments. I spoke to MA supervisors whose history students' theses were riddled with fictitious sources and 'archival' Midjourney photographs, and primary and intermediate school teachers, who said students as young as 11 were using it to answer simple personal prompts, such as 'what did you do in the summer holidays?' and 'what was your favourite penguin in the text?' All of this raises the question: what's the point of an education? Is this the inevitable result of an increasingly consumerist tertiary sector, where students are simply paying for credentials in exchange for entry into the job market? Should we learn to embrace AI as an educational inevitability, in the way that we've come to accept the pocket calculator? Are we being unnecessarily catastrophist? Ben*, an English teacher at a North Island all-boys school, says, 'I find it really frustrating to read boosterish pieces about it in the press because they all assume that students will use it to learn. AI is seen as offering kids autonomy over their own learning, but this is not an autonomy they know what to do with! Students are using it to solve their problems and get answers, not to think with.' Ben worries that information literacy is falling by the wayside. 'Traditional research and evaluation skills are totally bypassed by AI,' he says. 'If [a student] asks 'what is evidence about how veganism is unhealthy', it will supply them with exactly that, without them having to do any sleuthing, comparison or evaluation'. The greatest concern from teachers was the atrophying effect it was having on students' critical thinking capacities.
Julia from the undergraduate writing programme says, 'They can't get their head around why authorship matters, so they can't tell slop from scientific journals.' This problem is further complicated by the fact that 'reputable journals are accepting GPT bollocks that slips through'. Lily*, a lecturer in the arts at a large New Zealand university, predicts that her colleagues will be shocked come exam season. 'There is going to be a vast difference between what has been submitted for take home assignments and what students will be able to produce in exams.' There is an argument to be made that ChatGPT can level the playing field, especially for students who struggle to translate their learning into academic language. But as other teachers were quick to point out, over-reliance on this technology actually has the potential to widen the gap. Mark*, a special education teacher working at a US high school, uses the example of a student with a reading disability. 'If a student has difficulty reading, difficult reading frustrates them. They can just tell the bot to read the article and answer the questions. They'll get the correct answers… and of course their special ed documentation will say they're making progress when actually they aren't'. Ben is adamant that AI is a tool being utilised in unhelpful ways. 'When all students have AI, why would they choose to do the hard task when they can make it three times as easy in an instant? Again we assume students understand learning in a more holistic sense and that they know what's best for their own academic progress. But that just isn't true. Students will (like all humans!) take the easy route out most of the time, and by enabling this, they're missing out.' So what, precisely, are teachers supposed to do about this problem? 
Unlike the plagiarism detectors of the previous academic generation, there doesn't seem to be any surefire way to prove that LLMs have been used, even though most teachers felt confident in their ability to detect the linguistic hallmarks of ChatGPT, citing essays that sound like 'management reports' chequered with phrases like 'these themes converge' or 'a conceptual alignment is revealed'. One teacher noted the 'degrassi era syntax'. Preventing students from using AI is becoming increasingly difficult, as the technology becomes more ubiquitous. Even if teachers ask for handwritten essays, or check over a Google document's revision history in an attempt to rumble obvious copy paste jobs, students are quick to adapt. 'They could get AI to rewrite it in the style of a 15-year-old, complete with spelling errors,' says Ben. Or painstakingly copy the LLM's essay by hand. So far, the only reliable method of preventing students from using AI is to revert to technology-free classrooms, with handwritten or oral examinations, but even this solution is potentially short-lived, as technology like Google Glass becomes more mainstream. Some teachers have gone to great lengths to identify students using AI, like the lecturers in the New York Magazine piece, who hid 'gotcha' instructions in their assignments, such as including 'mention Ariana Grande' in white font to identify students who pasted the assignment brief directly into ChatGPT. But strategies like this have a limited shelf life, as students communicate and learn to adapt. The efforts of teachers to prevent the use of LLMs are often undermined by the educational institutions themselves. Many of the assessors I spoke to said that even when they had identified something they were 99% sure was written by generative AI, they were often powerless to do anything about it. Some teachers were explicitly discouraged from banning AI. 
Others had banned AI, but the policy was effectively unenforceable, as the use of AI is notoriously hard to prove, and the universities feared complaints or litigation. Many teachers I spoke to felt as if they had been left to navigate the grey areas alone. Unsurprisingly, many teachers are becoming increasingly disillusioned. Julia said dealing with AI took up half of her mental load, 'due to the cat-and-mouse of it all'. Another teacher, Jo*, who has been teaching undergrad at a large New Zealand university for around a decade, says policing it is a nightmare. 'It takes up so much time and our workload calculations haven't changed to account for this'. It's not just a physical toll for Jo either. 'Dealing with student work which is entirely AI-written is by a wide margin the most discouraging thing I've experienced working in academia, and given how many very discouraging things have happened lately that's really saying something.' Concerns over AI aren't limited to educators. I spoke to a few students, who were dismayed to discover their essays were being marked by AI. 'There is a trend for teachers to use it for marking but I'm not a fan,' says Ben. 'Marking is about finding out where your students are at so you can sequence your lessons. Taking the teacher out of the equation is antithetical to good teaching. It's like driving with your eyes closed.' Other teachers have begun using AI as a lesson planning tool. For some, this is an effective time-saving measure and eases the burden on a sector that is notoriously under-resourced and overworked. As Ben is quick to point out, teachers using AI to plan lessons is hardly equivalent to students using it to answer questions. Teachers, he points out, have the competence and skill base required to use AI 'selectively and thoughtfully'. But the outcomes aren't necessarily positive.
One witness reported a new entrant teacher getting students to write descriptions of monsters, before entering the descriptions into ChatGPT to create illustrations. But the generated pictures were inappropriately terrifying. To quote the observer: 'Why can't they draw the fucking monsters?' The news isn't all bad. I spoke to some teachers who said they'd observed positive use of LLMs by students to further their learning, like a Romanian sociology lecturer who saw students practising statistical analysis by generating fictitious interview subjects. Others, like Rachel*, a teacher at an international primary school in Malaysia, say that LLMs are a useful tool for translation. 'For some kids, it has really improved their creativity. If it's used effectively and taught well, it could revolutionise teaching.' But by and large, the mood was depressing. Are we staring down the barrel of a future in which we're simply teaching one robot how to talk to another? Ben, the high school teacher, suggested a 'de-teching' of the classroom was inevitable, and that he and his colleagues have already seen positive interpersonal results from the government's school cellphone ban. 'We need to start thinking about what skills we want humans to have, regardless of their utility in an economy that will slowly be replaced by AI. There's no point trying to predict the jobs of the future. Let's think: what are the things we want everyone in our society to know and understand? What are the key skills, human and disciplinary, that we want everyone in our society to work on? What kind of traits and characteristics do we want to encourage for the good of the individual and the community? That's where we need to start,' says Ben. 'It's in our hands to make a more robust and more human education system. In fact, we have to. If we don't, and if we allow kids to outsource to AI how to think, write, imagine, and work, we'd be doing them a generational disservice.'
Julia, despite her struggles, is quick to caution against hyperbole. She believes claims that everyone is using AI are 'highly exaggerated', and the people pushing these products are incentivised to promote the idea that 'not to cheat is to get left behind'. Most of her students, she says, are 'honestly trying to get by on their own thought and expression. So it's certainly not a lost battle we should just give up on due to perceived ubiquity of the tech'. Most teachers agree that the Rubicon has been crossed, and complete abstinence from AI tech is no longer possible. But providing students with the tools to think critically and parse fact from fiction is perhaps more urgent than ever, in a world that is increasingly full of bad actors and disinformation. As Julia says, 'eventually they'll need to rub two brain cells together in a circumstance where the stakes feel real to them, and the less they've practiced thinking and expressing their thoughts the more cringe their failures will be.'

Cloudera joins AI-RAN Alliance to boost AI in telecoms sector

Techday NZ

2 days ago



Cloudera has joined the AI-RAN Alliance, a global consortium focused on integrating artificial intelligence into telecommunications infrastructure, with particular relevance for service providers in Australia and New Zealand. The AI-RAN Alliance, whose founding members include NVIDIA, also counts Dell, SoftBank, T-Mobile, KT and LG U+ among its participants. The group aims to address the integration of AI within current and emerging telecommunications networks and to standardise the use of AI for optimising shared infrastructure, accelerating edge AI application development, and providing reliable deployment models for AI in telecoms. Cloudera's entry into the alliance follows a period of growing interest among telecommunications providers in using AI to optimise network operations and reduce operational costs. Virtualisation and new infrastructure architectures are key drivers, and AI is seen as an important means of improving service efficiency and enabling new business opportunities for operators. The complexities associated with deploying AI at scale across distributed edge environments present significant challenges for the sector. Telcos need to take an enterprise-wide approach to operationalising these technologies within the radio access network (RAN) if they are to unlock commercial benefits. As a member of the AI-RAN Alliance, Cloudera will participate in the 'Data for AI-RAN' working group, which is tasked with standardising data orchestration, large language model-driven network automation, and hybrid-enabled MLOps across telecommunications and AI workloads. According to the company, this involvement aims to align data and AI pipeline development with operational requirements, thereby supporting quicker innovation and the deployment of AI-native use cases. Cloudera will also support the Alliance's three stated objectives (AI-for-RAN, AI-and-RAN, and AI-on-RAN) and will work to accelerate the use of AI in real-world scenarios.
Potential applications include service level agreement-driven network availability and real-time anomaly detection. The company plans to develop and evaluate reference architectures that telecoms operators can deploy in live environments, facilitating shorter development cycles and improving collaboration around model reusability. Another focus is demonstrating the use of Cloudera's platform for real-time decision-making at the network edge. This will involve enabling scalable preparation of training data and MLOps, as well as operationalising AI inference at scale, while maintaining governance and edge-to-core orchestration. Keir Garrett, Regional Vice President for Cloudera Australia and New Zealand, said, "Joining the AI-RAN Alliance enhances our ability to drive innovation and operational excellence for telecommunications providers across Australia and New Zealand. Leading telcos are already leveraging AI to optimise networks, improve engagement, and streamline operations, with edge computing enabling scalable transformation. Now, we're focused on guiding them through the next phase: unlocking greater value while future-proofing infrastructure. Just as smart highways enhance outdated roads, this shift ensures telcos meet growing user demand with speed, reliability, and adaptability, paving the way for the future of connectivity." Speaking about the company's contribution to the Alliance, Abhas Ricky, Chief Strategy Officer at Cloudera, commented, "Cloudera is proud to bring its data and AI expertise to the AI-RAN Alliance. The network is the heart of the telecom business, both in driving margin growth and in service transformation, and AI can unlock substantial value across those dimensions. Given our leadership in the domain, having powered data and AI automation strategies for hundreds of telecommunications providers around the world, we now look forward to accelerating innovation alongside fellow AI-RAN Alliance members and bringing our customers along.
Our goal is to help define the data standards, orchestration models, and reference architectures that will power intelligent, adaptive, and AI-native networks of the future." Jemin Chung, Vice President Network Strategy at KT, said, "We are proud to collaborate with Cloudera and fellow AI-RAN Alliance members in the 'Data for AI-RAN' working group. As AI becomes increasingly central to next-generation networks, the ability to harness data securely and at scale will be a key differentiator. Through this initiative, we look forward to defining best practices that enable AI-centric RAN evolution and improve operational intelligence." Dr Alex Jinsung Choi, Principal Fellow, SoftBank's Research Institute of Advanced Technology, and Chair of the AI-RAN Alliance, said, "Cloudera is an incredible addition to the AI-RAN Alliance, which has grown rapidly as demand for improved AI access and success increases across the industry. The company's leadership in data and AI, combined with their extensive telecommunications footprint, will play a vital role in advancing our shared vision of intelligent, AI-native networks."

Bitdefender Launches Powerful Compliance Management Solution Unified With Endpoint Security

Scoop

3 days ago



Designed as an add-on to Bitdefender GravityZone, the company's flagship unified security and risk analytics platform, GravityZone Compliance Manager minimises complexity by unifying compliance, risk, and security operations in a single platform.

GravityZone Compliance Manager Reduces the Cost and Complexity of Regulatory Compliance Requirements Across All Environments and Industries

Bitdefender, a global cybersecurity leader, today announced GravityZone Compliance Manager, a new addition to its GravityZone platform that helps organisations reduce the burden of compliance and streamline audit readiness. Designed specifically for today's complex regulatory landscape, the solution provides real-time visibility, automated remediation, audit-ready reports, and one-click compliance documentation fully integrated with Bitdefender endpoint security and risk analytics.

'GravityZone Compliance Manager performed well for us during early access. The continuous monitoring and assessment feature reduced our reliance on manual scans, saving valuable time. Because it's integrated into our existing security stack, we've avoided the additional cost and complexity of using external tools. It has simplified our operations by eliminating the need for multiple point solutions,' stated Alin Paunescu, chief information security officer at Patria Bank.

In recent research, Gartner® recommends that organisations, 'Combine compliance and risk management effectively by prioritising the implementation of impact-based assessments and automated, continuous monitoring capabilities¹.' With regulations like GDPR, PCI DSS, NIS2 and DORA introducing stricter penalties, organisations can no longer afford fragmented or manual compliance approaches. The financial consequences of non-compliance are severe, with fines of up to €20 million or four per cent of global annual turnover under GDPR and US$100,000 per month under PCI DSS.
These penalties come in addition to the significant reputational damage that often follows regulatory violations. Regulatory demands are increasing, but most organisations still rely on fragmented tools and manual processes. GravityZone Compliance Manager delivers real-time compliance scoring, automated reporting, and guided remediation, all without the need for specialised in-house expertise.

Key Benefits of GravityZone Compliance Manager:

Automated Audit-Ready Reports in Seconds – Instantly generate compliance reports aligned with auditor requirements using existing Bitdefender tooling. GravityZone Compliance Manager simplifies audit preparation by automating evidence collection and removing reporting complexity. Reports are structured for auditor review and include an executive summary of the organisation's overall compliance score, a breakdown of compliant versus non-compliant checks, and a risk overview detailing the severity of high, medium, and low risks.

One Platform for Security, Risk Management, and Compliance – GravityZone Compliance Manager builds on Bitdefender's unified platform by adding compliance management to a foundation that already includes prevention, detection, response, and risk analytics. Combined with Bitdefender Proactive Hardening and Attack Surface Reduction (PHASR), which proactively reduces exposure by disabling unused or risky system tools, organisations can both harden their environments and stay continuously aligned with compliance requirements. When risks are resolved, compliance status updates automatically, which streamlines operations and improves organisations' cybersecurity posture.

Supports Major Industry and Geo-Specific Compliance Standards – GravityZone Compliance Manager provides immediate visibility into endpoint compliance posture and streamlines regulatory alignment with out-of-the-box support for major frameworks—including region and industry-specific standards such as GDPR, HIPAA, DORA, NIS 2 Directive, PCI DSS, SOC 2, ISO 27001, CISv8, CMMC 2.0 and more. Businesses can quickly identify and remediate compliance gaps with a single click and can drill down further into specific standards or benchmarks to view detailed information on associated risks and affected assets.

'The consequences of non-compliance, including financial loss, operational disruption, and reputational damage, rival those of a data breach or ransomware attack, yet most businesses lack the resources or specialised talent needed to manage compliance with confidence,' said Andrei Florescu, president and general manager of Bitdefender Business Solutions Group. 'GravityZone Compliance Manager is a game-changer that consolidates compliance, risk management, and endpoint security on a single platform, enabling businesses to meet regulatory demands effortlessly and reduce complexity to strengthen cyber resilience.'

Availability

Bitdefender GravityZone Compliance Manager is available now for new and existing customers. All Risk Management users receive automatic access to a basic standard with real-time insights and best-practice guidelines. A full Compliance Manager add-on license unlocks support for advanced compliance frameworks, detailed scoring, full compliance visibility, and exportable reports. For more information visit here.

Notes: ¹Gartner, 2025 Strategic Roadmap for Cyber GRC, Jie Zhang, Michael Kranawetter, October 4, 2024. Gartner is a registered trademark of Gartner, Inc. and/or its affiliates and is used herein with permission. All rights reserved.
* Legal Notice: Bitdefender GravityZone Compliance Manager features and reports are designed to help organisations with compliance-related security activities, in particular with assessing and helping maintain compliance with the listed standards and baselines, but they can neither fully replace internal efforts nor guarantee that an organisation will pass a compliance audit. Bitdefender recommends working with an approved auditor to obtain any official compliance certifications.

About Bitdefender

Bitdefender is a cybersecurity leader delivering best-in-class threat prevention, detection, and response solutions worldwide. Guardian over millions of consumer, enterprise, and government environments, Bitdefender is one of the industry's most trusted experts for eliminating threats, protecting privacy, digital identity and data, and enabling cyber resilience. With deep investments in research and development, Bitdefender Labs discovers hundreds of new threats each minute and validates billions of threat queries daily. The company has pioneered breakthrough innovations in antimalware, IoT security, behavioural analytics, and artificial intelligence, and its technology is licensed by more than 180 of the world's most recognised technology brands. Founded in 2001, Bitdefender has customers in 170+ countries with offices around the world. For more information, visit
