
C-Suite misalignment slows GenAI adoption
The findings reveal a misalignment among C-Suite leaders on business goals and operational readiness for GenAI deployment.
The report draws on an NTT Data survey of more than 2,300 senior GenAI decision makers, including 1,500 C-Suite leaders across 34 countries. It found that while CEOs and business leaders are committed to GenAI adoption, CISOs (chief information security officers) and operational leaders lack the guidance, clarity and resources needed to fully address the security risks and infrastructure challenges associated with deployment.
Nearly all (99%) C-Suite executives are planning further GenAI investments over the next two years, with 67% of CEOs planning significant commitments.
In parallel, 95% of CIOs (chief information officers) and CTOs (chief technology officers) report that GenAI has already driven, or will drive, greater cybersecurity investments, with organisations ranking improved security as one of the top three business benefits realised from GenAI deployment in the last 12 months.
Yet, even with this optimism, there is a notable disconnect between strategic ambition and operational execution, with nearly half of CISOs (45%) expressing negative sentiments toward GenAI adoption.
More than half (54%) of CISOs say internal guidelines or policies on GenAI responsibility are unclear, yet only 20% of CEOs share the same concern – revealing a stark gap in executive alignment.
Despite feeling cautious about the deployment of GenAI, security teams still acknowledge its business value.
In fact, 81% of senior IT security leaders with negative sentiments still agree that GenAI will boost efficiency and improve the bottom line.
NTT Data's research further reveals a critical gap between leadership's vision and the capabilities of their teams.
While 97% of CISOs identify as decision makers on GenAI, 69% acknowledge that their teams lack the necessary skills to work with the technology.
In addition, only 38% of CISOs say their GenAI and cybersecurity strategies are aligned compared to 51% of CEOs.
Adding to the complexity, 72% of organisations surveyed still lack a formal GenAI usage policy and just 24% of CISOs strongly agree that their organisation has a robust framework for balancing risk with value creation.
Beyond internal misalignment, 88% of security leaders said legacy infrastructure is greatly affecting business agility and GenAI readiness, with modernising IoT, 5G and edge computing identified as essential for future progress.
To navigate these obstacles, 64% of CISOs are prioritising co-innovation with strategic IT partners rather than relying on standalone AI solutions.
Notably, security leaders' top criterion when assessing GenAI technology partners is end-to-end GenAI service offerings.
'As organisations accelerate GenAI adoption, cybersecurity must be embedded from the outset to reinforce resilience. While CEOs champion innovation, ensuring seamless collaboration between cybersecurity and business strategy is critical to mitigating emerging risks,' said Sheetal Mehta, senior vice-president and global head of cybersecurity at NTT Data Inc.
'A secure and scalable approach to GenAI requires proactive alignment, modern infrastructure and trusted co-innovation to protect enterprises from emerging threats while unlocking AI's full potential.'
'Collaboration is highly valued by line-of-business leaders in their relationships with CISOs.
'However, disconnects remain, with gaps between the organisation's desired risk posture and its current cybersecurity capabilities,' said Craig Robinson, research vice-president, security services at IDC.
'While the use of GenAI clearly provides benefits to the enterprise, CISOs and Global Risk and Compliance leaders struggle to communicate the need for proper governance and guardrails, making alignment with business leaders essential for implementation.'

Related Articles


Free Malaysia Today
an hour ago
Baidu posts quarterly revenue drop as consumption flags
BEIJING: Chinese internet giant Baidu recorded a slight drop in quarterly revenue today, dragged down by a persistent slump in domestic spending as its push into artificial intelligence (AI) accelerates.

Beijing-based Baidu, the operator of China's top search engine, generates a significant proportion of its revenue from online ads, making its performance highly susceptible to fluctuations in the country's spending patterns.

The firm achieved revenue of ¥32.7 billion (US$4.56 billion) during the second quarter (Q2) of 2025, down 4% year-on-year, according to a statement to the Hong Kong Stock Exchange. Revenue from online marketing during the period was down 15% year-on-year to ¥16.2 billion, the statement showed.

China is facing an uncertain economic outlook as cautious consumers navigate a years-long downturn in the property market, high unemployment and trade tensions with Washington. Retail sales – a key gauge of consumer demand in China – grew at a slower rate in July than expected, official data showed on Friday.

Following years of tight regulation of the vast Chinese tech sector, Beijing is hoping that recent advancements in AI will provide the spark needed to jumpstart the domestic economy.

Baidu also said today that its net profit during Q2 was ¥7.3 billion – a 33% jump year-on-year but down 5% from the previous quarter.

The company has invested heavily in AI, placing it in an increasingly competitive race alongside China's other tech giants Tencent, Alibaba and ByteDance. It has also sought to advance its autonomous 'robotaxi' services abroad.

Baidu and Lyft announced plans this month to launch robotaxis on the rideshare app in Germany and Britain in 2026, pending regulatory approval. The firm said in a joint statement with Uber in July that it plans to offer driverless cars on the Lyft competitor's app in Asia and the Middle East this year.


New Straits Times
5 hours ago
Ethical fault line: Integrity in the age of AI
WHEN artificial intelligence arrived, many affirmed it as the next great leap in human innovation, on par with the internet's emergence in the 1990s or television's debut in the 1950s. Yet, as emphasised in the recent Awani AI Roundtable, the question is no longer whether AI will transform the media and journalism landscape. The real issue is whether we can control its power without eroding the skills and values that keep society's heartbeat steady.

This question arises because AI has already begun reshaping journalism's possibilities. From producing multi-platform content to streamlining newsroom operations, it offers near-limitless capacity to strengthen the profession. The 'sky's the limit' sentiment resonates globally, as AI can analyse trends, translate languages in seconds, edit videos and draft articles in moments. Used wisely, it can elevate journalistic thinking, freeing reporters to focus on deep investigations, data analysis and creative storytelling.

Yet, this potential comes with a cautionary note: technology is never a neutral force, for it carries the biases, values and blind spots of its creators and controllers. That is why we must be careful. Technology has long been cast as society's saviour, particularly for communities marginalised by geography, economics or politics. AI must remain a tool, not a belief system. Like any tool, AI can be used for either ethical or harmful purposes. In the wrong hands, it risks reinforcing power imbalances, spreading disinformation and establishing media monopolies – challenges that require critical human oversight.

Oversight matters because the greatest threat may not be spectacular scandals like fake news or deepfakes, but the quiet surrender of our ability to think critically. When AI delivers answers faster than human reasoning, it tempts us to bypass questioning, doubting and innovating. This vulnerability is amplified in education systems that reward compliance over curiosity.

Without reform, AI could fill this intellectual space and produce a generation proficient at writing prompts but unable to thoroughly evaluate information. Such risks deepen when AI's development and control rest in the hands of a few global tech giants. This concentration of power risks creating a new form of media colonialism, where the algorithms shaping our news flows are designed to serve commercial or political interests far beyond our borders.

For Malaysia, this is more than an abstract concern. Over-reliance on foreign AI tools, without building local large language models (LLMs), could erode our narrative sovereignty. Journalism must not remain a passive consumer of imported technology; it must become a co-creator, ensuring AI reflects local contexts, languages and values. Without this, our stories risk being reframed to suit someone else's agenda.

This concentration of technological power also reshapes the information supply chain, from data collection to content distribution, in ways that are increasingly automated and centralised. Smaller players are pushed aside, while the speed of AI-generated disinformation outpaces human fact-checking. If convincing fake videos can be produced in minutes, we will need verification systems of equal sophistication and transparency so that truth does not perpetually trail behind lies.

These risks intersect with another shift: the changing relationship between younger audiences and news. One of the most striking points from the Awani AI Roundtable was that many in Generation Z, despite being digitally savvy, actively practise news avoidance, turning away from traditional reporting altogether. This detachment leaves them more vulnerable to echo chambers of trivial content and viral falsehoods. Yet, their deep familiarity with digital tools also positions them to reinvent journalism for the platforms they inhabit.

To tap into this potential, we must draw young people into journalism not as content creators chasing virality, but as storytellers serving the public interest. This requires education reforms that integrate AI literacy, ethical reasoning and investigative skills into journalism curricula. AI can accelerate their work, but it cannot replace the human labour of uncovering truths and connecting them to communities.

Such preparation is important because the ethical fault line in journalism has never been about speed or efficiency; it is about integrity. Without firm ethical guidelines, AI could normalise corrupt, irrelevant or dangerous practices. The risk of ethical outsourcing emerges when we assume tech companies have embedded fairness and accountability into their systems. They have not. AI mirrors human prejudices and, at times, amplifies them. Only attentive human oversight can close these gaps.

This raises an urgent question: how should AI be regulated? Should the rules be set globally, nationally or through a blend of both? Global frameworks offer consistency but often fail to account for local realities. National regulations can be more responsive but are vulnerable to political manipulation. A hybrid model, blending international norms with local enforcement and shared accountability between public and private sectors, may be the most practical path.

For journalism, regulation must go beyond technical standards. It requires clear rules on transparency, such as labelling AI-generated content; consent, including whose data is used to train AI; and correction, ensuring harmful outputs can be swiftly removed.

This leads to a central principle: AI should not replace journalists but work alongside them. Imagine AI-driven fact-checking tools flagging inaccuracies in real time, or algorithms suggesting diverse sources to prevent one-dimensional reporting. These tools can strengthen journalism, but only when paired with cultural practices that prioritise human empathy, curiosity and trust; in this way, AI complements rather than replaces journalism. The most effective partnerships occur when AI supports, rather than seizes, the creative process.

Newsrooms must be proactive, with clear guidelines covering transparency, accountability and regular bias audits. Journalists should be trained not only in AI's technical uses but also in its ethical and social implications. Crucially, AI can be used to better serve marginalised audiences, delivering tailored content without sacrificing integrity or succumbing to sensationalism.

Ultimately, AI will reshape the media industry, but the deeper question is whether it will reshape us. Generation Z will inherit leadership roles, and their approach to AI will determine journalism's trajectory for decades. They must be equipped to see the forest of social consequences, not just the trees of technological proficiency.

In our rush to embrace AI, we must remember what makes journalism irreplaceable: the human ability to ask hard questions, challenge power and connect personal stories to the public good. AI can enhance this mission, but it cannot lead it. As the Awani AI Roundtable concluded, the future of journalism is not an AI-versus-human contest, but a fusion of humans using AI wisely. The sky may indeed be the limit, but our anchor must remain grounded in ethics, community and reality.

Malay Mail
5 hours ago
Water and energy are Malaysia's edge in global tech race, says Anwar
KUALA LUMPUR, Aug 20 — Prime Minister Datuk Seri Anwar Ibrahim today said Malaysia's abundance of water and energy resources has made the country an attractive destination for data centre, semiconductor and artificial intelligence (AI) investors.

For that reason, Anwar said the water and energy portfolios were placed under Deputy Prime Minister Datuk Seri Fadillah Yusof as they are 'too strategic for the country'.

'If we ask investors, their first two reasons are usually water and energy, and professional workforce is only the third reason.

'We cannot defend our credibility as a peaceful and competent country if we cannot resolve basic issues such as energy and water,' he said at the official launch of the Langat 2 water treatment plant (LRA) at Hulu Langat, Selangor today.

The prime minister also stressed the need to upgrade old pipes to reduce non-revenue water and to prepare for droughts by exploring modern, water-efficient technologies.

Constructed to meet rising demand in the Klang Valley, the Langat 2 water treatment plant can now treat up to 1,130 million litres of water per day (MLD) from Sungai Semantan in Pahang. The raw water is channelled from Sungai Semantan through a 44.6-kilometre tunnel across the Titiwangsa Range to Hulu Langat.

The Langat 2 facility will supply treated water to the Western Corridor (KL City Centre, South KL, Petaling, Sungai Besi, Kinrara and Puchong) and the Northern Corridor (Ampang, North KL from Wangsa Maju to Maluri, Keramat and Gombak). The RM4.2 billion project was funded and developed by Pengurusan Aset Air Bhd (PAAB).

The plant is equipped with modern technologies, including a granular activated carbon double-filtration system, a lamella clarifier (inclined plate settler) and real-time monitoring through a supervisory control and data acquisition (SCADA) system. It also incorporates environmentally friendly processes such as static mixing and zero discharge, and hosts Malaysia's first and longest pipe conveyor system for water treatment residuals.

Fadillah, who was also present at the launch, said Malaysia's water supply access stood at 97 per cent in 2024, while the water reserve margin in the peninsula and Labuan was at a satisfactory 14.9 per cent. He added that Malaysia aims to develop local water technologies that meet international standards under the Water Sector Transformation Plan 2040, as well as to explore the use of recycled water for non-food industries.