AI adoption in organizations: Will it hit the buffers?

Forbes · 13 May 2025
AI is developing at breakneck speed. Every day new developments and breakthroughs are announced. The big players in the West, such as Google, Microsoft, Amazon, Tesla and Nvidia, are making big bets on commercial success, and in the East China is in the mix too with DeepSeek and government-backed programs. The AI 'space race' is fully in play.
But whilst the technology is advancing rapidly, the critical question for me remains: are organisations really ready to harness AI's full potential and achieve the 15-30% productivity improvements predicted by the World Economic Forum? Or will corporate AI adoption hit a wall?
I recently led a study group, working with AI industry experts and a number of organisations from different sectors, that examined the implications of AI for jobs, people and organisational structures. The experience has been illuminating, leading me to conclude that successful AI adoption is predominantly an organisational and behavioural change challenge and less of a technological one. Yet paradoxically, many companies are treating AI as a technology deployment, handing the reins over to their CIOs.
As part of our research, we developed a simple questionnaire designed to identify AI's impact on specific jobs by examining several key factors. Participating organisations were asked to invite employees to complete this survey, but few actually did. When pressed for reasons, we discovered that they were uncomfortable even raising the subject of AI with their employees for fear of the questions that might arise.
Consider the dilemma: what happens after an employee completes the questionnaire and it reveals that their job could change radically? This triggers awkward questions: Is my job secure? Will AI reduce our headcount? What will happen to me? Will I be retrained?
Making matters worse, "AI" remains a fuzzily defined concept for many, encompassing various technologies from Large Language Models (LLMs) such as ChatGPT, Copilot, Gemini and Claude to sophisticated machine learning algorithms. This ambiguity compounds the challenge: leaders are rightly uncomfortable engaging employees on topics they themselves don't fully understand, making AI adoption tough.
These conversations become particularly fraught in organisations facing competitive pressures and declining margins, where leadership may view AI primarily as a cost-reduction tool. Companies experiencing growth might frame AI more positively as enabling increased work capacity without additional hiring. However, this narrative doesn't resonate in organisations whose leaders are explicitly seeking efficiency gains through cost base reduction.
Successful AI adoption requires, at minimum, a clearly articulated narrative and comprehensive education programme to build leadership and employee AI competence and confidence. Organisations must develop honest, coherent communication and change programs and prepare for challenging questions before initiating broader AI conversations.
Cultural trust also matters enormously in AI adoption. Organisations with high levels of trust between leadership and employees can have difficult conversations far more easily and honestly than those without it. Where leadership has demonstrated a consistent record of transparency, care and ethical change management, employees approach AI conversations with greater openness, less immediate suspicion and less 'resistance'.
Conversely, in low-trust cultures where leadership credibility has been eroded by broken promises and contradictory actions and statements, AI initiatives are likely to suffer resistance from a justifiably sceptical workforce, limiting the ability to achieve the productivity benefits enjoyed by higher-trust organisations.
Beyond cultural challenges, there are numerous technical hurdles. Many IT departments have restricted the deployment of external AI tools on a range of legitimate grounds.
These technical concerns intersect with growing ethical and legal considerations surrounding AI adoption.
A comprehensive global study by Deloitte involving over 2,700 executives found that governance, ethics and regulatory compliance are significant barriers to AI implementation, outranking even technical and skills challenges (Deloitte, "State of AI in the Enterprise", 2024).
Without comprehensive change programmes addressing education, skills, communication, employee engagement, technology, processes, governance and regulation, widespread AI adoption will be slow. Organisations will go up blind alleys, hit adoption buffers, and the promised productivity gains will be hard to achieve.
Leaders who treat AI as an opportunity for organisational renewal, systematically evolving their organisations for a new world, will reap the much-promised productivity gains and associated competitive advantage. Those who don't may well be left behind.


Related Articles

Fed's Daly Says Muted Tariff Impact May Open Door to Cut in Fall

Bloomberg

By Catarina Saraiva. Federal Reserve Bank of San Francisco President Mary Daly said she's seeing increasing evidence that tariffs may not lead to a large or sustained inflation surge, helping bolster the case for a rate cut in the fall. 'My modal outlook has been for some time that we would begin to be able to adjust the rates in the fall, and I haven't really changed that view,' Daly said Thursday in an interview on Bloomberg Television.

AI Gave The World Infinite Content—Now What?

Forbes

Tejas Manohar is the cofounder/co-CEO of Hightouch.

Just a few years ago, generative AI (GenAI) felt more like a curiosity than a tool. We asked language models to write love letters in the style of tech bros or explain quantum physics to a 5-year-old. Visual platforms responded to prompts like "a dragon in a business suit, pixel art style" or "a Renaissance portrait of a barista." The results, while novel and amusing, were rarely practical for business.

That has changed. By the end of 2024, GenAI outputs became sharper, more polished and increasingly indistinguishable from human-created work. In 2025, with tools like GPT-4, Midjourney, Runway and Canva AI becoming widely adopted, content creation is no longer the bottleneck it once was. Soon, marketing teams will be able to generate dozens of creative options in minutes. However, this shift introduces a new problem: with so much content, how do we decide what to use, for whom and when?

Most marketers are now using GenAI to create assets. While Salesforce reports that 76% of marketers use AI to generate content, the processes for deploying that content haven't evolved. The typical workflow still involves pasting AI-generated copy into spreadsheets, testing a couple of variants, manually picking a winner and repeating it all. That might work in the short term, but it's not scalable. More importantly, it doesn't improve over time. More content is not the solution unless there's a system to decide which content to use and how.

Imagine an orchestra where every musician trained at Juilliard, but there's no conductor. That's what marketing looks like in a GenAI world without decisioning. There's creativity, but no coordination. Marketers today face a flood of assets, but the bigger challenge is figuring out what to send, to which audience and when. These are not creation problems. These are decisioning problems.

And we're still trying to solve them using tools and mental models built for a world where content is scarce: journey builders, marketing calendars and simple A/B tests. Traditional workflows assume that you'll create a handful of subject lines, define a few segments and test some variations. But GenAI doesn't create one or two options; it creates hundreds. Suddenly, you're staring at thousands of possible combinations across messaging, timing, audience and channels. Marketers can't test every option. They can't manually orchestrate every journey, and they certainly can't rely on batch-and-blast methods anymore. A new approach is needed.

For many organizations, AI decisioning has become a key part of their AI strategy. This new category of technology sits between content creation and content delivery. It enables marketers to deploy AI agents that make real-time decisions about which content to send to which user. These systems use reinforcement learning (the same type of machine learning behind self-driving cars and streaming recommendation engines) to optimize for business outcomes like conversions, retention or lifetime value.

Think of how platforms like Google and Meta Ads operate. You set your goals, upload creative assets and the system optimizes combinations to deliver results. Now imagine that same model applied to email, push, in-app messaging and CRM. That's what AI decisioning aims to achieve, only this time with transparency and control built in.

To adopt AI decisioning effectively, companies need to get the basics right first. That means clarifying goals, improving data access and identifying where manual decisions slow things down. Start small by pinpointing bottlenecks in your workflow, whether that's testing content, segmenting audiences or managing channels.

Silos are a major hurdle. When teams like marketing, data and product work in isolation, decisioning falls flat. Aligning around shared goals, metrics and timelines helps break down these walls and ensures AI systems have the inputs they need to be effective.

The best way to begin is with a focused use case, such as optimizing subject lines or send times. Prove value quickly and then scale. AI decisioning is not about replacing everything at once; it is about creating a system that learns and improves over time.

Used together, these technologies form a closed-loop system. GenAI generates content while AI decisioning systems select the right assets for each user based on performance data. As results come in, those insights feed back into the content generation process, allowing both creation and decisioning to improve continuously. GenAI acts as the input layer, creating at scale. AI decisioning functions as the optimization layer, learning what works and when. Combined, they create a flywheel where content fuels decisions and decisions enhance future content.

But none of this works without human oversight. Marketers still need to be involved. AI systems must be transparent, auditable and accountable. Teams need to know how decisions are made, what experiments are running and have the ability to approve content and manage risks.

In the coming months, content bottlenecks will fade as GenAI becomes even more integrated into daily workflows. But that's only the first step. The true differentiator will be how effectively teams can deploy the content they generate to drive meaningful results. The winners in the next era of marketing won't be the ones who generate the most creative assets. They'll be the ones who build systems that know what to do with them and can adapt in real time.

So keep prompting and creating. But remember: the next meaningful shift in marketing won't just come from creation; it will come from smarter decisioning. Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
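The article describes AI decisioning only at a high level as "reinforcement learning" that picks which content variant to send to which user. A minimal, library-free sketch of one standard bandit technique for exactly this problem is Thompson sampling over Beta posteriors; the variant names and conversion rates below are illustrative assumptions, not anything from the article or from any real product.

```python
import random


class ThompsonSamplingSelector:
    """Pick which content variant to send using Thompson sampling.

    Each variant keeps a Beta(successes + 1, failures + 1) posterior over
    its conversion rate. On each send we draw one sample per variant and
    pick the variant with the highest draw, then update the posterior with
    the observed outcome. Over time, traffic concentrates on the winner
    while weaker variants still get occasional exploratory sends.
    """

    def __init__(self, variant_ids):
        self.stats = {v: {"successes": 0, "failures": 0} for v in variant_ids}

    def choose(self):
        draws = {
            v: random.betavariate(s["successes"] + 1, s["failures"] + 1)
            for v, s in self.stats.items()
        }
        return max(draws, key=draws.get)

    def record(self, variant_id, converted):
        key = "successes" if converted else "failures"
        self.stats[variant_id][key] += 1


# Simulated campaign with hypothetical variants: "B" truly converts best.
random.seed(42)
true_rates = {"A": 0.05, "B": 0.12, "C": 0.07}
selector = ThompsonSamplingSelector(true_rates)

for _ in range(5000):
    v = selector.choose()
    selector.record(v, random.random() < true_rates[v])

# Count how many sends each variant received.
sends = {v: s["successes"] + s["failures"] for v, s in selector.stats.items()}
```

This is the "learns and improves over time" property the author contrasts with manual A/B testing: no winner is ever picked by hand, yet after enough feedback most sends flow to the best-performing variant automatically. Production systems layer context (user features, channel, timing) on top of this core idea.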

Edge AI Applications As The Catalyst For AI PC Market Growth

Forbes

Ajith Sankaran, Executive Vice President, C5i.

Despite all the buzz, the adoption of high-performance AI PCs with powerful neural processing units (NPUs) has been especially sluggish. Since their launch in mid-2024, these devices have captured just 5% of AI PC market sales. This can be attributed to several factors:

• AI PCs typically command a significant price premium without clearly articulated benefits. Many users remain unconvinced that these costs translate to meaningful improvements in computing experiences.
• Compatibility concerns persist, particularly with first-generation advanced RISC machine (ARM)-based systems that may not support legacy software.
• There is a scarcity of software applications that fully harness AI PC capabilities.

According to a 2024 IDC report, the global market for personal computing devices was "set to grow 3.8% in 2024, reaching 403.5 million units." However, this growth is primarily driven by nearly double-digit growth in tablets. According to Jitesh Ubrani of IDC, 'There seems to be a big disconnect between supply and demand as PC and platform makers are gearing up for AI PCs and tablets to be the next big thing, but the lack of clear use cases and a bump in average selling prices has buyers questioning the utility.'

I believe the answer to realizing the potential of AI PCs in enterprise scenarios lies in understanding and utilizing edge AI. To understand why, let's take a closer look at how these systems operate.

Edge AI And Its Relationship With AI PCs

Edge AI represents the convergence of AI and edge computing, enabling AI algorithms to run directly on local devices rather than in remote data centers. This approach processes data where it's generated, eliminating the need to send information to the cloud for analysis and returning results almost instantaneously. AI PCs are well-positioned to serve as powerful edge AI platforms due to their unique hardware architecture. They integrate three processing components:

• A central processing unit (CPU) for general computing tasks.
• A graphics processing unit (GPU) for parallel processing workloads.
• A neural processing unit (NPU) optimized for AI computations.

This triad of capabilities allows AI PCs to handle edge AI applications with efficiency. The performance benefits can be substantial; security company CrowdStrike reported that its software's CPU consumption dropped from 35% to 1% when running on machines equipped with Intel NPUs. Global shipments of AI PCs are projected to reach 114 million units in 2025, accounting for 43% of all PC shipments.

I believe that edge AI incorporating the latest advances in generative AI and agentic AI could provide tangible benefits that justify the premium pricing of AI PCs for consumers and enterprises. As more developers create software that leverages NPUs and other specialized AI hardware, the value proposition should become clearer, driving increased adoption across both consumer and enterprise segments.

Emerging Edge AI Applications Driving AI PC Demand

• Manufacturing Intelligence: Manufacturing environments are proving to be fertile ground for edge AI applications. AI systems running locally on AI PCs can monitor equipment health in real time, detecting anomalies and predicting potential failures before they occur. This can reduce costly downtime. Quality control represents another application. AI-powered cameras connected to edge computing systems can inspect products for defects with precision and consistency.

• Healthcare Innovations: The healthcare sector also stands to benefit from edge AI. Portable diagnostic devices equipped with edge AI can analyze medical images such as X-rays, MRIs and CT scans locally, providing rapid insights without requiring cloud connectivity. This is particularly valuable in remote areas. And wearable health devices using edge AI can analyze biometric data locally, detect anomalies and alert healthcare providers without transmitting sensitive patient information to remote servers.

• Retail Transformation: In retail, edge AI applications are revolutionizing operations and customer experiences. AI-powered cameras and sensors can track inventory levels in real time, optimizing stock replenishment. The same infrastructure can analyze customer behavior patterns, enabling retailers to deliver personalized recommendations and promotions. These capabilities require significant local processing power, which AI PCs can provide to analyze video feeds and sensor data in real time.

• Security And Privacy Protection: Edge AI can deliver faster performance while keeping sensitive data local instead of sending it to cloud services. For example, Bufferzone NoCloud "uses local NPU resources to analyze websites for phishing scams using computer vision and natural language processing." Edge AI applications can also enhance banking security by detecting unusual transactions and immediately alerting users.

Recommendations For Effective AI PC And Edge AI Adoption

1. Develop edge-native AI applications for real-time decision-making. Prioritize building edge-native AI applications that leverage the NPUs in your organization's AI PCs to execute machine learning models locally. For example, manufacturing firms can deploy vision systems on AI PCs to perform real-time quality inspections directly on production lines, reducing defect rates while eliminating cloud dependency.

2. Deploy agentic AI systems for autonomous workflow optimization. Agentic AI excels at autonomously managing complex, multi-step processes. In supply chains, running agentic AI systems on AI PCs can allow you to dynamically reroute shipments based on real-time traffic data processed at the edge, reducing delivery delays. Financial institutions can also combine agentic AI with edge computing to autonomously monitor transactions for fraud patterns, triggering immediate alerts while keeping sensitive financial data localized.

3. Implement privacy-centric AI architectures for regulated industries. Consider adopting hybrid edge-cloud AI architectures to balance computational demands with regulatory compliance. For example, banks can deploy on-premise AI PC clusters to run agentic AI fraud detection systems, ensuring customer transaction data never leaves internal networks.

4. Build scalable edge AI infrastructure with modular hardware. Invest in AI-optimized hardware ecosystems that support both current and emerging workloads. For instance, consider deploying AI PCs with dedicated NPUs for employee productivity tools and pairing them with edge servers containing GPU/TPU arrays for heavy computational tasks.

5. Integrate generative AI with edge computing for adaptive systems. By fusing generative AI with edge computing, you can enable dynamic system adaptation within your company. For example, manufacturers can deploy small language models on AI PCs to generate equipment repair instructions tailored to real-time sensor data, reducing machine downtime.

Conclusion

While initial adoption of AI PCs has been slow due to high costs, compatibility issues and a lack of applications, the emergence of edge AI use cases is beginning to demonstrate the value of local AI processing. As developers increasingly leverage NPUs to build edge-native and agentic AI solutions, I believe the value proposition of AI PCs will become more evident, driving broader adoption across consumer and enterprise markets. Forbes Business Council is the foremost growth and networking organization for business owners and leaders. Do I qualify?
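The on-device equipment monitoring the article describes (detect anomalies locally, alert without a cloud round trip) can be sketched with a simple rolling z-score check. This is a minimal illustration of the pattern, not any vendor's implementation; the window size, threshold, and sensor values are made-up assumptions, and a real deployment would run an NPU-accelerated model instead of this statistic.

```python
from collections import deque
import math


class EdgeAnomalyDetector:
    """Flag sensor readings that deviate sharply from the recent baseline.

    A rolling window of recent readings supplies the mean and standard
    deviation; a reading more than `threshold` standard deviations away
    from the mean is flagged locally, with no round trip to the cloud.
    """

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def check(self, reading):
        anomalous = False
        if len(self.window) >= 10:  # wait for a minimal baseline first
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(reading - mean) / std > self.threshold:
                anomalous = True
        self.window.append(reading)
        return anomalous


detector = EdgeAnomalyDetector(window=50, threshold=3.0)

# Hypothetical stable vibration signal oscillating around 20.0.
normal_readings = [20.0 + 0.1 * ((i % 7) - 3) for i in range(100)]
alerts = [detector.check(r) for r in normal_readings]

# A sudden out-of-range reading, e.g. a developing bearing fault.
spike_flagged = detector.check(35.0)
```

The key property for the edge argument is that `check` touches only local state: latency is a function call, and the raw sensor stream never leaves the machine, which is exactly the privacy and responsiveness benefit claimed for NPU-equipped AI PCs.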
