07-07-2025
The Key To AI Adoption Is Experimentation And Implementation
Most leaders now understand that effective AI adoption goes beyond basic literacy and tool deployment. The real challenge isn't access but application. Organizations need to be comfortable with learning loops that don't end in pilots: experimentation is the learning, and implementation becomes everyone's job.
Embracing Experimentation with 'On the Job' Learning
AI adoption can't happen effectively without 'on the job' experimentation coupled with 'on the job' learning. In an interview with me, Molly Nagler, Global Head of Learning at Moderna, shared that experimentation isn't a result of learning; it is the learning. Moderna gives employees the autonomy to reap AI's benefits and empowers them to share their own learnings as they explore the breadth of its potential. The company has set up communities of AI champions in every business line and geography who know where and how to deploy AI for concrete outcomes. These champions leverage the behavioral science concept of "social proof" by showcasing real employees using the custom GPTs they built. 'People get inspired by seeing their peers building and using tools,' Nagler said.
Moderna's AI Academy is a central learning hub for all AI skill-building. Nagler said, 'We long ago left "AI appreciation" courses in the dust, and we now use our AI Academy to quickly move people up the AI value chain. AI Academy has grown into a comprehensive suite of enablement experiences, focused on more on-the-job learning. We offer different learning tracks, ranging from a quick introduction to ChatGPT to a class on developing complex AI use cases.' Moderna's strategy is to let learners choose a track based on their available time, learning goals, and level of expertise. Employees get their hands dirty in the intro course by learning how to prompt, then move to more advanced courses on building custom GPTs and agents.
'People can't imagine what new tech and its outcomes are like if they don't use it,' says Charlene Li, author of Winning with AI, underscoring that there is no substitute for practical experimentation. Conor Grennan, Chief AI Architect at NYU Stern, also cautioned in a recent Work Lab podcast by Microsoft that simply giving employees access to AI tools is not enough. 'That's sort of like thinking that if we put a treadmill in every home, we're going to cure heart disease,' he said. 'We won't, because the problem is not learning how to use the treadmill. The problem is changing our behavior by actively incorporating AI into daily practices.'
While experimentation is encouraged at most companies, it's also important to reinforce the intended outcome of these experiments: learning. Companies want increased productivity and effectiveness, but most of what these experiments yield are lessons about what is and isn't working with AI as a new team member. Deloitte's research validated this sentiment: 68% of surveyed organizations had moved 30% or fewer of their generative AI experiments into full production, indicating that most GenAI initiatives were still in pilot or proof-of-concept stages. Even so, 67% of organizations were increasing GenAI investments due to early signs of value and what employees were learning throughout the process.
Experimentation Needs Governance
In 2009, Netflix's culture deck, titled "Freedom and Responsibility," emphasized individual freedom paired with collective accountability. Netflix employees understood this as an opportunity to make the right decisions for the business, not as license to do whatever they wanted. The deck codified the company's culture for the first time, clarifying what was expected of employees. That same polarity principle holds today with AI experimentation.
Nagler emphasizes that safe experimentation doesn't mean letting people loose without guardrails. At Moderna, every new user starts with a foundational course that covers not just how to use ChatGPT, but when and why, walking through governance policies and common concerns. That foundation is reinforced by a company-wide AI Code of Conduct rooted in integrity, quality, and respect, ensuring that the freedom to experiment comes with a shared sense of responsibility.
AI governance could mean the difference between human replacement and AI augmentation. In a recent New York Times article, Tim Wu emphasizes the need to use AI in ways that make us better at our jobs. Doing so, Wu notes, is not just about preserving jobs; it's about 'keeping human interests central to our future.'
Ethan Mollick, Wharton professor and author of Co-Intelligence, advises leaders to 'use AI for everything you legally and ethically can, from drafting memos to coding, because neither you nor even the AI's creators fully know what it's capable of in your industry.'
In their report on strategic governance, Deloitte emphasized that centralized governance is one of the most consistent factors separating companies stuck in pilot mode from those scaling AI with confidence. Nagler agrees: when done right, governance isn't a blocker, it's an enabler.
Implementation: Turning Experiments into New Ways of Working
AI adoption depends on experimentation, but it also requires helping employees understand how those experiments translate into meaningful learning and application.
'We have a two-pronged approach,' Nagler said. 'First, we teach AI skills through our AI Academy. Second, we use AI tools to build and deliver learning faster, better, and cheaper.' The team uses Synthesia to create training videos in minutes, Arist for microlearning in MS Teams, and ChatGPT to design outlines and course materials. A well-crafted prompt, such as asking ChatGPT to act like an L&D strategist, turns hours of work into a quick first draft. Here is an example of a strong prompt they have used:
"Act as an L&D strategist at a fast-growing biotech company. I want you to draft a 1-page slide outline for a session introducing managers to AI tools. Keep it engaging, beginner-friendly, and avoid technical jargon. The audience is time-constrained, so the goal is to spark curiosity and give them 2–3 clear use cases. Use a concise, clear tone."
BCG's AI at Work 2025 report noted that organizational support makes a huge difference in how employees embrace AI. Only 15% of frontline employees feel positive about GenAI where leadership support is weak, versus 55% in organizations with strong leadership endorsement of AI use. Clear executive encouragement and modeling of AI-driven workflows can significantly boost morale and openness to AI.
Moderna leads with this strategy. Problem-solving at Moderna doesn't happen around AI. It happens with it. Monthly manager webinars feature 'AI Moments,' where peers share how they're using AI to tackle real challenges.
'We're not just teaching AI,' Nagler says. 'We're using it to accelerate how we learn and work.' But successful adoption also requires something deeply human: sensemaking. 'In a time of change, leaders must help employees integrate different pieces of information and separate signal from noise,' she adds.
In the race to adopt AI, the organizations that win won't be the fastest but the most adaptive. They'll treat experimentation not as a phase but as a practice. They'll invest in tools and systems that solve real problems. And they'll understand that implementation isn't the goal; it's a new way of working.