
From Plow to Prompt: What the Agricultural Revolution Can Teach Boards About the AI Age
The headlines are relentless: mass layoffs, hiring freezes, and soaring anxiety in the face of AI. While the fear is real, history offers a reassuring truth: we've steered this kind of transformation before. Consider the Agricultural Revolution—a seismic shift that upended human labor, productivity, and governance. Today, as generative AI upends knowledge work, the lessons from our first great disruption offer a governance roadmap for corporate boards navigating this new frontier.
History's Playbook
Roughly 12,000 years ago, in what is now the Fertile Crescent, early humans transitioned from nomadic hunter-gatherers to settled agriculturalists. Far from an overnight leap, this transition was incremental and uneven. Archaeological digs at Abu Hureyra (modern-day Syria) reveal that societies layered new practices onto old ones, blending traditional foraging with early planting techniques (Moore et al., 2000).
Why does this matter now? Because successful adaptation required three pillars: strategy, policy, and programmatic infrastructure—the same pillars boards must now reinforce to govern through AI disruption.
Govern or Fall
Generative AI is not just another tool. It is a platform shift, as highlighted in "GenAI Urges A Shift From Digital Transformation To Business Transformation." Like agriculture, electricity, or the internet, it changes the fundamental contract between labor, value creation, and growth. Boards that delegate AI governance to the IT department or treat it as a cost-saver are repeating the errors of past disruption deniers.
The stakes are higher than quarterly returns. PwC's 2024 CEO Survey, "Reinvention on the Edge of Tomorrow," found that 34% of CEOs expect litigation stemming from AI bias or misuse within three years (PwC, 2024).
Boards must ask: Do we have the right policies, metrics, and ethics guardrails in place? If not, governance failure is not a risk; it is a certainty. A useful reference for framing questions on GenAI and human capital governance is The Conference Board's "Generative AI: Questions the CHRO Should Ask."
Strategy Remade
The shift from hunting to farming required new strategic assumptions: predictable yields, land use, food surplus. Similarly, AI demands a rethink of what value creation means.
According to McKinsey, generative AI could add $2.6 trillion to $4.4 trillion in annual economic value—but only if companies reconfigure workflows and upskill labor accordingly (McKinsey, 2023).
Boards must demand that AI strategy aligns with core value drivers. Are investments being made in R&D, AI governance, and human capital analytics, or is AI framed merely as a headcount-reduction tool? Firms that link AI to business model innovation—not just automation—are those that will compound returns.
Policy Infrastructure
The Code of Hammurabi, written around 1750 BCE, introduced laws to manage the complexity of agrarian society: contracts, labor terms, ownership rules. Today's boards must do the same for AI.
Key policies require immediate oversight.
These are not operational details—they are boardroom imperatives. Boards must ensure these policies are codified, aligned with risk appetite, and monitored through robust reporting channels. Frameworks like ISO 42001 for AI management systems and ISO 30414 for human capital reporting offer valuable scaffolding.
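To make "codified and monitored" concrete, here is a minimal, hypothetical sketch of how an AI policy register might be tracked and escalated through a board reporting channel. The policy names, owners, and review cadence below are illustrative assumptions, not requirements drawn from ISO 42001 or ISO 30414.

```python
# Hypothetical sketch: a board-level register of AI policies, with a simple
# check that flags anything uncodified or overdue for review.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AIPolicy:
    name: str
    owner: str                  # accountable executive (illustrative)
    codified: bool              # formally approved and documented
    last_review: date
    review_cycle_days: int = 180

    def needs_attention(self, today: date) -> bool:
        overdue = today - self.last_review > timedelta(days=self.review_cycle_days)
        return (not self.codified) or overdue

def board_report(policies: list[AIPolicy], today: date) -> list[str]:
    """Return the policies a board reporting channel should escalate."""
    return [p.name for p in policies if p.needs_attention(today)]

if __name__ == "__main__":
    register = [
        AIPolicy("Acceptable AI use", "CIO", True, date(2025, 1, 15)),
        AIPolicy("Model bias and fairness review", "Chief Risk Officer", False, date(2024, 11, 1)),
        AIPolicy("Workforce impact and reskilling", "CHRO", True, date(2024, 9, 30)),
    ]
    print(board_report(register, date(2025, 7, 1)))
    # -> ['Model bias and fairness review', 'Workforce impact and reskilling']
```

The point of even a toy register like this is that the reporting channel, not the model team, decides what reaches the board.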
Programs for People + AI
Just as ancient societies created apprenticeships, seasonal calendars, and knowledge transfer methods for farming, today's organizations must develop AI literacy programs that protect and extend human capability.
Start with job architecture. Roles must evolve to include prompt engineering, model evaluation, and ethical oversight. Then build scalable programs around these new capabilities: AI literacy, reskilling, and internal mobility.
High-performing companies already lead here. For example, AT&T's nanodegree program with Udacity reduced reskilling time by 35% and increased internal mobility. In "AI Is Reshaping Work Faster Than Companies Can Upskill," Rashidi emphasizes that while technology is accelerating, the gap in workforce readiness is widening, making structured, scalable learning programs a model for future-fit talent development.
Disclosure as Governance
Boards can no longer rely on lagging indicators. Investors, regulators, and employees want forward-looking metrics that link AI integration to strategy and human capital performance.
Use ISO 30414's human capital metrics (e.g., productivity, engagement, internal mobility) alongside the ESRS S1 and S2 standards to create transparent, audit-ready disclosures. Track workforce adaptation rates, not just AI adoption rates; if you are cutting talent faster than you are upskilling it, expect governance questions at your next annual meeting. Boards, CEOs, CFOs, and CHROs also need to reframe how human capital fits into the business model: it is not simply an expense but, as the SEC has recognized, an investment in an intangible asset that drives economic value creation. Once that reframing has occurred, investments in human capital can be evaluated the same way as capital investments.
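As a concrete illustration of "adaptation rates, not just adoption rates," the following sketch computes both from a hypothetical employee roster. The field names and the definition of "adaptation" are assumptions made for illustration, not metric definitions taken from ISO 30414 or ESRS.

```python
# Hypothetical sketch: adoption measures who is using AI tools;
# adaptation measures who has actually been reskilled or redeployed.
from dataclasses import dataclass

@dataclass
class Employee:
    uses_ai_weekly: bool        # has adopted AI tools in their workflow
    completed_reskilling: bool  # finished an AI-related upskilling program
    moved_internally: bool      # changed roles internally in the period

def adoption_rate(staff: list[Employee]) -> float:
    return sum(e.uses_ai_weekly for e in staff) / len(staff)

def adaptation_rate(staff: list[Employee]) -> float:
    # Adaptation = reskilled or redeployed, not merely handed a tool.
    return sum(e.completed_reskilling or e.moved_internally for e in staff) / len(staff)

if __name__ == "__main__":
    staff = [
        Employee(True, True, False),
        Employee(True, False, False),
        Employee(False, False, True),
        Employee(True, False, False),
    ]
    print(f"AI adoption rate:     {adoption_rate(staff):.0%}")   # 75%
    print(f"Workforce adaptation: {adaptation_rate(staff):.0%}") # 50%
```

A persistent gap between the two numbers is the kind of leading indicator that belongs in a board-level disclosure, not buried in an HR dashboard.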
The Human Constant
Technological revolutions do not erase the need for human judgment. In ancient Mesopotamia, irrigation required engineers, not just water. Today, AI requires ethical stewards, not just algorithms.
Boards that lead this transition with strategic foresight, policy rigor, and talent investment will not only mitigate risk—they'll accelerate competitive advantage.
Because it's not the strongest that survive disruptions. It's those who govern the shift best.

Postscript: This article builds on ideas originally developed in collaboration with Stela Lupushor. I'm grateful for her partnership and insight in shaping our shared perspective on the history of work, technology, and human capital.