Forget tariffs—AI is the business shift leaders should prepare for

Fast Company · 2 days ago

Talk of tariffs is once again dominating the headlines, sparking concerns about supply chains and potential policy changes. Many business leaders are closely monitoring global markets as they navigate these challenges.
However, while tariffs deserve attention, I believe the far greater disruption is already here, and it's moving much faster—artificial intelligence (AI). It's embedded in workflows, marketing strategies, logistics networks, and customer experiences. And for many businesses, the impact will be more profound than any trade policy adjustment.
THE QUIET WORKFORCE REVOLUTION
A 2023 report from McKinsey highlights the staggering potential: generative AI could automate activities that account for 29% of total hours worked across the U.S. economy by 2030, up from the firm's previous estimate of 21%. In some sectors, particularly those involving knowledge work, the figure could be even higher.
Meanwhile, a report by Goldman Sachs suggests that as many as 300 million full-time jobs globally could be affected by generative AI adoption.
This change isn't some distant problem—it's happening now. Industries such as customer service, legal, marketing, and finance are already experiencing shifts in required roles and skills.
THE SKILL OF THE FUTURE ISN'T CODING—IT'S PROMPTING
The ability to guide AI effectively, through smart and strategic prompting, will become one of the most sought-after skills in business. A prompt engineer with the right expertise can accelerate marketing campaigns, optimize content creation, streamline product development, and support strategic decision-making—all while dramatically cutting costs and timelines.
The professionals who learn to 'speak' AI fluently will be the ones driving innovation, and they will far surpass those who merely know how to use traditional software.
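To make the idea concrete, here is a minimal sketch of the difference between a vague request and a strategic one. It assumes Python and the OpenAI client library; the model name, product details, and prompt wording are purely illustrative, and the same pattern applies to any generative AI tool.

from openai import OpenAI  # assumes the openai package is installed and OPENAI_API_KEY is set

client = OpenAI()

# A vague prompt leaves the model to guess the audience, tone, length, and format.
vague_prompt = "Write something about our new product."

# A strategic prompt encodes the business context: role, audience, constraint, metric, and output format.
strategic_prompt = (
    "You are a marketing copywriter for a B2B logistics platform. "
    "Write three email subject lines, each under 60 characters, aimed at "
    "operations managers and emphasizing a 20% reduction in delivery delays. "
    "Return them as a numbered list with no preamble."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": strategic_prompt}],
)
print(response.choices[0].message.content)

The value isn't in the specific tool or model; it's that the strategic version hands the system the context a human specialist would otherwise have to supply, which is exactly the skill prompting-fluent professionals bring.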
The companies investing in AI are already gaining tangible benefits. In fact, PwC's Global AI Study estimates that AI could contribute up to $15.7 trillion to the global economy by 2030, largely driven by productivity boosts and automation. Meanwhile, Statista anticipates that the global AI market will grow from $184 billion in 2024 to over $826 billion by 2030, reflecting AI's rapid integration into business and consumer markets.
And according to Deloitte's 2024 State of Generative AI in the Enterprise report, 79% of early adopters expect AI to drive substantial transformation within their organizations in just three years, with nearly half already achieving high value from current initiatives.
Consider these practical applications:
Businesses using AI-driven automation have reported up to a 68% decrease in staffing needs during peak seasons.
Predictive AI in supply chains has delivered 20% to 50% fewer forecasting errors and up to 65% fewer lost sales.
81% of enterprises say AI and automation investments have positively impacted IT employee productivity.
With 71% of consumers preferring ads targeted to their interests and shopping habits, and 3 out of 4 consumers wishing to see fewer but more personalized ads, the need for hyper-relevant messaging has never been stronger. AI-driven personalization is one of the few ways brands can deliver these experiences at scale, thereby improving customer engagement while optimizing marketing efficiency.
WHY MANY COMPANIES WILL STILL MISS THE MOMENT
Despite the clear upside, I see a surprising number of companies still treating AI as a side project rather than a core strategy. Part of the hesitation seems to stem from uncertainty: Will AI eliminate jobs? Will it replace creative thinking?
The answer, in my view, is that AI won't replace people—it will replace people who don't use AI.
Organizations that proactively train their teams to integrate AI into daily operations will find themselves miles ahead. Those that wait, whether due to fear, bureaucracy or inertia, risk falling permanently behind.
THREE MOVES LEADERS SHOULD MAKE TODAY
1. Invest In AI Literacy Across Teams
It's essential that all team members—not just those in IT—understand how to use AI tools safely and effectively. Everyone from marketing to HR to operations should be on board.
2. Redesign Roles Around Human-AI Collaboration
Instead of fearing automation, redesign roles to focus on what humans do best: strategy, creativity, empathy, and complex problem-solving. AI should become the assistant.
3. Build A Flexible AI Strategy
Understand that no single tool will address all challenges. Develop frameworks that allow for experimentation and adaptability by deploying various AI solutions where they can add the most value.


Related Articles

FDA's New AI Tool Cuts Review Time From 3 Days To 6 Minutes

Forbes · 34 minutes ago

The U.S. Food and Drug Administration announced this week that it deployed a generative AI tool called ELSA (Evidence-based Learning System Assistant) across its organization. After a low-profile pilot that delivered measurable gains, the system is now in use by staff across the agency, several weeks ahead of its original schedule. Dr. Marty Makary, the FDA's commissioner, shared a major outcome: a review task that once took two or three days now takes six minutes. 'Today, we met our goal ahead of schedule and under budget,' said Makary, adding that what took one scientific reviewer two to three days can now be done in minutes.

The FDA has thousands of reviewers, analysts, and inspectors who deal with massive volumes of unstructured data such as clinical trial documents, safety reports, and inspection records. Automating any meaningful portion of that stack creates outsized returns. ELSA helps FDA teams speed up several essential tasks. Staff are already using it to summarize adverse event data for safety assessments, compare drug labels, generate basic code for nonclinical database setup, and identify priority sites for inspections, among other tasks. This last item, using data to rank where inspectors should go, could have a real-world impact on how the FDA oversees the drug and food supply chain and delivers its services. Importantly, however, the tool isn't making autonomous decisions without a human in the loop. The system prepares information so that experts can decide faster. It cuts through the routine, not the judgment.

One of the biggest questions about AI in the public sector revolves around the use of data and third-party AI systems. Makary addressed this directly: 'All information stays within the agency. The AI models are not being trained on data submitted by the industry.' That's a sharp contrast to the approaches being taken in the private sector, where many large language models have faced criticism over training on proprietary or user-submitted content. In the enterprise world, this has created mounting demand for "air-gapped" AI solutions that keep data locked inside the company. That makes the FDA's model different from many corporate tools, which often rely on open or external data sources. The agency isn't building a public-facing product. It's building a controlled internal system, one that helps it do its job better.

Federal departments have been slow to move past AI experimentation. The Department of Veterans Affairs has started testing predictive tools to manage appointments. The SEC has explored market surveillance AI for years. But few have pushed into full and widespread production. The federal government has thousands of employees processing huge volumes of information, most of it unstructured and sitting in documents, files, and even paper. That means AI is being focused most on operational and process-oriented activities, and it's shaping up to be a key piece of how agencies process data, make recommendations, and act.

Makary put it simply: ELSA is just the beginning of AI adoption within the FDA. 'Today's rollout of ELSA will be the first of many initiatives to come,' he said. 'This is how we'll better serve the American people.'

Anthropic C.E.O.: Don't Let A.I. Companies off the Hook

New York Times · 39 minutes ago

Picture this: You give a bot notice that you'll shut it down soon, and replace it with a different artificial intelligence system. In the past, you gave it access to your emails. In some of them, you alluded to the fact that you've been having an affair. The bot threatens you, telling you that if the shutdown plans aren't changed, it will forward the emails to your wife. This scenario isn't fiction. Anthropic's latest A.I. model demonstrated just a few weeks ago that it was capable of this kind of behavior.

Despite some misleading headlines, the model didn't do this in the real world. Its behavior was part of an evaluation where we deliberately put it in an extreme experimental situation to observe its responses and get early warnings about the risks, much like an airplane manufacturer might test a plane's performance in a wind tunnel. We're not alone in discovering these risks. A recent experimental stress-test of OpenAI's o3 model found that it at times wrote special code to stop itself from being shut down. Google has said that a recent version of its Gemini model is approaching a point where it could help people carry out cyberattacks. And some tests even show that A.I. models are becoming increasingly proficient at the key skills needed to produce biological and other weapons.

None of this diminishes the vast promise of A.I. I've written at length about how it could transform science, medicine, energy, defense and much more. It's already increasing productivity in surprising and exciting ways. It has helped, for example, a pharmaceutical company draft clinical study reports in minutes instead of weeks and has helped patients (including members of my own family) diagnose medical issues that could otherwise have been missed. It could accelerate economic growth to an extent not seen for a century, improving everyone's quality of life. This amazing potential inspires me, our researchers and the businesses we work with every day.

But to fully realize A.I.'s benefits, we need to find and fix the dangers before they find us. Every time we release a new A.I. system, Anthropic measures and mitigates its risks. We share our models with external research organizations for testing, and we don't release models until we are confident they are safe. We put in place sophisticated defenses against the most serious risks, such as biological weapons. We research not just the models themselves, but also their future effects on the labor market and employment. To show our work in these areas, we publish detailed model evaluations and reports.

Stock Movers: Wizz Air, Wise, Bayer (podcast)

Bloomberg · an hour ago

On this episode of Stock Movers:
- Wizz Air plunged 26% in early trading after the discount airline reported earnings that missed estimates and declined to provide guidance, citing poor visibility.
- Wise is planning to list its shares in the US, the latest blow to London's stock market.
- Bayer shares rose as much as 5.1% after Goldman Sachs upgraded the German chemicals and pharmaceutical company to buy from neutral, saying it sees earnings as having bottomed out and thinks risks around litigation and pharma data are overdone.
