
3 Strategies For Building An AI-Literate Organization
Findings from recent SAP research indicate that employees with higher AI literacy were far more likely to expect positive outcomes from AI, and far less likely to feel fear, distress, or apprehension.
By Dr. Autumn D. Krauss, Chief Scientist, Market Insights & Customer Engagement, SAP SuccessFactors
The rapidly advancing nature of artificial intelligence presents a challenge for organizations and their workforces that want to embrace it. Everyone knows they need to adopt AI, but with AI-enabled tools and technology changing on a daily basis, it's hard to figure out how to jump in and start making sense of it all. This raises a central question: who is most likely to catch the AI wave by building the right AI knowledge and skills, and how can they best go about gaining them?
To investigate this topic, my team of organizational scientists at SAP first conducted a global study in October 2024 to understand the AI attitudes and behaviors of workers across industries, gathering responses from 4,023 employees and managers. The questions were broad: Had respondents used AI tools at work? How optimistic—or anxious—did they feel about AI's growing role in the workplace? Were they confident in their own ability to work with these tools?
Findings from that study showed that the biggest factor shaping how employees felt about AI at the time—whether they were hopeful, fearful, or somewhere in between—was their level of AI literacy. Did they know how to apply AI to achieve goals? Could they detect when they were interacting with AI? Could they assess the capabilities and limitations of the technology? These are the qualities of AI literacy.
Those with higher literacy were far more likely to expect positive outcomes from AI, and far less likely to feel fear, distress, or apprehension. They were also more likely to express nuanced, mature views on how AI use should (or shouldn't) influence workplace decisions like promotion and compensation.
It was too early to draw a straight line from an employee's AI literacy to business performance, but it made sense that workers most comfortable experimenting with new tools and spotting their practical value would also be the ones to help drive meaningful returns.
How do you build that kind of AI-literate workforce? Our recent follow-up study of 4,030 employees and managers globally makes clear that even though different people require different kinds of support, three core strategies yield the strongest effect: experiential exposure, structured training, and the influence of an AI-literate organizational culture. More on each approach follows.
1. Experiential exposure

The most effective way to build AI literacy is to let people get their hands dirty. For many, comfort with AI is like comfort behind the wheel when learning to drive: manuals and even simulators are no substitute for time on the road.
For organizations, this means giving employees low-stakes ways to experiment with AI. Let them use it to draft e-mails, summarize documents, or mock up project plans. The key is to keep the setting contained—such as internal communications or intramural projects—where mistakes are low-impact, quickly forgiven, and unlikely to reach customers or damage the company's reputation.
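To make that concrete, here is a minimal sketch of what one such low-stakes experiment might look like: a short script that asks a generative AI model to summarize an internal document. It uses the OpenAI Python SDK purely for illustration; the model name and file path are placeholders, and any AI tool the organization has approved could fill the same role.

```python
# Minimal sketch: summarizing an internal document as a low-stakes AI experiment.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the
# environment; the model name and file path below are illustrative placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Read a contained, internal document (nothing customer-facing).
document = Path("meeting_notes.txt").read_text(encoding="utf-8")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model your organization has approved
    messages=[
        {"role": "system", "content": "You summarize internal documents in plain language."},
        {"role": "user", "content": f"Summarize the following notes as five bullet points:\n\n{document}"},
    ],
)

print(response.choices[0].message.content)
```

The point is less the specific tool than the habit: a contained task, a quick result, and an easy way for the employee to compare the AI-assisted output with what they would have produced on their own.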
2. Structured training

While best practices are emerging, it is clear that AI training works best when it's specific to the tools people use, the jobs they hold, and the tasks they perform.
Many employees don't realize that AI is already embedded in their applications—suggesting Outlook replies or auto-summarizing meetings in Zoom and Microsoft Teams. Helping them spot those features—while showing how much faster a task gets done with AI versus without—can build confidence.
At the job level, a good AI training program lifts workers' performance. For some employees, this may eventually involve learning how models are trained, tuned, and maintained. But for many others, practical essentials will suffice, such as how to craft effective prompts, where to find the right data inputs, and how to integrate AI outputs into their work.
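As a purely illustrative sketch of what those practical essentials can look like, the snippet below treats prompt crafting as a repeatable pattern: state the role, the task, the data, and the expected output format. The build_prompt helper and the sample data are hypothetical, not part of any particular product.

```python
# Illustrative sketch of "prompt crafting" as a teachable skill: state the role,
# the task, the input data, and the expected output format explicitly.
# build_prompt is a hypothetical helper, not part of any specific tool.

def build_prompt(role: str, task: str, data: str, output_format: str) -> str:
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        "Use only the data below; say so if it is insufficient.\n"
        f"Data:\n{data}\n"
        f"Respond as: {output_format}"
    )

prompt = build_prompt(
    role="an analyst supporting a retail operations team",
    task="Identify the three stores with the largest week-over-week sales drop.",
    data="store_id,week,sales\nA12,23,18400\nA12,24,15100\nB07,23,22000\nB07,24,21800",
    output_format="a short table followed by one sentence of interpretation",
)
print(prompt)
```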
A strong training program should also help employees develop a feel for which parts of their work still call for a human touch. When training helps an employee work through these specifics, they can more effectively identify the uses of AI that will most benefit their work.
3. An AI-literate culture

Organizational science has long shown the power of company culture to shape employee attitudes and behaviors. Leaders can and should now use those same social dynamics to foster collective AI literacy across their organizations.
When it comes to AI, workers already know it matters and are already thinking seriously about how their jobs will change as a result of it. What they're looking for is help getting started, and AI literacy is the first step.
A version of this story appears on SAP.com
Related Articles

Business Insider
How Digital Realty is upgrading its data centers for AI — and trying to stay green
This article is part of "Build IT: Connectivity," a series about tech powering better business.

Digital Realty, a data center operator, is scaling up its infrastructure to keep up with AI's growth. But the tricky part is to do so in an environmentally friendly way.

The International Energy Agency found that in 2024, data centers accounted for 1.5% of global electricity use. By 2030, that number could nearly double, reaching levels just above Japan's current annual energy consumption. In addition, AI infrastructure is cooled with enormous amounts of water. A report from the University of Tulsa found that a single facility can use up to 5 million gallons per day, enough to supply thousands of homes. With governments and companies pouring billions into AI infrastructure, those resource demands are only expected to grow.

Through close collaboration between its sustainability, engineering, and design teams, Digital Realty, which operates more than 300 data centers worldwide, is working to reduce carbon emissions. That means sourcing more renewable energy, upgrading cooling systems, and rethinking where — and how — new sites are built from the ground up.

To understand how Digital Realty is preparing for that future, Business Insider spoke with Aaron Binkley, vice president of sustainability, and Shea McKeon, global head of design and engineering, about the company's sustainability strategy. In a roundtable conversation, Binkley and McKeon shared how their teams are working together to cut emissions, balance business demands with sustainability goals, and stay ahead of AI trends in the data center world.

The following has been edited for length and clarity.

Business Insider: How do your roles and teams at Digital Realty contribute to the company's overall decarbonization strategy?

Aaron Binkley: My role is global. I oversee sustainability efforts across the company, including work around renewables, decarbonization, development and construction, operations of in-service data centers, and collaboration on green finance, clean energy, energy performance, water, and more. A big part of my job is acting as a convener, bringing people together to ensure we're working collaboratively and surfacing the best ideas.

Shea McKeon: I sit in the design, engineering, and construction department, which oversees all new developments and major renovation projects around the world. We're responsible for integrating sustainability into our new builds and working with operations and energy management to bring existing facilities up to the latest standards.

Digital Realty has ambitious targets to cut direct and indirect emissions per square foot by 60% and supply chain emissions by 24% — each by 2030. How are your teams collaborating to hit those goals?

Binkley: We think about how we build, power, and operate sustainable data centers. That starts with understanding emissions across the data center lifecycle. Design and construction impact embodied carbon, or the total amount of emissions associated with the data center lifecycle, from metal extraction to construction, all the way to equipment disposal. Operationally, it's all about electricity use. So we work closely with our energy procurement and strategy teams to decarbonize our electricity supply.
For Scope 1 emissions, referring to direct emissions from sources owned by our company, we're switching from burning mainly diesel in backup generators to renewable fuels like hydrotreated vegetable oil, a diesel-like fuel which we've deployed across 17% of our operating portfolio. But 98% of our Scope 1 and 2 emissions, which include the indirect emissions from purchased energy, come from electricity, so that's the big nut we need to crack. We prioritize opportunities based on where we can make the biggest impact, designing efficient facilities and powering them with renewables.

McKeon: I'd say on the Scope 3 emissions side, which are indirect emissions from the data center supply chain, that's where my team can really have an impact. We're constantly working with our design and construction partners to make sure we're specifying the right materials to help bring those emissions down. It's always at the forefront of our designs. We also partner with Aaron's team during our annual business reviews with key suppliers. We just wrapped those up recently, and Scope 3 emissions were one of the topics we discussed — how suppliers are performing and what we can do to improve. It's a never-ending, iterative process, but collaboration is key to making progress.

AI models are creating a demand for computing power. How are you balancing this growth with your sustainability targets?

Binkley: We've seen AI coming. It's front of mind now, and a significant portion of our bookings are AI-related. Even as our portfolio grows, we haven't pulled back on any sustainability commitments. We've made strong progress on sourcing renewables and decarbonization. We plan for that, and as AI pushes greater demand, we adjust our plans: rethink sourcing, get more integrated with acquisitions, and get involved earlier in planning and design. We're even part of early utility conversations when acquiring land, asking for clean energy solutions before we've started moving dirt.

We're also using AI internally to improve energy and water efficiency. We developed an in-house program called Apollo AI to optimize building management systems across our portfolio. The platform helps our facility engineers find hidden anomalies like clogged filters and leaky valves and suggests improvements that can help drive energy savings. We also have AI tools focused on water systems, helping us fine-tune cooling performance and water chemistry to reduce waste. We really try to squeeze every last drop of productivity out of the energy and water we consume.

McKeon: We're planning for 100% of our future buildings to have the capability to deploy liquid cooling directly to the chip, where coolant is circulated through metal plates attached to graphics processing units to remove heat. For those that don't, our engineering team is building roadmaps so we're ready if customers want to use energy-intensive technologies like generative AI that require high levels of compute power. Our modular design approach helps us learn and adapt quickly. And with liquid cooling, you don't need as much square footage per megawatt anymore. That's going to change how buildings are designed moving forward.

What's the biggest challenge in aligning technical engineering demands and sustainability goals?

Binkley: Our sustainability standards are part of our building codes, but the maximum amount of emissions that can be reduced in our facilities still varies based on customer usage.
Some customers move into the data centers fully and operate at high intensity; others ramp up slowly. Modularity helps us handle those variations. The speed of growth is also a challenge — we need to stay ahead of customer demand, line up renewables, and anticipate equipment needs that take a long time to procure. We're building physical infrastructure, which takes years. You can't just flip a switch.

McKeon: We're a multi-tenant facility. We lease out space to our customers, so while we control the infrastructure, the customer ultimately controls how they operate within that space. We can design proactively with energy efficiency in mind, and we encourage best practices like airflow containment and optimal temperature settings. But at the end of the day, we don't dictate how customers use their equipment. That creates a bit of a disconnect. Our engineering team can build in sustainability features, but our operations team has to be reactive depending on how each tenant deploys. Some customers come in and run at high utilization, which is great from an efficiency standpoint. Others move in slowly or use a mix of equipment that can affect how well the facility runs. So there's a line between what we can control and what we can influence. Luckily, our operations team is very sophisticated. They use automation, data, and AI to adapt in real time, dialing in temperature and managing airflow, all to run as efficiently as possible.

Looking ahead, how are you evolving your decarbonization strategy over the next few years?

Binkley: We're not pulling back on our commitments. We'll stay the course, and perhaps even get more aggressive. Clean energy is harder to source now, but still available. We've been able to secure renewables that offer real value and reduce costs. We're also going deeper into Scope 3 with our supplier engagement program, working with vendors to reduce the carbon footprint of the materials and products we buy.

McKeon: I'd echo that. Sustainability is embedded in our design process. It's not just a benchmark — it's part of our culture. Our local teams are empowered to innovate project by project, and our global teams constantly share best practices. What works in France might be relevant in Chicago. It's a contagious, exciting environment to be in.


Business Insider
Inside KPMG's $100 million AI investment: How Google Cloud's partnership is fueling the firm's new AI services
KPMG is a professional services company and one of the Big Four accounting firms in the US. It offers audit, tax, and advisory services to organizations in multiple sectors, including healthcare, finance, banking, and more. KPMG has more than 90 offices and 36,000 employees in the US. It also operates in more than 140 countries.

Situation analysis

Steve Chase, vice chair of artificial intelligence and digital innovation at KPMG, said part of the company's business involves helping organizations across industries modernize their operations with technology, including their accounting systems and customer service. Recently, Chase said more clients have sought assistance in incorporating artificial intelligence and cloud services into their digital transformation strategies.

To help, KPMG announced an expansion of its partnership with Google Cloud in November to advance GenAI, data analytics, and cybersecurity for its clients. The expansion includes a $100 million investment in KPMG's Google Cloud practice. Chase said the goal is to tailor AI services to specific customers, business models, and industries so that these organizations can use AI to improve their businesses, such as by speeding up data analysis. The expanded Google Cloud partnership will initially focus on clients in the retail, healthcare, and financial services industries.

Key staff and partners

Chase said KPMG has been using AI for several years and has had a long-standing relationship with Google. In 2024, KPMG created the Google Cloud Center of Excellence to combine Google's AI technologies with its own expertise to help clients use AI to boost their businesses. Its latest partnership expansion involves creating new AI tools. KPMG also works with Microsoft, Amazon Web Services, and other tech companies on other AI-related projects.

AI in Action

KPMG has been using Google Cloud's Vertex AI Search, an AI development platform for building and using GenAI, internally to connect and analyze its vast amount of data. Chase said the company is using this information to develop GenAI agents for clients, such as chatbots to answer questions or tools to gather and analyze data, to address various business challenges and expand capabilities.

For example, Chase said KPMG is using Vertex AI and Gemini, a Google Cloud AI-powered assistant, to help financial services companies automate tasks that have been cumbersome for humans, including fraud detection and loan applications.

Chase added that KPMG also built an AI "store performance analyzer" for a large retailer. The tool allows the company to use automation to speed up and combine information from store locations, such as inventory levels, sales data, and details about the location, to determine how it performs compared to other stores. "It's able to actually do a detailed analysis in a fast way," which used to be completed by a team of people and take longer, Chase said. "Now, the people involved are actually reviewing the results, as opposed to doing all the manual work of pulling all the data together."

For healthcare clients, KPMG is using Google Cloud's Healthcare API to develop AI tools that help doctors improve disease detection, treatment, and overall patient care.

Did it work, and how did leaders know?

Chase said that KPMG's partnership with Google Cloud could drive $1 billion in incremental growth for the firm. "We've been super pleased with how it's going," he said.
While he said the company couldn't disclose specifics on how it'll reach this figure, he said it will be a multi-year initiative that involves adding new clients and expanding the AI services it offers to existing companies. KPMG continues to roll out new AI initiatives. In April, the company announced another expansion of its collaboration with Google Cloud on AI tools for the legal and banking industries. KPMG also announced that it's joining the Google Cloud Security Partner Program to enhance cybersecurity for its clients.
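For readers curious what building on these services looks like at the simplest level, here is a minimal, hypothetical sketch of calling a Gemini model through the Vertex AI Python SDK. The project ID, region, model name, and prompt are placeholders and do not reflect KPMG's actual implementation.

```python
# Purely illustrative sketch of calling a Gemini model through the Vertex AI
# Python SDK (google-cloud-aiplatform). Project ID, region, model name, and
# prompt are placeholders, not KPMG's actual setup.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project and region

model = GenerativeModel("gemini-1.5-flash")  # placeholder model name
response = model.generate_content(
    "Summarize week-over-week inventory and sales changes for store A12 "
    "and flag anything unusual."
)
print(response.text)
```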


CBS News
Reddit sues Anthropic over alleged "scraping" of user comments to train Claude
Social media platform Reddit sued the artificial intelligence company Anthropic on Wednesday, alleging that it is illegally "scraping" the comments of millions of Reddit users to train its chatbot Claude.

Reddit claims that Anthropic has used automated bots to access Reddit's content despite being asked not to do so, and "intentionally trained on the personal data of Reddit users without ever requesting their consent." Anthropic said in a statement that it disagreed with Reddit's claims "and will defend ourselves vigorously."

Reddit filed the lawsuit Wednesday in California Superior Court in San Francisco, where both companies are based. "AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data," said Ben Lee, Reddit's chief legal officer, in a statement Wednesday.

Reddit licensing agreements

Reddit has previously entered licensing agreements with Google, OpenAI and other companies that are paying to be able to train their AI systems on the public commentary of Reddit's more than 100 million daily users. Those agreements "enable us to enforce meaningful protections for our users, including the right to delete your content, user privacy protections, and preventing users from being spammed using this content," Lee said.

The licensing deals also helped the 20-year-old online platform raise money ahead of its Wall Street debut as a publicly traded company last year. Among those who stood to benefit was OpenAI CEO Sam Altman, who accumulated a stake as an early Reddit investor that made him one of the company's biggest shareholders.

Claude and Alexa

Anthropic was formed by former OpenAI executives in 2021, and its flagship Claude chatbot remains a key competitor to OpenAI's ChatGPT. While OpenAI has close ties to Microsoft, Anthropic's primary commercial partner is Amazon, which is using Claude to improve its widely used Alexa voice assistant.

Much like other AI companies, Anthropic has relied heavily on websites such as Wikipedia and Reddit that are deep troves of written materials that can help teach an AI assistant the patterns of human language. In a 2021 paper co-authored by Anthropic CEO Dario Amodei — cited in the lawsuit — researchers at the company identified the subreddits, or subject-matter forums, that contained the highest-quality AI training data, such as those focused on gardening, history, relationship advice or thoughts people have in the shower.

Anthropic in 2023 argued in a letter to the U.S. Copyright Office that the "way Claude was trained qualifies as a quintessentially lawful use of materials," by making copies of information to perform a statistical analysis of a large body of data. It is already battling a lawsuit from major music publishers alleging that Claude regurgitates the lyrics of copyrighted songs.

But Reddit's lawsuit is different from others brought against AI companies because it doesn't allege copyright infringement. Instead, it focuses on the alleged breach of Reddit's terms of use and the unfair competition it says was created.

——

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP's text archives.