
5 Metrics Every Business Should Track to Maximise AI Investments
As the European AI landscape evolves, so too must the standards used to measure success.
The Office for National Statistics (ONS) updates its shopping basket every year to reflect how consumers spend, adding new items like VR headsets or yoga mats as habits evolve. Businesses need to do the same with performance metrics. Artificial intelligence is now a central force in driving growth, yet many companies still measure success using outdated KPIs.
With 42% of European businesses now regularly using artificial intelligence (AI) — a 27% increase in just one year — the urgency is clear: if you don't measure what matters, you can't manage it. To truly maximise AI investments, C-suite leaders must update their own shopping baskets and rethink the benchmarks used to judge value.
Here are five metrics that every business should be tracking to ensure AI success.
1. Data quality
Even the most advanced AI models produce untrustworthy results if they're trained on inaccurate or irrelevant information. At best, this shortcoming is a temporary inconvenience that drains money and time. At worst, entrusting unsatisfactory data to AI systems leads to costly mistakes in user-level applications — all of which can damage an organisation's reputation and profit.
With the success of AI hinging on high-quality data, it's important to perform regular data audits focused on improving accuracy. Routine reviews like this are a way to patrol data pipelines, checking that they're free of inconsistencies that could otherwise undermine AI outputs.
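As a minimal sketch of what such an audit might look like in practice, the snippet below computes a few basic quality indicators for a tabular dataset. The file name and the choice of indicators are illustrative assumptions, not a prescribed standard.

```python
import pandas as pd

def audit_data_quality(df: pd.DataFrame) -> dict:
    """Return simple quality indicators for a dataset feeding an AI system."""
    return {
        "row_count": len(df),
        # Share of rows with at least one missing field
        "null_rate": float(df.isna().any(axis=1).mean()),
        # Share of exact duplicate rows
        "duplicate_rate": float(df.duplicated().mean()),
    }

# Hypothetical pipeline extract; substitute your own data source.
df = pd.read_csv("customer_records.csv")
for metric, value in audit_data_quality(df).items():
    print(f"{metric}: {value}")
```

Tracked over successive audits, indicators like these turn "data quality" from a vague aspiration into a trend line leaders can act on.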
2. Data coverage
Clean data is one priority; complete data is another. AI models without access to every relevant dataset are more vulnerable to blind spots, limiting their ability to detect trends and identify key opportunities.
For instance, insurers that automate their risk assessment processes with AI typically ingest data from operational logs, market patterns and even independent sources like weather forecasts. Accidentally neglecting just one of these could result in the misinterpretation of costly payout claims.
To counter similar risks, conduct regular assessments of your data landscape to uncover overlooked data points. Eliminating visibility gaps allows businesses to paint a full picture of their digital environment, ensuring all data channels are readily available for AI usage.
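One lightweight way to make such an assessment concrete is to compare the data sources a model actually ingests against the sources the business expects it to see. The sketch below assumes an insurer's source list along the lines of the example above; the names are invented for illustration.

```python
# Expected vs. actual data channels feeding the model (hypothetical names).
expected_sources = {"operational_logs", "market_patterns", "weather_feeds", "claims_history"}
ingested_sources = {"operational_logs", "market_patterns"}  # e.g. read from a pipeline registry

missing = expected_sources - ingested_sources
if missing:
    print(f"Coverage gap: model is blind to {sorted(missing)}")
else:
    print("All expected data channels are feeding the model.")
```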
3. Operational efficiency gains
The clearest way to measure the success of a new initiative is to see how much time or money it saves compared to the previous approach. Put simply: a factory that installs a faster conveyor belt should see an increase in productivity. AI is no exception to that logic.
From accelerating loan approvals to automating data entry, the long-term objective of AI in any industry is to reduce turnaround times and cut costs. Failure to gauge operational impact makes it difficult to justify ongoing investment.
As such, it's sensible to measure process durations before and after AI integration — a benchmarking approach DHL used to confirm that its AI-powered robots had delivered a 40% increase in sorting capacity, quantifying the investment's contribution to business KPIs.
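A before-and-after comparison like this can be as simple as logging process durations and comparing medians. The figures below are invented for illustration; in practice they would come from workflow or ticketing logs.

```python
from statistics import median

# Turnaround times in minutes (sample figures, not real data).
before = [42, 38, 51, 45, 40]   # before AI integration
after = [26, 24, 31, 27, 25]    # after AI integration

gain = (median(before) - median(after)) / median(before)
print(f"Median turnaround improved by {gain:.0%}")
```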
4. Adoption rate across teams
Just because a solution goes live doesn't mean adoption is guaranteed. True value comes when AI is embedded into workflows across the whole company — not just the IT department.
Some teams will immediately embrace the AI tools presented to them, whereas others need more support. To assess where training or change management might be necessary, it's helpful to track departmental usage data and run regular employee feedback surveys.
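As a rough illustration, departmental usage data can be reduced to a simple adoption rate: the share of each team's staff who have used the tool recently. The log format and headcounts below are hypothetical.

```python
from collections import defaultdict

# Hypothetical org data and a 30-day usage log of (department, user) events.
headcount = {"sales": 40, "finance": 25, "it": 15}
usage_log = [("sales", "alice"), ("sales", "bob"), ("it", "carol"), ("it", "dan")]

# Collect the distinct active users per department.
active = defaultdict(set)
for dept, user in usage_log:
    active[dept].add(user)

for dept, size in headcount.items():
    rate = len(active[dept]) / size
    print(f"{dept}: {rate:.0%} adoption")  # low rates flag teams that may need training
```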
High-performing organisations already take this approach: they are more likely to bring employees with them on their AI journey by providing extensive AI training. In this context, understanding digital behaviour is the starting point for extracting more engagement from AI.
5. Return on investment (ROI)
Naturally, business leaders need to understand the financial return they're getting from their investments. However, the ROI generated from AI initiatives is often complex, involving both tangible and intangible benefits.
Take the Berlin-based online retailer Zalando, which recently shared that it uses generative AI to produce digital imagery at a rapid rate. Not only has that directly reduced costs by 90%, but the faster turnaround in editorial campaigns also indirectly boosted the company's competitiveness in the fast fashion market.
Every possible performance metric must be considered when curating a digital strategy. That's why it's important to develop a well-rounded ROI framework for AI — factoring in both the direct and indirect consequences of any planned change.
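One possible shape for such a framework is sketched below: direct savings are counted in full, while indirect benefits are discounted by a confidence factor to reflect their uncertainty. The figures and the 0.5 default weighting are illustrative assumptions, not a recommended model.

```python
def ai_roi(investment: float, direct_savings: float,
           indirect_benefit: float, confidence: float = 0.5) -> float:
    """ROI as a fraction; indirect benefits are discounted by a confidence factor."""
    total_return = direct_savings + indirect_benefit * confidence
    return (total_return - investment) / investment

# e.g. EUR 200k invested, EUR 180k in measured savings, plus an estimated
# EUR 120k in competitiveness gains that we only half-trust.
print(f"ROI: {ai_roi(200_000, 180_000, 120_000):.0%}")  # -> ROI: 20%
```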
Measure what matters, scale what works
AI is already demonstrating its ability to reshape organisations, but the reality is that many still struggle to prove its concrete value. Without establishing the right criteria for success, businesses will lack accountability and struggle to align tech performance with financial gains. To maximise ROI on AI, you must clarify the standards that you wish your digital growth to be founded on. This will unlock the insights needed to safely course-correct, scale success, and build long-term trust in your AI strategy.
As the AI landscape evolves, so too must the standards used to measure success. Just like the ONS shopping basket reflects changing habits, businesses must ensure performance metrics reflect the realities of AI-driven operations. By focusing on data quality, coverage, efficiency, adoption, and ROI, leaders can ensure AI investments aren't just tracked but transformed into long-term value.
Related Articles

Business Insider
How Digital Realty is upgrading its data centers for AI — and trying to stay green
This article is part of "Build IT: Connectivity," a series about tech powering better business.

Digital Realty, a data center operator, is scaling up its infrastructure to keep up with AI's growth. But the tricky part is to do so in an environmentally friendly way.

The International Energy Agency found that in 2024, data centers accounted for 1.5% of global electricity use. By 2030, that number could nearly double, reaching levels just above Japan's current annual energy consumption. In addition, AI infrastructure is cooled with enormous amounts of water. A report from the University of Tulsa found that a single facility can use up to 5 million gallons per day, enough to supply thousands of homes. With governments and companies pouring billions into AI infrastructure, those resource demands are only expected to grow.

Through close collaboration between its sustainability, engineering, and design teams, Digital Realty, which operates more than 300 data centers worldwide, is working to reduce carbon emissions. That means sourcing more renewable energy, upgrading cooling systems, and rethinking where — and how — new sites are built from the ground up.

To understand how Digital Realty is preparing for that future, Business Insider spoke with Aaron Binkley, vice president of sustainability, and Shea McKeon, global head of design and engineering, about the company's sustainability strategy. In a roundtable conversation, Binkley and McKeon shared how their teams are working together to cut emissions, balance business demands with sustainability goals, and stay ahead of AI trends in the data center world.

The following has been edited for length and clarity.

Business Insider: How do your roles and teams at Digital Realty contribute to the company's overall decarbonization strategy?

Aaron Binkley: My role is global. I oversee sustainability efforts across the company, including work around renewables, decarbonization, development and construction, operations of in-service data centers, and collaboration on green finance, clean energy, energy performance, water, and more. A big part of my job is acting as a convener, bringing people together to ensure we're working collaboratively and surfacing the best ideas.

Shea McKeon: I sit in the design, engineering, and construction department, which oversees all new developments and major renovation projects around the world. We're responsible for integrating sustainability into our new builds and working with operations and energy management to bring existing facilities up to the latest standards.

Digital Realty has ambitious targets to cut direct and indirect emissions per square foot by 60% and supply chain emissions by 24% — each by 2030. How are your teams collaborating to hit those goals?

Binkley: We think about how we build, power, and operate sustainable data centers. That starts with understanding emissions across the data center lifecycle. Design and construction impact embodied carbon, or the total amount of emissions associated with the data center lifecycle, from metal extraction to construction, all the way to equipment disposal. Operationally, it's all about electricity use. So we work closely with our energy procurement and strategy teams to decarbonize our electricity supply.

For Scope 1 emissions, referring to direct emissions from sources owned by our company, we're switching from burning mainly diesel in backup generators to renewable fuels like hydrotreated vegetable oil, a diesel-like fuel which we've deployed across 17% of our operating portfolio. But 98% of our Scope 1 and 2 emissions, which include the indirect emissions from purchased energy, come from electricity, so that's the big nut we need to crack. We prioritize opportunities based on where we can make the biggest impact, designing efficient facilities and powering them with renewables.

McKeon: I'd say on the Scope 3 emissions side, which are indirect emissions from the data center supply chain, that's where my team can really have an impact. We're constantly working with our design and construction partners to make sure we're specifying the right materials to help bring those emissions down. It's always at the forefront of our designs.

We also partner with Aaron's team during our annual business reviews with key suppliers. We just wrapped those up recently, and Scope 3 emissions were one of the topics we discussed — how suppliers are performing and what we can do to improve. It's a never-ending, iterative process, but collaboration is key to making progress.

AI models are creating a demand for computing power. How are you balancing this growth with your sustainability targets?

Binkley: We've seen AI coming. It's front of mind now, and a significant portion of our bookings are AI-related. Even as our portfolio grows, we haven't pulled back on any sustainability commitments. We've made strong progress on sourcing renewables and decarbonization. We plan for that, and as AI pushes greater demand, we adjust our plans: rethink sourcing, get more integrated with acquisitions, and get involved earlier in planning and design. We're even part of early utility conversations when acquiring land, asking for clean energy solutions before we've started moving dirt.

We're also using AI internally to improve energy and water efficiency. We developed an in-house program called Apollo AI to optimize building management systems across our portfolio. The platform helps our facility engineers find hidden anomalies like clogged filters and leaky valves and suggests improvements that can help drive energy savings. We also have AI tools focused on water systems, helping us fine-tune cooling performance and water chemistry to reduce waste. We really try to squeeze every last drop of productivity out of the energy and water we consume.

McKeon: We're planning for 100% of our future buildings to have the capability to deploy liquid cooling directly to the chip, where coolant is circulated through metal plates attached to graphics processing units to remove heat. For those that don't, our engineering team is building roadmaps so we're ready if customers want to use energy-intensive technologies like generative AI that require high levels of compute power. Our modular design approach helps us learn and adapt quickly. And with liquid cooling, you don't need as much square footage per megawatt anymore. That's going to change how buildings are designed moving forward.

What's the biggest challenge in aligning technical engineering demands and sustainability goals?

Binkley: Our sustainability standards are part of our building codes, but the maximum amount of emissions that can be reduced in our facilities still varies based on customer usage. Some customers move into the data centers fully and operate at high intensity; others ramp up slowly. Modularity helps us handle those variations. The speed of growth is also a challenge — we need to stay ahead of customer demand, line up renewables, and anticipate equipment needs that take a long time to procure. We're building physical infrastructure, which takes years. You can't just flip a switch.

McKeon: We're a multi-tenant facility. We lease out space to our customers, so while we control the infrastructure, the customer ultimately controls how they operate within that space. We can design proactively with energy efficiency in mind, and we encourage best practices like airflow containment and optimal temperature settings. But at the end of the day, we don't dictate how customers use their equipment. That creates a bit of a disconnect. Our engineering team can build in sustainability features, but our operations team has to be reactive depending on how each tenant deploys. Some customers come in and run at high utilization, which is great from an efficiency standpoint. Others move in slowly or use a mix of equipment that can affect how well the facility runs. So there's a line between what we can control and what we can influence. Luckily, our operations team is very sophisticated. They use automation, data, and AI to adapt in real time, dialing in temperature and managing airflow, all to run as efficiently as possible.

Looking ahead, how are you evolving your decarbonization strategy over the next few years?

Binkley: We're not pulling back on our commitments. We'll stay the course, and perhaps even get more aggressive. Clean energy is harder to source now, but still available. We've been able to secure renewables that offer real value and reduce costs. We're also going deeper into Scope 3 with our supplier engagement program, working with vendors to reduce the carbon footprint of the materials and products we buy.

McKeon: I'd echo that. Sustainability is embedded in our design process. It's not just a benchmark — it's part of our culture. Our local teams are empowered to innovate project by project, and our global teams constantly share best practices. What works in France might be relevant in Chicago. It's a contagious, exciting environment to be in.


Business Insider
Inside KPMG's $100 million AI investment: How Google Cloud's partnership is fueling the firm's new AI services
KPMG is a professional services company and one of the Big Four accounting firms in the US. It offers audit, tax, and advisory services to organizations in multiple sectors, including healthcare, finance, banking, and more. KPMG has more than 90 offices and 36,000 employees in the US. It also operates in more than 140 countries.

Situation analysis

Steve Chase, vice chair of artificial intelligence and digital innovation at KPMG, said part of the company's business involves helping organizations across industries modernize their operations with technology, including their accounting systems and customer service. Recently, Chase said more clients have sought assistance in incorporating artificial intelligence and cloud services into their digital transformation strategies.

To help, KPMG announced an expansion of its partnership with Google Cloud in November to advance GenAI, data analytics, and cybersecurity for its clients. The expansion includes a $100 million investment in KPMG's Google Cloud practice. Chase said the goal is to tailor AI services to specific customers, business models, and industries so that these organizations can use AI to improve their businesses, such as by speeding up data analysis. The expanded Google Cloud partnership will initially focus on clients in the retail, healthcare, and financial services industries.

Key staff and partners

Chase said KPMG has been using AI for several years and has had a long-standing relationship with Google. In 2024, KPMG created the Google Cloud Center of Excellence to combine Google's AI technologies with its own expertise to help clients use AI to boost their businesses. Its latest partnership expansion involves creating new AI tools. KPMG also works with Microsoft, Amazon Web Services, and other tech companies on other AI-related projects.

AI in Action

KPMG has been using Google Cloud's Vertex AI Search, an AI development platform for building and using GenAI, internally to connect and analyze its vast amount of data. Chase said the company is using this information to develop GenAI agents for clients, such as chatbots to answer questions or tools to gather and analyze data, to address various business challenges and expand capabilities.

For example, Chase said KPMG is using Vertex AI and Gemini, a Google Cloud AI-powered assistant, to help financial services companies automate tasks that have been cumbersome for humans, including fraud detection and loan applications.

Chase added that KPMG also built an AI "store performance analyzer" for a large retailer. The tool allows the company to use automation to speed up and combine information from store locations, such as inventory levels, sales data, and details about the location, to determine how a store performs compared to other stores. "It's able to actually do a detailed analysis in a fast way," said Chase, noting the work used to be completed by a team of people and take longer. "Now, the people involved are actually reviewing the results, as opposed to doing all the manual work of pulling all the data together."

For healthcare clients, KPMG is using Google Cloud's Healthcare API to develop AI tools that help doctors improve disease detection, treatment, and overall patient care.

Did it work, and how did leaders know?

Chase said that KPMG's partnership with Google Cloud could drive $1 billion in incremental growth for the firm. "We've been super pleased with how it's going," he said. While the company couldn't disclose specifics on how it'll reach this figure, Chase said it will be a multi-year initiative that involves adding new clients and expanding the AI services it offers to existing companies.

KPMG continues to roll out new AI initiatives. In April, the company announced another expansion of its collaboration with Google Cloud on AI tools for the legal and banking industries. KPMG also announced that it's joining the Google Cloud Security Partner Program to enhance cybersecurity for its clients.


CBS News
Reddit sues Anthropic over alleged "scraping" of user comments to train Claude
Social media platform Reddit sued the artificial intelligence company Anthropic on Wednesday, alleging that it is illegally "scraping" the comments of millions of Reddit users to train its chatbot Claude.

Reddit claims that Anthropic has used automated bots to access Reddit's content despite being asked not to do so, and "intentionally trained on the personal data of Reddit users without ever requesting their consent."

Anthropic said in a statement that it disagreed with Reddit's claims "and will defend ourselves vigorously."

Reddit filed the lawsuit Wednesday in California Superior Court in San Francisco, where both companies are based.

"AI companies should not be allowed to scrape information and content from people without clear limitations on how they can use that data," said Ben Lee, Reddit's chief legal officer, in a statement Wednesday.

Reddit licensing agreements

Reddit has previously entered licensing agreements with Google, OpenAI and other companies that are paying to be able to train their AI systems on the public commentary of Reddit's more than 100 million daily users. Those agreements "enable us to enforce meaningful protections for our users, including the right to delete your content, user privacy protections, and preventing users from being spammed using this content," Lee said.

The licensing deals also helped the 20-year-old online platform raise money ahead of its Wall Street debut as a publicly traded company last year. Among those who stood to benefit was OpenAI CEO Sam Altman, who accumulated a stake as an early Reddit investor that made him one of the company's biggest shareholders.

Claude and Alexa

Anthropic was formed by former OpenAI executives in 2021, and its flagship Claude chatbot remains a key competitor to OpenAI's ChatGPT. While OpenAI has close ties to Microsoft, Anthropic's primary commercial partner is Amazon, which is using Claude to improve its widely used Alexa voice assistant.

Much like other AI companies, Anthropic has relied heavily on websites such as Wikipedia and Reddit that are deep troves of written materials that can help teach an AI assistant the patterns of human language. In a 2021 paper co-authored by Anthropic CEO Dario Amodei — cited in the lawsuit — researchers at the company identified the subreddits, or subject-matter forums, that contained the highest quality AI training data, such as those focused on gardening, history, relationship advice or thoughts people have in the shower.

Anthropic in 2023 argued in a letter to the U.S. Copyright Office that the "way Claude was trained qualifies as a quintessentially lawful use of materials," by making copies of information to perform a statistical analysis of a large body of data. It is already battling a lawsuit from major music publishers alleging that Claude regurgitates the lyrics of copyrighted songs.

But Reddit's lawsuit is different from others brought against AI companies because it doesn't allege copyright infringement. Instead, it focuses on the alleged breach of Reddit's terms of use and the unfair competition it says that breach created.

——

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP's text archives.