
How To Save Your Business $443,000 Through The Good Data Dividend
At a time when organizations are dedicating vast resources and staff hours to exploring the applications of artificial intelligence in the workplace, data integrity has become a make-or-break factor for business success. AI has the potential to transform the workplace experience and drive significant improvements in day-to-day business operations. However, the true value of these AI tools can only be realized if the data feeding these systems is accurate, reliable and responsibly sourced. Yet research from Iron Mountain and FT Longitude, as well as insights from Prosper Insights & Analytics, reveals most organizations are falling short, resulting in major costs.
The Data Integrity Gap in the U.S.
With the increased focus on AI, leaders around the world are beginning to recognize the impact of poor data integrity on achieving their AI goals. The United States, in particular, faces unique challenges due to a lack of robust data. According to Iron Mountain's report, 'Responsibly Sourced Data: AI's Crucial Ingredient,' 40% of U.S. respondents say they experienced misguided strategic decisions due to data integrity flaws over the past 12 months, making this the most cited impact in the country. This considerably surpasses the global average of 27%, indicating U.S. organizations are at greater risk of poor decision-making from data flaws.
This issue is not just a technical problem; it is a strategic one as well. Poor data integrity leads to flawed decisions that can reverberate throughout an organization, undermining business outcomes and eroding trust internally and among customers, partners and regulators. According to a recent Prosper Insights & Analytics survey, 40% of U.S. adults and 43% of executives worry that AI systems can produce incorrect or misleading information. These so-called 'hallucinations' can dramatically affect decision-making, leading to far-reaching business, compliance and reputational losses.
Chart: Prosper Insights & Analytics, Concerns About Recent Developments in AI
The High Cost of Bad Data
These gaps in data integrity can also significantly impact earnings by creating inefficiencies, compliance failures, and poor AI outputs. According to the research from Iron Mountain and FT Longitude, data integrity flaws cost U.S. organizations, on average, $443,550 over the last year.
However, many organizations are already addressing these challenges and implementing strategies to mitigate the risk and monetary losses associated with data integrity gaps. According to McKinsey & Co.'s report, 'The State of AI: How Organizations Are Rewiring to Capture Value,' companies are actively managing risks related to inaccuracy, cybersecurity and intellectual property infringement. These three generative AI-related vulnerabilities are the most cited reasons for negative consequences within respondents' organizations.
The Good Data Dividend Is Real
On the other side of these challenges is what Iron Mountain has identified as the 'good data dividend.' Organizations and business leaders investing in robust information management systems are realizing extraordinary results. The research from Iron Mountain and FT Longitude found that U.S. organizations reported a 10.8% revenue increase over the last 12 months, equating to $2.2 billion per organization, a direct result of their information management systems and strategies. This figure surpasses the global average of 10.5%.
'With the rise of open-source and specialized AI models, data integrity, transparency and trust are more critical than ever,' said Narasimha Goli, chief technology officer at Iron Mountain. 'At Iron Mountain, we are investing in solutions such as our InSight Digital Experience Platform (DXP) to help our customers get their information ready for use in generative AI and other AI-powered applications. By ensuring their data is being sourced responsibly, organizations can harness the full potential of their information to drive intelligent decision-making and unlock new growth opportunities.'
The 'good data dividend' equated to a total global revenue gain of $72 trillion, or average revenue growth of $1.9 billion per organization. These benefits are being realized across sectors such as finance, retail and manufacturing. Built on trustworthy data, these AI tools are helping organizations achieve higher productivity, improve customer experience and create new revenue streams.
What sets successful leaders apart is their ability to actively manage the quality, security and traceability of the data they collect. They regularly audit data streams, eliminate redundant or obsolete data, set up automated validation checkpoints and embed compliance and security mechanisms at the core of every workflow.
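To make one of those practices concrete, here is a minimal sketch of an automated validation checkpoint in Python. The field names, checks and quarantine policy are illustrative assumptions, not anything prescribed by the Iron Mountain research; the point is that records are vetted before they ever reach a downstream AI system.

```python
from dataclasses import dataclass

@dataclass
class ValidationResult:
    record_id: str
    passed: bool
    issues: list

# Hypothetical schema: the fields every inbound record must carry.
REQUIRED_FIELDS = {"customer_id", "timestamp", "amount"}

def validate_record(record: dict) -> ValidationResult:
    """Run basic integrity checks on a single record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if "amount" in record and not isinstance(record["amount"], (int, float)):
        issues.append("amount is not numeric")
    return ValidationResult(str(record.get("customer_id", "unknown")), not issues, issues)

def checkpoint(records: list) -> list:
    """Quarantine failing records instead of passing them downstream."""
    clean, quarantined = [], []
    for rec in records:
        result = validate_record(rec)
        (clean if result.passed else quarantined).append(rec)
    print(f"{len(clean)} records passed, {len(quarantined)} quarantined")
    return clean

# Example: the second record is held back rather than fed to an AI system.
checkpoint([
    {"customer_id": "C-1", "timestamp": "2025-01-01T00:00:00Z", "amount": 99.5},
    {"customer_id": "C-2", "amount": "not-a-number"},
])
```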
Turning Information Management Into a Competitive Advantage
How can organizations close the data integrity gap and capture the good data dividend? Iron Mountain has identified several key information management practices that every organization should implement to drive superior AI outcomes: regular audits of data streams, elimination of redundant or obsolete data, automated validation checkpoints, and compliance and security mechanisms embedded at the core of every workflow.
The Bottom Line
Organizations looking to drive successful AI integration must focus on data integrity. In the United States, businesses must ensure they are bridging the data integrity gap or risk falling behind their global peers. By practicing best-in-class information management strategies, companies can avoid the monetary costs associated with bad data and potentially unlock billions in new value.
Invest in your data, and the good data dividend will follow. In an era of immense data creation, the real winners will be those who manage it best.
Related Articles
Yahoo · 27 minutes ago
Famed market bear Albert Edwards warns of an 'everything bubble' in US stocks and home prices that could soon pop
Albert Edwards warns of a potential US stock and housing market bubble. Rising interest rates and Japan's fiscal concerns could trigger market corrections, he said.

Société Générale's Albert Edwards, famed for calling the dot-com bubble leading up to the 2000 crash, is again warning investors of a potentially painful plunge ahead. In his latest note to clients this week, Edwards said US stocks and home prices are in an "everything bubble" that he thinks could soon pop.

Stock valuations are indeed steep. The Shiller cyclically-adjusted price-to-earnings ratio sits at 38, one of its highest levels ever, and both the trailing and forward 12-month PE ratios of the S&P 500 are historically high.

To Edwards, this doesn't sit well with the fact that long-term interest rates have been on the rise. Rising long-end government bond yields tend to weigh on stock-market valuations as investors can find attractive returns without taking on the high level of risk in the stock market. Yet US stocks have seen a robust rally in recent years, gaining 78% since October 2022 lows. The market's high valuations have kept future estimated equity-market yields low. When stocks are more cheaply valued, they can expect higher future returns, and vice versa.

"It is notable how the US equity market has been able to sustain nose-bleed high valuations despite longer bond yields grinding higher," Edwards wrote. "I don't expect it'll be able to ignore it much longer."

On housing, Edwards said that the home price-to-income ratio in the US has been virtually flat over the last few years following the pandemic bump, while the ratio has dropped in countries like the UK and France. "The US is the only market in which house price/income ratios have NOT de-rated since 2022 as bond yields have risen. Is the US housing market also exceptional relative to Europe? No, it's nonsense and, in time, investors will come to claim they knew that all along," Edwards wrote.

As for what could cause the potential bubbles in US stocks and home prices to burst, Edwards said to watch Japan. "In the wake of the ruling party coalition losing its Upper House majority, concerns in the bond market about the risks of further fiscal easing and high inflation are growing," he wrote. Higher inflation in Japan could mean higher interest rates and a further unwinding of the Japanese yen carry trade, in which foreign investors borrowed cheaply in yen and converted to dollars to buy higher-yielding US assets. In 2024, the Bank of Japan unexpectedly hiked rates, roiling global markets as investors liquidated assets they had bought with borrowed yen. In May, Edwards warned rising interest rates in Japan could cause a "global financial Armageddon."

Edwards publishes his notes, which regularly express a bearish outlook, under Société Générale's "alternative view," separate from the bank's house view. "A lot of clients who totally disagree with me like to read my stuff," he told Business Insider in May. "It's a reality check."
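For readers unfamiliar with the valuation gauge Edwards leans on, here is a minimal sketch of how the Shiller CAPE ratio is computed: the current price divided by the 10-year average of inflation-adjusted earnings. The figures below are placeholders chosen to land near the quoted reading of 38, not actual market data.

```python
def cape(price: float, real_earnings_10y: list[float]) -> float:
    """Cyclically adjusted P/E: price over the 10-year mean of real earnings."""
    return price / (sum(real_earnings_10y) / len(real_earnings_10y))

# Hypothetical index at 5,000 with ten years of real earnings averaging 132:
print(round(cape(5_000, [120, 125, 128, 130, 131, 133, 135, 137, 139, 142]), 1))  # ~37.9
```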


The Hill · 28 minutes ago
Tech companies building massive AI data centers should pay to power them
The projected growth in artificial intelligence and its unprecedented demand for electricity to power enormous data centers present a serious challenge to the financial and technical capacity of the U.S. utility system. Appreciation for the sheer magnitude of that challenge has gotten lost as forecast after forecast projects massive growth in electric demand over the coming decade. The idea of building a data center that will draw 1 gigawatt of power or more, an amount sufficient to serve over 875,000 homes, is in the plans of so many data center developers and so routinely discussed that it no longer seems extraordinary.

The challenge, when viewed in the aggregate, may be overwhelming. A recent Wood Mackenzie report identified 64 gigawatts of confirmed data-center-related power projects currently on the books, with another 132 gigawatts potentially to be developed. Sixty-four gigawatts is enough to power 56 million homes — more than twice the population of the 15 largest cities in America.

The U.S. electric utility system is struggling to meet the projected energy needs of the AI industry. The problem is that many utilities do not have the financial and organizational resources to build new generating and transmission facilities at the scale and on the timeline data center developers desire.

The public policy question now on the table is who should pay for and bear the risk of these massive mega-energy projects. Will it be the AI developers such as Amazon, Microsoft, Meta and Alphabet — whose combined market value is seven times that of the entire S&P 500 Utility Sector — or the residential and other customers of local electric utilities? The process to answer this and related questions is underway in the hallways of the U.S. Congress, at the Federal Energy Regulatory Commission and other federal agencies, in tariff proceedings before state regulatory authorities and in public debate at the national, state and local levels. Whether they are developed at the federal, state or local level, the following values and objectives should form the core of public policy in this area.

Data center developers that require massive amounts of electric power (e.g., above 500 MW or another specified level) should be required to pay for building new generating and transmission facilities. The State of Texas recently enacted legislation that requires data centers and other new large users to fund the infrastructure necessary to serve their needs. Although it is customary to spread the cost of new facilities across the user base of a utility, the demands that data center developers are placing on utility systems across the country are sufficiently extraordinary to justify allocating the costs of new facilities to those developers. Moreover, data center developers have the financial resources to cover those costs and incorporate them into the rates charged to users of their AI services.

The developers of large data centers should bear the risk associated with new utility-built generating and transmission facilities, not the utility. As an example of such a policy, the Public Utility Commission of Ohio just approved a compromise proposed by American Electric Power of Ohio that would require data centers with loads greater than 1 gigawatt and mobile data centers over 25 megawatts to commit to 10-year electric service contracts and pay minimum demand charges based on 85 percent of their contract capacity, up from 60 percent under the utility's current general service tariff.
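The capacity figures above are easy to sanity-check. Here is a minimal sketch using the article's own ratio of roughly 875,000 homes per gigawatt; the 1,200 MW contract in the demand-charge example is a hypothetical illustration, not a figure from the Ohio proceeding.

```python
HOMES_PER_GW = 875_000  # homes served per gigawatt, the ratio the article cites

confirmed_gw = 64       # confirmed projects (Wood Mackenzie)
potential_gw = 132      # additional potential projects

print(f"{confirmed_gw} GW -> {confirmed_gw * HOMES_PER_GW / 1e6:.0f} million homes")  # 56 million
print(f"{confirmed_gw + potential_gw} GW -> "
      f"{(confirmed_gw + potential_gw) * HOMES_PER_GW / 1e6:.0f} million homes")

# Minimum demand charge under the AEP Ohio compromise, for a hypothetical
# 1,200 MW contract: the data center pays for at least 85% of contracted
# capacity whether or not it uses it (up from 60% under the prior tariff).
contract_mw = 1_200
billed_floor_mw = contract_mw * 0.85
print(f"Billed for at least {billed_floor_mw:.0f} MW of {contract_mw} MW contracted")
```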
Another option, included in the Texas legislation, requires significant up-front payments early in the planning process and mandates that data center developers disclose where else they may have simultaneously placed demands for power. It is not unusual for data center requests for service to be withdrawn once the developer decides on the best location and package of incentives. Data center developers have the financial capacity and ability to manage this risk; utilities do not.

Generating facilities that are co-located at large data centers should be integrated with the local utility electric grid, with appropriate cost allocation. Although a few projects have examined the option of a co-located power generation 'island' fully independent of the grid, most projects intend to interconnect with the grid system for back-up power and related purposes. Properly managed, this interconnection could be advantageous for both the data center and the utility system, provided that costs are appropriately allocated across the system.

The U.S. government should continue to support the development of nuclear technology, including small modular reactors. U.S. utilities do not have the financial resources to assume the risk of building new nuclear-powered generating facilities. The emergence of a new set of customers, data center developers with enormous needs for electric power and deep pockets, changes the equation. The U.S. government has provided billions of dollars of support for new nuclear technologies and should continue to do so for the purpose of bringing their costs down.

The U.S. government should continue to support energy efficiency improvements at data centers. Data centers use massive amounts of power for running servers, cooling systems, storage systems, networking equipment, backup systems, security systems and lighting. The National Renewable Energy Laboratory has developed a 'handbook' of measures that data centers can implement to reduce energy usage and achieve savings. In addition, there now are strong market forces to develop new super-efficient chips that will lower the unit costs of training and using AI models. The U.S. government should help accelerate the development of these chips given their leverage on U.S. electricity demand.

The stakes in this public policy debate over our energy future could not be higher. If we get these policies right, AI has the potential to remake the U.S. economy and the energy infrastructure of this country. If we get it wrong, the push to build new generating and transmission facilities to provide gigawatts of power has the potential to overwhelm the financial and operational capacity of our electric utility system, impose burdensome rate increases on homeowners and businesses, undercut efforts to reduce the use of fossil fuels to meet climate-related goals and compromise the reliability of our electricity grid for years to come.

David M. Klaus is a consultant on energy issues who served as deputy undersecretary of the U.S. Department of Energy during the Obama administration and as a political appointee to two other Democratic presidents. Mark MacCarthy is the author of 'Regulating Digital Industries' (Brookings, 2023), an adjunct professor at Georgetown University's Communication, Culture & Technology Program, a nonresident senior fellow at the Institute for Technology Law and Policy at Georgetown Law and a nonresident senior fellow at the Brookings Institution.


Forbes · 29 minutes ago
7 Business Lessons For AI
When considering any implementation of AI in a business, leadership teams have a weighty responsibility. This is an approach that people want to get right. They face a few challenges: the technology is nascent, there are few road maps available for companies, and many people instinctively distrust large language models to automate processes. So what's to be done?

A Leader's Perspective

Here's where I recently got some insights from a paper written by Lidiane Jones, who previously led Slack and served as CEO of the dating platform Bumble. Jones breaks down some of the aspects of AI implementation that C-suite people are looking at.

Data Transfers and Governance

Jones points out that transformations like ETL (extract, transform, load) and ELT (extract, load, transform) predated AI, but data is still siloed in many cases. One solution Jones touts is an 'omnichannel data strategy' – this, she writes, 'will ensure privacy and security of your data, ease of access for business applications, offer real time capabilities and can integrate with your everyday tools.'

Compliance with Financial Data Rules

Jones also speaks about the need to focus on compliance in some areas. 'Every company has critical financial data, subject to audit, regulation and compliance that must be carefully protected,' she writes. 'Normally, for more scaled companies, this data sits on an ERP system. Every CEO, CFO, COO and CRO needs critical real-time insight from these systems, to determine how the business is performing against plans, how expenses are tracking against the budget or how a change in employee investment … will affect the overall cost structure and velocity of the business, among numerous other capital allocation considerations.'

Business Intelligence for the Win

In terms of general business intelligence, Jones spins a story to illustrate: 'Imagine a Sales Executive who develops a multi-year high trust relationship with one of a company's most important large customer, and she decides to leave the company for a better career opportunity,' she writes. 'Historically, though there will be good information about that customer and notes from this leader, much of her institutional knowledge leaves with her. Corporate human knowledge exists within organizations, and is shaped by the culture, people and business processes.'

She then addresses the role of workflow tools and other platform resources. 'Collaboration software of all kinds like Slack, Google Workspace and Teams … have a lot of people's knowledge embedded in them that is hardly ever nurtured,' she adds. 'Unstructured data like this is highly effective in training LLMs, and can provide opportunities that haven't existed before - like capturing the sentiment of what this large customer loved the most about their relationship with this Sales Executive.'

She also gave a nod to the potential difficulties, conceding that 'it might feel daunting to expand data strategy planning to be as broad as this,' but notes that partnering with vendors and other firms can help. 'Phasing and prioritizing how you bring more of your data into a single system is key to making progress and capturing business value along the way,' she writes.
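To make the phased, single-system consolidation Jones describes a bit more concrete, here is a minimal ELT-style sketch: raw records from two silos are landed side by side first, and transformation happens downstream against the consolidated store. The source names and schema are illustrative assumptions, not anything drawn from Jones's paper.

```python
import json
import sqlite3

def load_raw(conn: sqlite3.Connection, source: str, records: list[dict]) -> None:
    """Load records untransformed, tagged with the silo they came from."""
    conn.executemany(
        "INSERT INTO raw_events (source, payload) VALUES (?, ?)",
        [(source, json.dumps(r)) for r in records],
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (source TEXT, payload TEXT)")

# Phase 1: land raw data from two hypothetical silos ("EL" of ELT).
load_raw(conn, "crm", [{"customer": "acme", "note": "renewal call scheduled"}])
load_raw(conn, "support", [{"customer": "acme", "ticket": 4821, "status": "open"}])

# Phase 2: transform downstream, against the consolidated store ("T").
for source, payload in conn.execute("SELECT source, payload FROM raw_events"):
    print(source, json.loads(payload))
```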
Agents do the Agenting

Jones also makes an important point about the use of AI agents. It goes sort of like this: we're used to computers doing calculations, and digesting and presenting information, but these new systems can actually brainstorm on their own to change processes. 'In many instances, agents can optimize workflows themselves as they determine more effective ways to get the work done,' she writes.

A Story of Implementation

Citing ChatGPT's meteoric rise, Jones talked about using these technologies in the context of her work at Slack, which is, after all, a business communication tool. She chronicled the firm's connection with companies like OpenAI circa 2017. 'At the time, when I was leading Slack, it was exciting to collaborate with OpenAI, Cohere and Anthropic to use their LLMs to help our customers with some of the most challenging productivity challenges at Slack,' she writes. The challenges she enumerates: 'finding a conversation they knew they had but couldn't remember in what channel, or help customers manage the large amount of messages they received with summaries and prioritization, optimize search for information discovery and so much more.'

Then, too, the company created tools. 'We introduced Slack Canvas based templates to help our customers quickly create content based on their corporate information, and captured Huddles' meeting notes and action items, and that was just the beginning,' she explains. 'The capabilities of LLMs gave us the opportunity to solve real-world customer challenges in a pleasant and insightful way, while maintaining the experience of the Slack brand.'

Calling this kind of thing the 'table stakes' of the new corporate world, Jones walks us through many of the way stations on the path to what she calls 'co-intelligence.' That includes workflow automation, agentic AI, multi-agent systems and new interfaces.

Our AI Brethren

Here's one way that Jones characterizes managing an AI: 'Considering autonomous agents as truly "digital workers" can be a helpful framing for questions we already think of today with "human workers" like: how does the employer track the quality of the work done? What systems does the digital worker have access to? If the company is audited, how do we track what steps and actions were taken by the digital worker? If the digital worker's actions turn malicious, how do we terminate the agent?' (A sketch of what such an audit trail might look like in code appears at the end of this piece.)

As for the extent of agent autonomy, Jones suggests that fully autonomous agents will be able to handle a complex or 'scoped' job on their own, conceding, though, that 'even an autonomous agent, like a human, needs a job scope and definition - or a set of instructions - on the job at hand.' This new world is one we will have to reckon with soon.

Four Principles of Leadership

Jones finished with a set of ideas for those who are considering these kinds of deployments:

1. Be hands-on: as a leader, stay close to what's happening.
2. Work with vendors and partners; as noted above, this is a plus.
3. Build an AI-first culture with AI-native projects.
4. Find the value for your company.

I found this to be pretty useful for someone who is contemplating a big move in the age of AI. Some of the best ideas for leadership can be gleaned from TED talks, conferences and these kinds of personal papers on experience with the industry.
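As promised above, here is a minimal sketch of the audit trail Jones's 'digital worker' questions imply: every action an agent takes is logged with enough context to answer 'what steps were taken?', and termination revokes its ability to act. The class, its fields and the example actions are illustrative assumptions, not from Jones's paper.

```python
import datetime
import json

class AuditedAgent:
    """A stub 'digital worker' whose every action leaves an audit record."""

    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self.audit_log: list[dict] = []
        self.terminated = False

    def act(self, action: str, target: str) -> None:
        """Record the step before (stubbed) execution; refuse if terminated."""
        if self.terminated:
            raise RuntimeError(f"{self.agent_id} has been terminated")
        self.audit_log.append({
            "agent": self.agent_id,
            "action": action,
            "target": target,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })

    def terminate(self) -> None:
        """Jones's last question: a kill switch revoking the agent's ability to act."""
        self.terminated = True

agent = AuditedAgent("digital-worker-7")
agent.act("summarize", "q3-renewal-pipeline")
agent.act("draft_email", "acme-account-team")
agent.terminate()
print(json.dumps(agent.audit_log, indent=2))  # the record an auditor would review
```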