Latest news with #benchmarking


Fast Company
31-07-2025
- Business
- Fast Company
Pursue performance over benchmarking for ultimate success
Regression to the mean is the unfortunate outcome of most benchmarking today. The Fast Company Impact Council is an invitation-only membership community of top leaders and experts who pay dues for access to peer learning, thought leadership, and more.

'How are we doing against the industry benchmark?' This question made me recoil, and I felt my body tense up. I heard it twice in a short period of time at two different organizations. In both cases, the question led toward the same outcome: justifying current performance rather than seeking true excellence.

Ever since customer satisfaction and employee engagement surveys began, survey companies have been selling benchmark data to their clients. I have personally been involved with surveys for over three decades, and I consistently object to the practice. In short, survey providers who cannot help their clients truly address the issues discovered sell them excuses about why they are good enough. Yes, it makes me mad. Why? Because it is the ultimate deception. As long as there is a lengthy list of bottom feeders in your industry, you are safe. You are not that bad. And with this false sense of security, you will make the biggest mistake: failing to understand your customers and to create exceptional value that is worth the price and profit margins you charge.

Two cases exemplify the issue

Let me explain using two examples. In the first, the organization was celebrating an improved Net Promoter Score (NPS). Their 10-point jump sounded very respectable. But it was based on a 9% sample of its customers. 'Where are the other 91% of your customers?' I asked. 'If they refuse to respond to a 5-minute survey, how committed are they to a long-term profitable relationship?' Stop celebrating relative successes and instead examine the true essence of your customer relationships.
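The NPS arithmetic behind the first case is easy to make concrete. The formula below is the standard one (percentage of promoters, scores 9-10, minus percentage of detractors, scores 0-6); the customer counts are hypothetical, chosen only to show how a 9% response rate narrows what the score actually describes.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical base: 10,000 customers, of whom 9% answered the survey
customers = 10_000
respondents = int(customers * 0.09)  # 900 responses
silent = customers - respondents     # 9,100 customers the score says nothing about

# A strong-looking score computed from a small, self-selected slice
sample = [10, 10, 9, 8, 8, 7, 3]     # 3 promoters, 1 detractor
score = nps(sample)
```

Whatever `score` comes back, it is computed over the 900 voices that responded, not the 10,000 customers in the base, which is the article's point.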
In the second case, the company enjoyed close to a 90% response rate from its employees and celebrated beating the benchmark by double-digit points. Cause for celebration, right? Think again. The company is a top industry performer and should never compare itself to the average. This company's expectations are very different from the expectations of number 50 on the industry list. It ought to determine how to lead the industry, not follow it!

Benchmarking—the wrong perspective

Using industry benchmarks has its flaws:
- Regression to the mean: Benchmarking directs attention to the industry average, not the leaders.
- Imitation over innovation: When focusing on competitors' perspectives, one imitates their practices instead of creating new ones to delight customers.
- Competitors over customers: Focusing on competitors provides the wrong reference point for business value creation. Instead, focus on the customers.
- Blind context: Companies factor many decisions into creating and delivering value, which is then reflected in their customer scores. Profitability, market segments, and cost positioning are a few factors that can direct different strategies and therefore different customer engagement.
- Lowering standards: Benchmarking often results in relaxed performance standards. If the mean is acceptable, so is the justification to ignore the performance of competitors exceeding it.
- Future backwards: Customer scores reflect past performance and past value delivery. Companies may already be innovating, but those new developments are not part of the overall discussion because they are not reflected in the scores.

But beyond these arguments, there is a simple truth. It is the business truth we ought to consider above all. Is 50% customer satisfaction a good or bad number? If the industry benchmark is 22%, you are killing it. Keep it up. If the industry benchmark is 83%, you are in trouble. Do you understand how ridiculous this conclusion is?
Fifty percent of your customers are not happy with the value you deliver. They are at risk of switching to the competition. Shouldn't that be a big enough red flag for immediate attention? In the empowered-customer era, we can no longer afford to compromise and play with the numbers. We live in a world where every customer is a segment of one: a personal brand using our products or solutions to promote their own brand. They create content publicly and share their opinions. No, 50% customer satisfaction should not be considered a good result. Unless, of course, you decide to get rid of the other 50% of your customer base.

The pursuit of absolute performance

Benchmarking was first introduced in the late 1970s by Xerox Corporation to compare itself to its intense Japanese competitors and raise its performance bar. The original purpose was constant improvement and the adoption of best practices. But like many great concepts, it has become a method to relax the pursuit of excellence. Even for the bottom feeders in any benchmark, the goal was to become average, not exceptional. At a time when we are concerned about the impact of AI on organizations and performance, we can be confident that average work will be automated and taken over by AI. Truly exceptional value will be the survivor.

If your response level is low, it means your customers do not believe in the sincerity of the dialogue you proclaim to conduct. If your employees do not take the time to respond, your situation is dire. The current state of surveys does not seem to produce the sincere dialogue necessary to truly improve relationships with customers or employees. We need to rethink this system. The addiction to the false sense of security peddled by survey companies with benchmark data should end. It is the pursuit of relative performance over ultimate performance. Relative performance focuses on being good enough vis-à-vis the competition.
Ultimate performance means delighting and surprising customers so that they have no reason to consider any other provider. The former creates a false sense of security; the latter ensures business continuity. To achieve ultimate performance, we ought to establish a true dialogue with our customers: a dialogue in which customers take the time to provide insights, and organizations address them sincerely, completely, and on time. This is not a one-time event but rather an ongoing dialogue about performance, ensuring that we are always ahead, not of the industry average, but of our customers' expectations. In a world of ultimate performance, the real question we ought to ask is 'How can we outdo our current performance and value?'

Lior Arussy is chairperson of ImprintCX and author of Dare to Author!


Bloomberg
31-07-2025
- Business
- Bloomberg
Singapore's Keppel Seeks Comparisons to KKR in Push to Divest
Singapore's Keppel Ltd. wants to benchmark itself against leading global asset managers as it accelerates efforts to divest a significant portion of its S$14.4 billion ($11 billion) in non-core assets. The company is planning to substantially trim those holdings by 2030, following wider losses from the segment and in line with its push to be seen more as an asset manager.
Yahoo
16-07-2025
- Business
- Yahoo
Datamaran Launches Datamaran Suite to Power Always-On ESG Strategy
New IROs Benchmarking Module and Multiple Reviewers Feature Deliver Greater Oversight and Strategic Clarity

NEW YORK and LONDON, July 16, 2025 /CNW/ -- Today, Datamaran, the market leader in AI-powered risk and governance tools, announced the launch of Datamaran Suite – a powerful, modular platform designed to equip companies with everything they need to run an always-on ESG strategy. Built on the foundation of Datamaran Core, Suite introduces new tools for benchmarking, governance, and decision-making, including the highly anticipated IROs Benchmarking module. Additionally, Datamaran has enhanced its Materiality workflow with a new Multiple Reviewers feature. "Datamaran Suite brings together audit-ready materiality analysis with powerful benchmarking and collaboration tools – enabling cross-functional teams to make boardroom-ready decisions backed by insights into best practices, regulatory developments, and peer trends," said Marjella Lecourt-Alma, CEO and co-founder at Datamaran.

Introducing IROs Benchmarking: The Ultimate Peer Insights

At the center of the Suite's expanded capabilities is IROs Benchmarking, a new module that allows companies to compare their disclosed impacts, risks, and opportunities (IROs) against those of peers across industries and geographies. This intelligence enables alignment with industry norms, highlights disclosure gaps, and strengthens internal governance by equipping executives with evidence-backed insights. "Your IRO disclosures are a reflection of your strategic priorities," added Lecourt-Alma. "With Datamaran Suite, companies gain a clear view of how their disclosures stack up and, just as importantly, they get a scalable system that reflects the complexity of ESG today. Our clients tell us they value the efficiency of having materiality, regulatory monitoring, and benchmarking in one place.
At the same time, we're listening closely to those who want even more tailored insights and specialized support, and we're building with those in mind."

New Feature: Multiple Reviewers for Enhanced IRO Evaluation

Alongside IROs Benchmarking, Datamaran has also rolled out a major upgrade to its Materiality Analysis workflow, which is part of Core and Suite: the Multiple Reviewers feature. This enhancement allows companies to assign up to 25 reviewers per topic when evaluating IROs. By incorporating diverse internal perspectives, organizations benefit from more thorough and consistent assessments. The platform's built-in scoring logic calculates average scores automatically once all assessments are complete, providing a transparent and robust output for audit and governance purposes.

The Advantage of Suite: One Platform, Total Oversight

Datamaran Suite integrates Core's AI-powered materiality capabilities with new governance tools, including:
- Topics Benchmarking – Understand how peers frame ESG issues to refine strategy
- Target Setting – Develop evidence-based ESG goals rooted in stakeholder expectations
- Harbor+ Access – Join an exclusive global community of ESG professionals, with access to regulatory updates, peer roundtables, and more

All modules operate within a single secure platform, giving sustainability, risk, legal, and compliance teams total oversight and strategic clarity. Datamaran Suite is available today for all new and existing clients.

Contact: Helen Skeen, Senior PR and Content Manager

SOURCE Datamaran Limited
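The averaging behavior described for the Multiple Reviewers feature is simple to picture. The sketch below is an illustration under stated assumptions, not Datamaran's actual API: it assumes numeric topic scores and treats an assessment as incomplete until every assigned reviewer has responded.

```python
def average_score(reviews, assigned_reviewers):
    """Average reviewer scores for one IRO topic, but only once all
    assigned assessments are complete (illustrative, not Datamaran's API)."""
    if len(reviews) < assigned_reviewers:
        return None  # assessment still in progress
    return sum(reviews.values()) / len(reviews)

# Hypothetical topic with three assigned reviewers
scores = {"reviewer_1": 4, "reviewer_2": 5, "reviewer_3": 3}
final = average_score(scores, assigned_reviewers=3)  # 4.0
```

Waiting for all assigned reviewers before averaging is what makes the output auditable: a partial average would silently change as late assessments arrive.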


Forbes
08-07-2025
- Business
- Forbes
Why Human Evaluation Matters When Choosing The Right AI Model For Your Business
Ryan Kolln, CEO and Managing Director of Appen.

As enterprises increasingly integrate AI across their operations, the stakes for selecting the right model have never been higher, and many technology leaders lean heavily on standard industry benchmarks to guide their decisions. While these metrics are useful for early filtering, they don't tell the whole story. A model's leaderboard rank doesn't guarantee it will meet business needs. What's often missing is human evaluation—and, in many cases, customized, enterprise-specific benchmarks that reflect real-world usage and deployment requirements. In today's AI landscape, human insight is a necessary complement to automated benchmarking—essential not in isolation, but as part of a structured evaluation strategy.

The Limits Of Standard Benchmarking

Standard benchmarks—like MMLU, Humanity's Last Exam and MMMU—were designed to measure general model performance in controlled settings. When combined with metrics like F1 score (classification accuracy), BLEU (for translation tasks) or perplexity (for language models), standard benchmarks are useful for comparing general model performance in lab settings. But these benchmarks have limits. The complexity and diversity of business AI use cases are rapidly outgrowing the information reflected in standard benchmarking. As models approach saturation—where many achieve near-max scores—the value of standard benchmarks further diminishes. Standard benchmarking doesn't account for:

• Context And Nuance: A model can perform well on a math Olympiad dataset and still fail to retrieve relevant insights from an enterprise knowledge base.
• Alignment With Company Values: Standard benchmarks don't measure brand voice, regulatory compliance or cultural appropriateness.
• Usability And Robustness: Metrics typically don't capture how users experience outputs—or how models perform under ambiguous or adversarial inputs.

High scores on public leaderboards don't guarantee business success.
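Metrics like the F1 score mentioned above are trivial to compute, which is both their appeal and their limit. The sketch below uses the standard definition (harmonic mean of precision and recall) with made-up confusion counts; it shows how little of "business fit" a single scalar can encode.

```python
def f1_score(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)  # share of predicted positives that are correct
    recall = tp / (tp + fn)     # share of actual positives that are found
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts from a lab test set
score = f1_score(tp=90, fp=10, fn=30)  # precision 0.90, recall 0.75
```

Nothing in `score` says whether the misses were harmless edge cases or the exact queries a regulated business cannot afford to get wrong, which is the gap human evaluation fills.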
Standard benchmarks are most valuable for filtering potential candidates in initial model selection; however, these metrics should be complemented by human evaluation to select the best model for your unique use cases.

The Role Of Human Evaluation

Human evaluation fills the gaps left by automated benchmarking. Through structured assessments, human reviewers—especially domain experts—can judge model outputs on critical dimensions that standard tests miss. Developing custom benchmarks tailored to your business's unique requirements can further enhance the accuracy of your model evaluation process.

• Coherence: Are outputs logical, complete and contextually appropriate?
• Bias And Fairness: Does the model treat different demographics equitably?
• Task Suitability: Can the model handle the complexity of business-specific tasks?

Common human evaluation approaches include side-by-side comparisons (ranking two model outputs), rating scales for specific quality metrics (such as helpfulness or accuracy) and real-world task testing, where models are evaluated on actual business workflows. By embedding human judgment into model evaluation—and aligning it with custom benchmarks—companies gain a richer, more practical understanding of how an AI system will perform after deployment.

Practical Approaches To Human Evaluation

For organizations looking to implement human evaluation efficiently, several best practices can help:

• Design structured review processes. Use standardized rubrics to assess outputs across key dimensions like accuracy, safety and tone.
• Involve domain experts. Engage reviewers who understand your industry-specific language, compliance requirements and customer expectations.
• Adopt hybrid evaluation models. Combine quantitative benchmark filtering with qualitative human review to balance scalability and depth.
• Prioritize real-world tasks. Build custom test sets that mirror the scenarios your users will encounter, rather than relying solely on abstract prompts.
• Leverage evaluation platforms. Deploy tooling that supports A/B testing, red teaming and rubric-based scoring to scale human evaluation across models.

For example, a healthcare company evaluating AI for medical documentation might prioritize output accuracy, sensitivity to patient data privacy and alignment with clinical terminology—factors best judged by humans, not benchmarks alone.

When Human Evaluation Is Mission-Critical

Human evaluation is particularly vital in high-risk, high-compliance scenarios such as:

• Financial decision support
• Legal summarization
• Customer service in regulated industries
• Healthcare documentation

These are domains where even subtle model failures can trigger outsized operational, financial, legal or reputational risks.

Rethinking AI Evaluation

In an environment where AI models are powerful but complex, human evaluation is no longer optional—it's essential. Business leaders must recognize that while public benchmarks help narrow model options, they are not definitive answers. A robust model selection strategy, complemented with human evaluation and enterprise-specific benchmarks, ensures that AI models meet business needs, align with brand and regulatory standards and deliver sustainable value. As AI adoption deepens, companies that integrate human-centred evaluation into their selection and monitoring processes will be better equipped to unlock AI's full potential while mitigating risks others may overlook. When choosing the right AI model for your enterprise, don't just ask how well it scores. Ask how well it works for your people, your customers and your mission. Human insight is the bridge between technical promise and real-world performance.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

Hospitality Net
11-06-2025
- Business
- Hospitality Net
Cloudbeds to empower hoteliers with an automated data exchange agreement with STR
San Diego, CA – Cloudbeds, the premier hospitality management software platform, has today announced a data automation agreement with STR, a CoStar Group company and the leading provider of premium data benchmarking, analytics, and marketplace insights for the global hospitality industry. Through a seamless data exchange from Cloudbeds to STR, users will be able to track trends in occupancy, average daily rate (ADR), revenue per available room (RevPAR), and other essential metrics directly within their STR dashboard, facilitating data-driven decision-making.

Adam Harris, CEO of Cloudbeds, added: 'This collaboration with STR is a game-changer for the hospitality industry. By integrating real-time data from Cloudbeds to STR, we are providing hoteliers with the tools they need to stay ahead of the competition, drive profitability, and enhance the guest experience.'

Key benefits of the partnership:
- Seamless data exchange: Through an API integration, Cloudbeds customers will gain direct access to STR's extensive market data, empowering them to benchmark their hospitality performance.
- Enhanced market insights: Hoteliers can harness STR's robust benchmarking data to better understand market trends.
- Data-driven decisions: The integration will enable hoteliers to make more informed decisions regarding operations, marketing, and revenue management through real-time performance insights.
- Optimized revenue strategies: By utilizing comprehensive market data, hoteliers can adjust pricing strategies dynamically, capitalize on demand spikes, and improve their competitive position.

The integration is available today to shared customers of STR and Cloudbeds. Both Cloudbeds and STR are committed to providing ongoing support and education to help hoteliers fully leverage the data automation. To learn more, visit Cloudbeds at HSMAI Commercial Week and HITEC in Indianapolis, June 16 to June 19, 2025.

About Cloudbeds

Cloudbeds is the leading platform redefining the concept of PMS for the hospitality industry, serving tens of thousands of properties in more than 150 countries worldwide. Built from the ground up to be masterfully unified and scalable, the award-winning Cloudbeds Platform brings together built-in and integrated solutions that modernize hotel operations, distribution, guest experience, and data & analytics. Founded in 2012, Cloudbeds has been named a top PMS, Hotel Management System and Channel Manager (2021-2024) by Hotel Tech Report, World's Best Hotel PMS Solutions Provider (2022) by World Travel Awards, and recognized in Deloitte's Technology Fast 500 in 2023.

About STR

STR is the global leader in hospitality data benchmarking, analytics and marketplace insights. Founded in 1985, STR maintains a robust global presence with regional offices strategically located in Nashville, London, and Singapore. In October 2019, STR was acquired by CoStar Group, Inc. (NASDAQ: CSGP), a leading provider of online real estate marketplaces, information and analytics in the commercial and residential property markets.
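The occupancy, ADR, and RevPAR metrics tracked in the Cloudbeds-STR exchange follow standard hospitality formulas. The sketch below uses those textbook definitions with invented numbers; it is not Cloudbeds' or STR's implementation.

```python
def hotel_metrics(room_revenue, rooms_sold, rooms_available):
    """Textbook hospitality KPIs (not Cloudbeds' or STR's code)."""
    occupancy = rooms_sold / rooms_available
    adr = room_revenue / rooms_sold          # average daily rate
    revpar = room_revenue / rooms_available  # revenue per available room
    return occupancy, adr, revpar

# Hypothetical night: 80 of 100 rooms sold for $12,000 in room revenue
occ, adr, revpar = hotel_metrics(12_000, 80, 100)
# occ = 0.8, adr = 150.0, revpar = 120.0
```

RevPAR equals ADR times occupancy, which is why benchmarking dashboards treat it as the single number blending rate strategy and demand capture.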