Mixed Sustainability Messages Create Uncertain Path For Businesses


Forbes | 12-05-2025

Life seemed much easier for business leaders when they could just complain about what they perceived to be excessive and inconsistent sustainability regulations. Complex, global disclosure requirements – some mandatory, some voluntary – elaborate supply chain due diligence processes, greenhouse gas emissions reporting: they all seemed challenging enough. But when the regulators themselves started changing, editing, simplifying and clarifying already agreed upon sustainability mandates, and adjusting course on associated policy, that's when things got really complicated.
In the European Union (EU), for example, recently leaked documents have revealed continued disagreement among EU Member States over key details of the Omnibus Simplification Package, an initiative launched earlier this year to harmonize and simplify existing sustainability mandates. The mere introduction of the simplification initiative caused a stir, as most of the world thought the rules were already finalized. Now that they have been reopened, the drama has only intensified. Meanwhile, the European Financial Reporting Advisory Group (EFRAG), the organization appointed to develop the original technical guidance and now revising the European Sustainability Reporting Standards (ESRS) as part of the Omnibus Simplification Package, recently had its work plan rejected after some Council members expressed a lack of confidence in its implementation plans.
In the U.S., where the Environmental Protection Agency (EPA) has launched its biggest ever deregulatory action and the Securities and Exchange Commission has abandoned its plans to introduce climate disclosure rules, the situation for businesses has become even more complicated. While some companies have seized on the changing political winds to distance themselves from corporate sustainability initiatives, numerous investors and other stakeholders have started to push back and are still demanding clarity on climate goals and other sustainability commitments.
Suddenly, it seems that navigating sustainability has become less about following a straight-line path to compliance and more about avoiding the rocky outcroppings of hidden risks.
For businesses caught in the middle, the only real solution is to focus on the material bottom-line sustainability and resilience risks and opportunities that will affect the business. For example, even though the EU is still determining the best next steps for the Corporate Sustainability Reporting Directive (CSRD) and the Corporate Sustainability Due Diligence Directive (CSDDD), and the U.S. looks like it will not be enforcing the proposed climate disclosure rules, that does not mean climate and sustainability-related risks have gone away for businesses. In fact, it is important to note that in Europe, although the regulatory drivers have been delayed or are being simplified, the laws are still on the books, and in-scope companies around the world will no doubt eventually need to comply with whatever the final versions stipulate.
The North Star for businesses trying to find their way through this period of uncertainty needs to be concrete data identifying real business risks stemming from climate-related extreme events, carbon emissions, employee safety, human capital and rights, and environmental impact. Regardless of the specific details of any piece of legislation, these core fundamentals will affect every business's ability to be viable and profitable for the next ten, twenty or thirty years, through countless different economic and political cycles.
One good guide for companies looking to keep moving forward with their disclosure reporting is the EFRAG Voluntary Sustainability Reporting Standard for Non-Listed SMEs (VSME). Although this standard was initially developed for small businesses that wanted to voluntarily align their sustainability reporting practices with the CSRD, it is now increasingly viewed by considerably larger enterprises as a possible playbook, offering a standardized approach. Now that the reporting threshold for the CSRD has risen to include only companies with 1,000 or more employees, the VSME has become a de facto guide for those businesses that had already started preparing for CSRD before they fell out of scope as part of the Omnibus negotiations.
While a few companies may look at the current delays, policy shifts and general sense of deregulation as an excuse to slowly roll back their sustainability initiatives, the fact is that investors, employees, consumers and other stakeholders are still demanding some level of accountability and transparency. Moreover, they are looking for that information to be reported in a standardized, consistent and, most of all, comparable format from one company to the next. Companies that recognize that need and continue to do the work now to keep their sustainability houses in shipshape order will still be best positioned not only to anticipate and respond to real business risks but also to be sustainable in the true sense of the word. That means they will be viable, resilient and able to maintain and create value for many years to come.


Related Articles

AI Safety: Beyond AI Hype To Hybrid Intelligence

Forbes

15 minutes ago



The artificial intelligence revolution has reached a critical inflection point. While CEOs rush to deploy AI agents and boast about automation gains, a sobering reality check is emerging from boardrooms worldwide: ChatGPT-4o hallucinates on 61% of questions in SimpleQA, a benchmark developed by OpenAI, and even the most advanced AI systems fail basic reliability tests with alarming frequency. In a recent op-ed, Dario Amodei, Anthropic's CEO, called for regulating AI, arguing that voluntary safety measures are insufficient. Meanwhile, companies like Klarna, once poster children for AI-first customer service, are quietly reversing course on their AI agent-only approach and rehiring human representatives. These aren't isolated incidents; they're the tip of the iceberg, signaling a fundamental misalignment between AI hype and AI reality.

Today's AI safety landscape resembles a high-stakes experiment conducted without a safety net. Three competing governance models have emerged: the EU's risk-based regulatory approach, the U.S.'s innovation-first decentralized framework, and China's state-led centralized model. Yet none adequately addresses the core challenge facing business leaders: how to harness AI's transformative potential while managing its probabilistic unpredictability.

The stakes couldn't be higher. Four out of five finance chiefs consider AI "mission-critical," while 71% of technology leaders don't trust their organizations to manage future AI risks effectively. This paradox of simultaneous dependence and distrust creates a dangerous cognitive dissonance in corporate decision-making. AI hallucinations remain a persistent and worsening challenge in 2025, as artificial intelligence systems confidently generate false or misleading information that appears credible but lacks factual basis.
Recent data reveals the scale of this problem: in just the first quarter of 2025, close to 13,000 AI-generated articles were removed from online platforms due to hallucinated content, while OpenAI's latest reasoning systems show hallucination rates reaching 33% for the o3 model and a staggering 48% for o4-mini when answering questions about public figures. The legal sector has been particularly affected, with more than 30 instances documented in May 2025 of lawyers submitting evidence that featured AI hallucinations. These fabrications span domains, from journalism, where ChatGPT falsely attributed 76% of quotes from popular journalism sites, to healthcare, where AI models might misdiagnose medical conditions. The phenomenon has become so problematic that 39% of AI-powered customer service bots were pulled back or reworked due to hallucination-related errors, highlighting the urgent need for better verification systems and user awareness when interacting with AI-generated content.

The future requires a more nuanced and holistic approach than the traditional either-or perspective. Forward-thinking organizations are abandoning the binary choice between human-only and AI-only approaches. Instead, they're embracing hybrid intelligence: deliberately designed human-machine collaboration that leverages each party's strengths while compensating for their respective weaknesses. Mixus, which went public in June 2025, exemplifies this shift. Rather than replacing humans with autonomous agents, its platform creates "colleague-in-the-loop" systems in which AI handles routine processing while humans provide verification at critical decision points. This approach acknowledges a fundamental truth that the autonomous AI evangelists ignore: AI without natural intelligence is like building a Porsche and giving it to someone without a driver's license. The autonomous vehicle industry learned this lesson the hard way.
After years of promising fully self-driving cars, manufacturers now integrate human oversight into every system. The most successful deployments combine AI's computational power with human judgment, creating resilient systems that gracefully handle edge cases and unexpected scenarios. LawZero is another initiative in this direction; it seeks to promote "scientist AI" as a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic, meaning it doesn't have agency or work autonomously; instead, it behaves in response to human input and goals. The underpinning belief is that AI should be cultivated as a global public good, developed and used safely toward human flourishing. It should be prosocial.

While media attention focuses on AI hallucinations, business leaders face more immediate threats. Agency decay, the gradual erosion of human decision-making capabilities, poses a systemic risk as employees become overly dependent on AI recommendations. Mass persuasion capabilities enable sophisticated social engineering attacks. Market concentration in AI infrastructure creates single points of failure that could cripple entire industries. 47% of business leaders cite people using AI without proper oversight as one of their biggest fears in deploying AI in their organizations. This fear is well-founded: organizations implementing AI without proper governance frameworks risk not just operational failures but legal liability, regulatory scrutiny, and reputational damage.

Double literacy, investing in both human literacy (a holistic understanding of self and society) and algorithmic literacy, emerges as our most practical defense against AI-related risks. While waiting for coherent regulatory frameworks, organizations must build internal capabilities that enable safe AI deployment.
Human literacy encompasses emotional intelligence, critical thinking, and ethical reasoning: uniquely human capabilities that become more valuable, not less, in an AI-augmented world. Algorithmic literacy involves understanding how AI systems work, their limitations, and appropriate use cases. Together, these competencies create the foundation for responsible AI adoption. In healthcare, hybrid systems have begun to revolutionize patient care by enabling practitioners to spend more time in direct patient care while AI handles routine tasks, improving care outcomes and reducing burnout. Some leaders in the business world are also embracing the hybrid paradigm, with companies incorporating AI agents as coworkers gaining competitive advantages in productivity, innovation, and cost efficiency.

Practical Implementation: The A-Frame Approach

If you are a business reader and leader, you can start building AI safety capabilities in-house today using the A-Frame methodology, four interconnected practices that create accountability without stifling innovation:

Awareness requires mapping both AI capabilities and failure modes across technical, social, and legal dimensions. You cannot manage what you don't understand. This means conducting thorough risk assessments, stress-testing systems before deployment, and maintaining current knowledge of AI limitations.

Appreciation involves recognizing that AI accountability operates across multiple levels simultaneously. Individual users, organizational policies, regulatory requirements, and global standards all influence outcomes. Effective AI governance requires coordinated action across all these levels, not isolated interventions.

Acceptance means acknowledging that zero-failure AI systems are mythical. Instead of pursuing impossible perfection, organizations should design for resilience: systems that degrade gracefully under stress and recover quickly from failures.
This includes maintaining human oversight capabilities, establishing clear escalation procedures, and planning for AI system downtime.

Accountability demands clear ownership structures defined before deployment, not after failure. This means assigning specific individuals responsibility for AI outcomes, establishing measurable performance indicators, and creating transparent decision-making processes that can withstand regulatory scrutiny.

The AI safety challenge isn't primarily technical; it's organizational and cultural. Companies that successfully navigate this transition will combine ambitious AI adoption with disciplined safety practices. They'll invest in double literacy programs, design hybrid intelligence systems, and implement the A-Frame methodology as standard practice. The alternative, rushing headlong into AI deployment without adequate safeguards, risks not just individual corporate failure but systemic damage to AI's long-term potential. As the autonomous vehicle industry learned, premature promises of full automation can trigger public backlash that delays beneficial innovation by years or decades.

Business leaders face a choice: they can wait for regulators to impose AI safety requirements from above, or they can proactively build safety capabilities that become competitive advantages. Organizations that choose the latter approach, investing in hybrid intelligence and double literacy today, will be best positioned to thrive in an AI-integrated future while avoiding the pitfalls that inevitably accompany revolutionary technology transitions. The future belongs not to companies that achieve perfect AI automation but to those that master the art of human-AI collaboration. In a world of probabilistic machines, our most valuable asset remains deterministic human judgment, enhanced, not replaced, by artificial intelligence.

Why Zumiez (ZUMZ) Stock Is Falling Today

Yahoo

20 minutes ago



Shares of clothing and footwear retailer Zumiez (NASDAQ:ZUMZ) fell 7.9% in the afternoon session after the company reported mixed first quarter 2025 results: its EBITDA missed and its EPS guidance for next quarter fell short of Wall Street's estimates. On the other hand, Zumiez narrowly topped analysts' revenue expectations, and its revenue guidance for next quarter slightly exceeded Wall Street's estimates. Still, this was a softer quarter. The shares closed the day at $11.54, down 10.3% from the previous close. The stock market overreacts to news, and big price drops can present good opportunities to buy high-quality stocks. Is now the time to buy Zumiez? Access our full analysis report here; it's free. Zumiez's shares are extremely volatile and have had 32 moves greater than 5% over the last year. In that context, today's move indicates the market considers this news meaningful but not something that would fundamentally change its perception of the business. The previous big move we wrote about was 10 days ago, when the stock gained 5.8% on the news that the major indices rebounded (Nasdaq +2.0%, S&P 500 +2.0%) as President Trump postponed the planned 50% tariff on European Union imports, shifting the start date to July 9, 2025. Companies with substantial business ties to Europe likely felt some relief, as the delay reduced near-term cost pressures and preserved cross-border demand. Zumiez is down 39% since the beginning of the year, and at $11.58 per share, it is trading 60.2% below its 52-week high of $29.11 from August 2024. Investors who bought $1,000 worth of Zumiez's shares 5 years ago would now be looking at an investment worth $383.70. Unless you've been living under a rock, it should be obvious by now that generative AI is going to have a huge impact on how large corporations do business. While Nvidia and AMD are trading close to all-time highs, we prefer a lesser-known (but still profitable) semiconductor stock benefiting from the rise of AI.
Click here to access our free report on our favorite semiconductor growth story.

Why Is Byrna (BYRN) Stock Rocketing Higher Today

Yahoo

25 minutes ago



Shares of non-lethal weapons company Byrna (NASDAQ:BYRN) jumped 20.5% in the afternoon session after the company reported strong preliminary Q2 2025 results, with sales expected to be roughly $28.5 million, representing a 41% increase from $20.3 million in the fiscal second quarter of 2024. The promising result was attributed to strong early demand for the new Byrna Compact Launcher (CL), which launched on May 1, along with meaningful channel expansion. The shares closed the day at $31.34, up 18% from the previous close. Is now the time to buy Byrna? Access our full analysis report here; it's free. Byrna's shares are extremely volatile and have had 71 moves greater than 5% over the last year. But moves this big are rare even for Byrna and indicate this news significantly impacted the market's perception of the business. The previous big move we wrote about was 10 days ago, when the stock gained 5.9% after the major indices rebounded (Nasdaq +2.0%, S&P 500 +1.5%) as President Trump postponed the planned 50% tariff on European Union imports, shifting the start date to July 9, 2025. Companies with substantial business ties to Europe likely felt some relief, as the delay reduced near-term cost pressures and preserved cross-border demand. Byrna is up 11.2% since the beginning of the year, and at $31.50 per share, it is trading close to its 52-week high of $34.19 from February 2025. Investors who bought $1,000 worth of Byrna's shares 5 years ago would now be looking at an investment worth $5,212. Unless you've been living under a rock, it should be obvious by now that generative AI is going to have a huge impact on how large corporations do business. While Nvidia and AMD are trading close to all-time highs, we prefer a lesser-known (but still profitable) semiconductor stock benefiting from the rise of AI. Click here to access our free report on our favorite semiconductor growth story.
