
EU Financial Regulator Submits Timeline For Reduced Sustainability Reporting Standards
On April 25, the European Financial Reporting Advisory Group (EFRAG) submitted a work plan for the rewrite of the European Sustainability Reporting Standards. The plan responds to a letter from Maria Luís Albuquerque, the EU Commissioner for Financial Services, asking for updated recommendations to comply with the Omnibus Simplification Package's proposed reductions to sustainability reporting requirements. A vote by EFRAG's Sustainability Reporting Board (SRB) in mid-April failed to garner enough support, giving hope to sustainability advocates. However, the plan the SRB ultimately adopted meets the requested completion deadline of October 31.
As part of the European Green Deal, the EU proposed a series of regulations aimed at controlling businesses' climate-related activities. In 2022, the Corporate Sustainability Reporting Directive was adopted to create requirements for businesses to report greenhouse gas emissions and other environmental, social, and governance actions. The CSRD called for the drafting of European Sustainability Reporting Standards to create the regulatory framework for reporting. That responsibility was delegated to EFRAG.
EFRAG released the first round of ESRS in late 2022. The European Commission officially adopted them in July 2023, and EFRAG was tasked with drafting sector-specific ESRS and standards for non-EU companies. However, companies struggled to implement the first round of ESRS, prompting the Commission to delay further development by EFRAG and shift its focus to drafting additional guidance.
By the summer of 2024, the tide had shifted on sustainability and other green initiatives. During the 2024 European Parliament elections, the regulatory burden on businesses became a major theme. During the campaigns, the European Green Deal took the majority of the blame for the EU's faltering economy. The elections resulted in a shift to the right, with environmentally focused parties losing seats.
In February, the European Commission adopted a proposal to drastically reduce sustainability reporting requirements in the European Union, including a rewrite of the ESRS. While the proposal has yet to be adopted by the European Parliament, the Commission is moving forward by requesting that EFRAG begin the rewrite in anticipation of final passage; its instructions indicate that it expects the reforms to pass by the end of 2025.
In the letter sent on March 25, Albuquerque stated, 'as you will be aware, on 26 February the Commission adopted a first 'omnibus' package of proposals to simplify EU rules, boost competitiveness, and unlock additional investment capacity. You will also have seen that, as part of this initiative, we propose to adopt a delegated act to revise and simplify the existing European Sustainability Reporting Standards (ESRS).'
The work plan adopted by EFRAG's SRB states that it will immediately begin work to 'establish a vision on actionable levers for substantial simplification.' Initial input is being sought from stakeholders via an online survey open until May 6. Mid-May through July will be focused on 'drafting and approving the Exposure Drafts amending ESRS.' An Exposure Draft, or public draft of the proposal, will be published in August. EFRAG will hold a public consultation in September, and the final draft will be submitted to the Commission in October.
While the work from EFRAG is being conducted at the direction of the Commission, the final proposal to reduce CSRD requirements still faces debate in the European Parliament. However, the tight deadlines and clear direction from the Commission indicate confidence that sustainability reporting requirements will be significantly reduced by the end of the year.
Related Articles


Forbes
AI Safety: Beyond AI Hype To Hybrid Intelligence
The artificial intelligence revolution has reached a critical inflection point. While CEOs rush to deploy AI agents and boast about automation gains, a sobering reality check is emerging from boardrooms worldwide: GPT-4o hallucinates on 61% of questions in SimpleQA, a factuality benchmark developed by OpenAI, and even the most advanced AI systems fail basic reliability tests with alarming frequency. In a recent op-ed, Dario Amodei, Anthropic's CEO, called for regulating AI, arguing that voluntary safety measures are insufficient. Meanwhile, companies like Klarna — once poster children for AI-first customer service — are quietly reversing course on their AI-agent-only approach and rehiring human representatives. These aren't isolated incidents; they're the tip of the iceberg, signaling a fundamental misalignment between AI hype and AI reality.
Today's AI safety landscape resembles a high-stakes experiment conducted without a safety net. Three competing governance models have emerged: the EU's risk-based regulatory approach, the US's innovation-first decentralized framework, and China's state-led centralized model. Yet none adequately addresses the core challenge facing business leaders: how to harness AI's transformative potential while managing its probabilistic unpredictability.
The stakes couldn't be higher. Four out of five finance chiefs consider AI "mission-critical," while 71% of technology leaders don't trust their organizations to manage future AI risks effectively. This paradox — simultaneous dependence and distrust — creates a dangerous cognitive dissonance in corporate decision-making. AI hallucinations remain a persistent and worsening challenge in 2025: artificial intelligence systems confidently generate false or misleading information that appears credible but lacks factual basis.
Recent data reveals the scale of this problem: in just the first quarter of 2025, close to 13,000 AI-generated articles were removed from online platforms due to hallucinated content, while OpenAI's latest reasoning systems show hallucination rates reaching 33% for the o3 model and a staggering 48% for o4-mini when answering questions about public figures. The legal sector has been particularly affected, with more than 30 instances documented in May 2025 of lawyers using evidence that featured AI hallucinations. These fabrications span domains, from journalism, where ChatGPT falsely attributed 76% of quotes from popular journalism sites, to healthcare, where AI models might misdiagnose medical conditions. The phenomenon has become so problematic that 39% of AI-powered customer service bots were pulled back or reworked due to hallucination-related errors, highlighting the urgent need for better verification systems and user awareness when interacting with AI-generated content.
The future requires a more nuanced and holistic approach than the traditional either-or perspective. Forward-thinking organizations are abandoning the binary choice between human-only and AI-only approaches. Instead, they're embracing hybrid intelligence — deliberately designed human-machine collaboration that leverages each party's strengths while compensating for their respective weaknesses. Mixus, which went public in June 2025, exemplifies this shift. Rather than replacing humans with autonomous agents, its platform creates "colleague-in-the-loop" systems where AI handles routine processing while humans provide verification at critical decision points. This approach acknowledges a fundamental truth that the autonomous AI evangelists ignore: AI without natural intelligence is like building a Porsche and giving it to someone without a driver's license. The autonomous vehicle industry learned this lesson the hard way.
After years of promising fully self-driving cars, manufacturers now integrate human oversight into every system. The most successful deployments combine AI's computational power with human judgment, creating resilient systems that gracefully handle edge cases and unexpected scenarios. LawZero is another initiative in this direction; it seeks to promote "scientist AI" as a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic, meaning it doesn't have agency or work autonomously, but instead behaves in response to human input and goals. The underpinning belief is that AI should be cultivated as a global public good — developed and used safely towards human flourishing. It should be prosocial.
While media attention focuses on AI hallucinations, business leaders face more immediate threats. Agency decay — the gradual erosion of human decision-making capabilities — poses a systemic risk as employees become overly dependent on AI recommendations. Mass persuasion capabilities enable sophisticated social engineering attacks. Market concentration in AI infrastructure creates single points of failure that could cripple entire industries. 47% of business leaders cite people using AI without proper oversight as one of their biggest fears in deploying AI in their organizations. This fear is well-founded: organizations implementing AI without proper governance frameworks risk not just operational failures but legal liability, regulatory scrutiny, and reputational damage.
Double literacy — investing in both human literacy (a holistic understanding of self and society) and algorithmic literacy — emerges as our most practical defense against AI-related risks. While waiting for coherent regulatory frameworks, organizations must build internal capabilities that enable safe AI deployment.
Human literacy encompasses emotional intelligence, critical thinking, and ethical reasoning — uniquely human capabilities that become more valuable, not less, in an AI-augmented world. Algorithmic literacy involves understanding how AI systems work, their limitations, and appropriate use cases. Together, these competencies create the foundation for responsible AI adoption. In healthcare, hybrid systems have begun to revolutionize patient care by letting practitioners spend more time in direct patient care while AI handles routine tasks, improving care outcomes and reducing burnout. Some leaders in the business world are also embracing the hybrid paradigm, with companies incorporating AI agents as coworkers gaining competitive advantages in productivity, innovation, and cost efficiency.
Practical Implementation: The A-Frame Approach
If you are a business leader, you can start building AI safety capabilities in-house today using the A-Frame methodology: four interconnected practices that create accountability without stifling innovation.
Awareness requires mapping both AI capabilities and failure modes across technical, social, and legal dimensions. You cannot manage what you don't understand. This means conducting thorough risk assessments, stress-testing systems before deployment, and maintaining current knowledge of AI limitations.
Appreciation involves recognizing that AI accountability operates across multiple levels simultaneously. Individual users, organizational policies, regulatory requirements, and global standards all influence outcomes. Effective AI governance requires coordinated action across all these levels, not isolated interventions.
Acceptance means acknowledging that zero-failure AI systems are mythical. Instead of pursuing impossible perfection, organizations should design for resilience — systems that degrade gracefully under stress and recover quickly from failures.
This includes maintaining human oversight capabilities, establishing clear escalation procedures, and planning for AI system downtime.
Accountability demands clear ownership structures defined before deployment, not after failure. This means assigning specific individuals responsibility for AI outcomes, establishing measurable performance indicators, and creating transparent decision-making processes that can withstand regulatory scrutiny.
The AI safety challenge isn't primarily technical — it's organizational and cultural. Companies that successfully navigate this transition will combine ambitious AI adoption with disciplined safety practices. They'll invest in double literacy programs, design hybrid intelligence systems, and implement the A-Frame methodology as standard practice. The alternative — rushing headlong into AI deployment without adequate safeguards — risks not just individual corporate failure, but systemic damage to AI's long-term potential. As the autonomous vehicle industry learned, premature promises of full automation can trigger public backlash that delays beneficial innovation by years or decades.
Business leaders face a choice: they can wait for regulators to impose AI safety requirements from above, or they can proactively build safety capabilities that become competitive advantages. Organizations that choose the latter approach — investing in hybrid intelligence and double literacy today — will be best positioned to thrive in an AI-integrated future while avoiding the pitfalls that inevitably accompany revolutionary technology transitions. The future belongs not to companies that achieve perfect AI automation, but to those that master the art of human-AI collaboration. In a world of probabilistic machines, our most valuable asset remains deterministic human judgment — enhanced, not replaced, by artificial intelligence.
Yahoo
What Washington can learn from a legendary London meltdown
In a city where allegiance and proximity to power is everything, the leader's closest adviser portrayed himself as an outsider. He began the year by hiring a bunch of 'weirdos and misfits' and ordering them to rip up the entire 'rotten' system of government. The adviser loved to put noses out of joint and 'own the libs,' while building up his profile in the media as the real power behind the throne. Then, having realized that his easily distracted and impulsive politician boss wasn't actually committed to building a tech-heavy, libertarian future, the disillusioned adviser quit — dedicating himself to publicly destroying his former employer.
If you're British, watching the collapse of Donald Trump and Elon Musk's uncomfortable marriage has echoes of the end of the relationship between Prime Minister Boris Johnson and his chief adviser Dominic Cummings in 2020. How that psychodrama played out in the UK could have lessons for the US — not least because Cummings eventually succeeded in undermining Johnson's political career, ultimately defenestrating the prime minister through relentless briefings and leaks. When someone who was inside the room and was perceived to be central to a political project says it's all a sham, the damage can be significant.
For those who don't know, Cummings was the chief strategist of the successful Brexit campaign in 2016 but then largely disappeared from view when it came to actually defining what Brexit should look like. Unlike Musk, Cummings was a lifelong political operative, albeit one who cultivated a reputation for actually reading books. Three years later, with his political standing inflated by a film in which he was portrayed by Benedict Cumberbatch as an insane genius, Cummings returned to maneuver Johnson into Downing Street. Once inside government, Cummings broke all the standard operating procedures of the British state to finally 'get Brexit done' and sever the UK's relationship with the EU in January 2020.
When I look back at my occasional text exchanges with Cummings from that era, usually while trying to check stories about the funding of the Brexit campaign or his desire to defund the BBC, they mirror what he said in public. He held a seemingly sincere belief that most of the British media was fake news, that the British state was not fit for purpose, and that the political party he was nominally working for, the Conservatives, was little more than a helpful vehicle for an insurrection. One ally approvingly described the chief of staff of a Conservative government to the BBC as a 'Leninist.'
Ultimately, both Musk and Cummings believed that you can run the government as a high-performance start-up and that the defining failure of past civil service reforms was that they hadn't smashed enough things quickly enough. Both also have the fatal flaws of being undisciplined, delighting in picking public fights and getting bored easily. Their independent means also meant they were not as beholden to their political masters as other advisers. Cummings might not have Musk levels of money but he was wealthy in British terms (his father-in-law Sir Humphry Tyrrell Wakefield, owner of a 13th century castle, would write letters in support of his proto-DOGE policies) and connected (his wife was deputy editor of the right-wing Spectator magazine). The overwhelming impression Cummings gave was that politicians were the useful idiots who should give him the runway to remake the state. Iconoclasm was the point.
When Cummings quit, he took to publishing lengthy Substack posts portraying Johnson as a broken supermarket 'trolley' that veered all over the place based on the last thing someone said to him. Even more effectively, Cummings helped to leak stories about Johnson's pandemic lockdown-busting in a scandal known as Partygate.
In an echo of what's happened with Musk, left-wingers who previously thought Cummings was the devil incarnate began cheering him on as he stuck the knife into Johnson. The attacks rang true among Tory MPs and Johnson's ratings never recovered, ultimately leading to his early departure from politics. Many people leaked against Johnson and his circle, but when Cummings did, the pair's previous closeness gave it the ring of truth.
Musk and Cummings got their opportunities because they went in to bat for fundamentally untrustworthy but opportunistic politicians, in the hope that they would be given the freedom to enact policies with limited scrutiny. The two men have even exchanged notes and acknowledged the similarity of their programs. Ultimately, these were political shotgun marriages — the very thing that made the attachments so powerful at a particular moment in time was ultimately their undoing: in each case, the leader learned that there was no real love there. As Cummings and Musk found, if you hitch yourself to an anti-establishment hero who eschews patronage and loyalty, then it's only a matter of time before you find yourself the target.
There is a case that a less bellicose, less in-your-face flavour of DOGE could work better — and that such changes are easier when they're not associated with a controversial figure. In the UK, Prime Minister Keir Starmer's Labour government, elected last year, is pinning its hopes on widespread use of AI technology to improve productivity. And there are even people in Downing Street who rather envy the idea of taking a Musk-style wrecking ball to parts of the state; Health Secretary Wes Streeting recently abolished one of the main administrative levels of the National Health Service in an overnight raid. Attempts by the insurgent, right-wing populist Reform party — headed by Nigel Farage, who has courted Musk's funds — to launch a 'British DOGE' and find excess spending in local government have hit the rocks.
The program was announced on a Monday; its first leader had quit by the Thursday. Cummings said in November that he was hopeful Musk could make the US government operate like Silicon Valley. But Cummings was long on diagnosis and short on prescription, the London-based Institute for Government think tank wrote in November 2021, and it sought to fill the gap with ideas of its own for civil service reform.
Yahoo
EU Seeks Deeper G-7 Defense Ties to Offset Trump's Upheaval
(Bloomberg) -- The European Union is racing to strengthen defense ties with key Group of Seven allies as President Donald Trump pares back the US commitment to the continent's security.
The EU is aiming to conclude a defense and security partnership with Canada this month that will pave the way for Canadian companies to participate in a new defense fund meant to jump-start a transformation of the bloc's industry, according to people familiar with the matter. The EU will also hold a summit with Japan in July, when the two sides will seek to beef up a security agreement, said the people, who spoke on the condition of anonymity.
The EU is diversifying its relationships as Trump has upended the European security order that's maintained peace on the continent for more than 70 years. During his first term, Trump threatened to leave NATO, and last year he said he'd let Russia do 'whatever the hell they want' to allies not spending enough on defense. The US president's abrupt pullback has led the EU to mobilize hundreds of billions of euros to ramp up its defense spending.
The defense agreement with Canada is expected to be one of the key deliverables of a summit both allies will hold in Brussels on June 23, intended to show the bloc's alignment with like-minded nations in this turbulent period, the people said. The pact will follow the blueprint set by a similar defense accord reached with the United Kingdom in May. As with the UK, such an agreement would be a stepping stone toward allowing Canadian firms to participate in the bloc's new €150 billion ($171 billion) loan-based fund to develop the EU's defense industry.
Although Canada wants a strong signal from the summit with regard to its industry's full access to the new EU instrument, a security and defense pact only grants partial access to the bloc's fund; full access would require another bilateral deal. Canada, a long-standing EU partner in security matters, is also participating in two other initiatives aimed at strengthening defense cooperation among willing member states, covering military mobility and the development of a network of logistics hubs in Europe. Similar to other defense partnerships signed by the bloc, the draft text, still under discussion, will also note shared interests, common threats, expected joint dialogues and several areas of cooperation, the people said.
The bilateral summit between the EU and Japan will aim to strengthen a defense pact signed late last year by tapping into closer naval cooperation, among other measures. A spokesperson for the European Council chief, Antonio Costa, said the twin summits aim to 'deepen even more the already very close relationship that we have with these two key partners.' The EU has previously signed security and defense partnerships covering issues like missions, training or industrial cooperation with countries such as Albania, Japan, Moldova, North Macedonia, Norway and South Korea.
In addition to defense, the EU is also looking to strengthen ties with both G-7 allies around a competitiveness alliance, although in the case of Japan the work is more advanced. Such a new framework would serve to group existing bilateral initiatives in the fields of economic security, supply chain resilience and competitiveness, although discussions continue between negotiators to determine the content and the format, the people said.
--With assistance from Andrea Palasciano.
©2025 Bloomberg L.P.