Latest news with #BillC-27


Scoop
- Business
Civil Society Calls For Overhaul Of Canada's Approach To Digital Policy
May 28, 2025

Today, OpenMedia and 13 other prominent Canadian civil society organizations and digital policy experts delivered a joint letter to key federal ministers, urging fundamental reform of Canada's strategy for digital policymaking. The letter calls for an end to the last government's practice of packing digital legislation into sprawling, multi-part omnibus bills such as Bill C-63, the Online Harms Act, and Bill C-27, which covered both private sector privacy reform and AI regulation. The signatories agree the government must address critical issues such as online safety, privacy, and artificial intelligence, but believe that separate pieces of legislation, advanced under a unified digital policy vision, are the best way for the new government to regulate them.

"Canadians deserve sensible, nuanced digital policy that can comfortably pass in a minority Parliament," said Matt Hatfield, Executive Director of OpenMedia. "We've seen how omnibus legislation plays out: the most controversial portions drown out the rest, and committees spend their time debating overreaching measures instead of getting effective digital regulation done. That's why we're asking our government to work with every party to pass basic rights-respecting privacy and online safety measures that are now many years past due."

The signatories observe that a fragmented approach to Canada's digital policy, split between government agencies with competing mandates and agendas, has prevented long-promised digital policy reforms from receiving due study and appropriate amendments and being adopted by Parliament. The letter's authors point to the May 13th appointment of Evan Solomon as Minister for AI and Digital Innovation as a key opportunity for the government to better signal its priorities and implement a more cohesive legislative vision.
Many signatories engaged the government throughout its consideration of illegal online content that informed Bill C-63, including through a 2024 letter that recommended splitting the Bill, a 2023 expert letter outlining red lines and recommendations for potential legislation, and individual submissions to the government's 2021 consultation. Many also participated in Parliament's INDU Committee consideration of Bill C-27, delivering recommendations on privacy amendments, artificial intelligence regulation amendments, or both. Through this experience, the signatories observed Parliament struggle to grapple effectively with either bill. Controversial proposals attached to both overwhelmed productive discussion, preventing amendment and passage of more substantive and widely supported sections.

The letter concludes with five core recommendations for future legislation, including placing overall coordination responsibility for digital policy under a single department; advancing Canada's digital policy agenda through separate legislative proposals; and prioritizing areas of broad consensus for rapid legislative improvement first.


Asia Times
01-04-2025
- Business
Rash AI deregulation puts financial markets at high risk
As Canada moves toward stronger AI regulation with the proposed Artificial Intelligence and Data Act (AIDA), its southern neighbor appears to be taking the opposite approach. AIDA, part of Bill C-27, aims to establish a regulatory framework to improve AI transparency, accountability and oversight in Canada, although some experts have argued it doesn't go far enough. Meanwhile, United States President Donald Trump is pushing for AI deregulation. In January, Trump signed an executive order aimed at eliminating any perceived regulatory barriers to 'American AI innovation.' The executive order replaced former president Joe Biden's prior executive order on AI. Notably, the US was also one of two countries — along with the UK — that didn't sign a global declaration in February to ensure AI is 'open, inclusive, transparent, ethical, safe, secure and trustworthy.'

Eliminating AI safeguards leaves financial institutions vulnerable. This vulnerability can increase uncertainty and, in a worst-case scenario, raise the risk of systemic collapse.

AI's potential in financial markets is undeniable. It can improve operational efficiency, perform real-time risk assessments, generate higher income and forecast economic change. My research has found that AI-driven machine learning models not only outperform conventional approaches in identifying financial statement fraud, but also detect abnormalities quickly and effectively. In other words, AI can catch signs of financial mismanagement before they spiral into a disaster. In another study, my co-researcher and I found that AI models like artificial neural networks and classification and regression trees can predict financial distress with remarkable accuracy. Artificial neural networks are brain-inspired algorithms.
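The fraud-detection claim above rests on spotting abnormal patterns in financial data. As a minimal, hypothetical sketch (not the models from the research described), the kind of statistical baseline that machine learning detectors improve upon can be as simple as a z-score rule:

```python
# Illustrative sketch only: flag abnormal figures that sit more than
# `threshold` standard deviations from the mean. The data are invented.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return the indices of values whose distance from the mean
    exceeds `threshold` sample standard deviations."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [i for i, x in enumerate(amounts)
            if abs(x - mu) > threshold * sigma]

# Routine monthly figures with one implausible spike at index 6.
figures = [102.0, 98.5, 101.2, 99.8, 100.4, 97.9, 500.0, 101.1]
print(flag_anomalies(figures, threshold=2.0))  # [6]
```

A real fraud detector learns far subtler patterns than a single spike, but the goal is the same: surface abnormalities early enough for a human to investigate.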
Similar to how our brain sends messages through neurons to perform actions, these neural networks process information through layers of interconnected 'artificial neurons,' learning patterns from data to make predictions. Similarly, classification and regression trees are decision-making models that divide data into branches based on important features to identify outcomes.

Our artificial neural network models predicted financial distress among Toronto Stock Exchange-listed companies with a staggering 98% accuracy. This suggests AI has immense potential to provide early warning signals that could help avert financial downturns before they start.

However, while AI can simplify manual processes and lower financial risks, it can also introduce vulnerabilities that, if left unchecked, could pose significant threats to economic stability. Trump's push for deregulation could result in Wall Street and other major financial institutions gaining significant power over AI-driven decision-making tools with little to no oversight. When profit-driven AI models operate without appropriate ethical boundaries, the consequences could be severe. Unchecked algorithms, especially in credit evaluation and trading, could worsen economic inequality and generate systemic financial risks that traditional regulatory frameworks cannot detect.

Algorithms trained on biased or incomplete data may reinforce discriminatory lending practices. In lending, for instance, biased AI algorithms can deny loans to marginalized groups, widening wealth and inequality gaps. In addition, AI-powered trading bots, which are capable of executing rapid transactions, could trigger flash crashes in seconds, disrupting financial markets before regulators have time to respond. The flash crash of 2010 is a prime example: high-frequency trading algorithms aggressively reacted to market signals, causing the Dow Jones Industrial Average to drop by 998.5 points in a matter of minutes.
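The classification-and-regression-tree idea mentioned above can be sketched minimally. This toy example (the feature, data and labels are invented for illustration, not taken from the study) shows the core step of such a tree: finding the single threshold that best separates distressed firms from healthy ones.

```python
# Toy sketch of the core step in a classification tree: pick the
# feature threshold that best separates distressed (1) from healthy (0)
# firms. The debt-to-equity ratios below are hypothetical.

def best_split(values, labels):
    """Try a threshold midway between each pair of sorted values and
    return the (threshold, accuracy) pair that best separates the two
    classes, predicting 1 (distressed) when value > threshold."""
    pairs = sorted(zip(values, labels))
    best = (None, 0.0)
    for i in range(len(pairs) - 1):
        thr = (pairs[i][0] + pairs[i + 1][0]) / 2
        correct = sum((v > thr) == bool(y) for v, y in pairs)
        acc = correct / len(pairs)
        if acc > best[1]:
            best = (thr, acc)
    return best

# Hypothetical debt-to-equity ratios; 1 = firm later entered distress.
ratios = [0.5, 0.75, 1.0, 1.25, 2.75, 3.0, 3.5, 4.0]
distress = [0, 0, 0, 0, 1, 1, 1, 1]
thr, acc = best_split(ratios, distress)
print(thr, acc)  # 2.0 1.0 — a clean split between the two groups
```

A full CART model repeats this split-finding step recursively on each branch, and across many features rather than one; the study's actual models are, of course, far richer than this stump.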
Furthermore, unregulated AI-driven risk models might overlook economic warning signals, resulting in substantial errors in monetary control and fiscal policy. Striking a balance between innovation and safety depends on the ability of regulators and policymakers to reduce AI hazards. During the financial crisis of 2008, many risk models — earlier forms of AI — failed to anticipate a national housing market crash, which led regulators and financial institutions astray and exacerbated the crisis.

My research underscores the importance of integrating machine learning methods within strong regulatory systems to improve financial oversight, fraud detection and prevention. Durable and reasonable regulatory frameworks are required to turn AI from a potential disruptor into a stabilizing force. By implementing policies that prioritize transparency and accountability, policymakers can maximize the advantages of AI while lowering its risks.

A federally regulated AI oversight body in the US could serve as an arbitrator, much as Canada's Digital Charter Implementation Act, 2022 proposes the establishment of an AI and Data Commissioner. Operating with the checks and balances inherent to democratic structures would ensure fairness in financial algorithms and stop biased lending policies and concealed market manipulation. Financial institutions would be required to open the 'black box' of AI-driven decisions by mandating transparency through explainable AI standards — guidelines aimed at making AI systems' outputs more understandable and transparent to humans. Machine learning's predictive capabilities could also help regulators identify financial crises in real time using early warning signs, similar to the model developed by my co-researcher and me in our study. However, this vision doesn't end at national borders.
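The explainability requirement described above can be illustrated with a toy example. Assuming a hypothetical linear credit-risk score (the features and weights are invented for the demo, not any regulator's standard), an explanation amounts to reporting how much each input contributed to the output:

```python
# Minimal illustration of the explainable-AI idea: alongside a score,
# report each feature's contribution. The linear "risk model" and its
# weights are hypothetical, chosen only for this demo.

WEIGHTS = {"leverage": 0.5, "late_payments": 0.25, "income": -0.25}

def risk_score(applicant):
    """Weighted sum of the applicant's features (higher = riskier)."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Attribute the score to each feature: its weighted contribution,
    sorted from most to least influential."""
    contrib = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contrib.items(), key=lambda kv: -abs(kv[1]))

applicant = {"leverage": 4.0, "late_payments": 2.0, "income": 3.0}
print(risk_score(applicant))  # 0.5*4 + 0.25*2 - 0.25*3 = 1.75
print(explain(applicant))     # leverage dominates the score
```

For a linear model the contributions are trivial to read off; the point of explainable AI standards is to demand comparably legible attributions from genuinely opaque models, where techniques such as feature-attribution methods stand in for this direct decomposition.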
Globally, the International Monetary Fund and the Financial Stability Board could establish AI ethical standards to curb cross-border financial misconduct.

Will AI be the key to foreseeing and stopping the next economic crisis, or will the lack of regulatory oversight cause a financial disaster? As financial institutions continue to adopt AI-driven models, the absence of strong regulatory guardrails raises pressing concerns. Without proper safeguards in place, AI is not just a tool for economic prediction — it could become an unpredictable force capable of accelerating the next financial crisis.

The stakes are high. Policymakers must act swiftly to regulate the increasing impact of AI before deregulation opens the path to an economic disaster. Without decisive action, the rapid adoption of AI in finance could outpace regulatory efforts, leaving economies vulnerable to unforeseen risks and potentially setting the stage for another global financial crisis.

Sana Ramzan is an assistant professor of Business at University Canada West. This article is republished from The Conversation under a Creative Commons license. Read the original article.