
Rash AI deregulation puts financial markets at high risk
As Canada moves toward stronger AI regulation with the proposed Artificial Intelligence and Data Act (AIDA), its southern neighbor appears to be taking the opposite approach.
AIDA, part of Bill C-27, aims to establish a regulatory framework to improve AI transparency, accountability and oversight in Canada, although some experts have argued it doesn't go far enough.
Meanwhile, United States President Donald Trump is pushing for AI deregulation. In January, Trump signed an executive order aimed at eliminating any perceived regulatory barriers to 'American AI innovation.' It revoked former president Joe Biden's executive order on AI.
Notably, the US was also one of two countries — along with the UK — that didn't sign a global declaration in February to ensure AI is 'open, inclusive, transparent, ethical, safe, secure and trustworthy.'
Eliminating AI safeguards leaves financial institutions vulnerable. That vulnerability breeds uncertainty and, in a worst-case scenario, raises the risk of systemic collapse.
AI's potential in financial markets is undeniable. It can improve operational efficiency, perform real-time risk assessments, generate higher returns and forecast economic shifts.
My research has found that AI-driven machine learning models not only outperform conventional approaches in identifying financial statement fraud, but also in detecting abnormalities quickly and effectively. In other words, AI can catch signs of financial mismanagement before they spiral into disaster.
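To make the idea concrete, here is a minimal, hypothetical sketch of how an unsupervised anomaly detector might flag unusual financial-statement ratios using the open-source scikit-learn library. It is not the model from my research; the features, simulated data and thresholds are illustrative assumptions only.

```python
# Hypothetical sketch: flagging anomalous financial statements with an
# unsupervised model. Feature names and data are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated financial-statement ratios for 500 firms:
# [accruals/assets, receivables growth, gross margin change]
normal_firms = rng.normal(loc=0.0, scale=0.05, size=(495, 3))
suspect_firms = rng.normal(loc=0.4, scale=0.05, size=(5, 3))  # unusually large swings
X = np.vstack([normal_firms, suspect_firms])

# Train an isolation forest to isolate statistically unusual statements.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(X)  # -1 = flagged as anomalous, 1 = typical

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(X)} statements for human review: {flagged}")
```

In practice, flagged statements would go to human auditors for review rather than triggering automatic action.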
In another study, my co-researcher and I found that AI models like artificial neural networks and classification and regression trees can predict financial distress with remarkable accuracy.
Artificial neural networks are brain-inspired algorithms. Similar to how our brain sends messages through neurons to perform actions, these neural networks process information through layers of interconnected 'artificial neurons,' learning patterns from data to make predictions.
Similarly, classification and regression trees are decision-making models that divide data into branches based on important features to identify outcomes.
Our artificial neural network models predicted financial distress among Toronto Stock Exchange-listed companies with a staggering 98% accuracy. This points to AI's immense potential for providing early warning signals that could help avert financial downturns before they start.
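For readers who want to see what such models look like in code, below is a minimal sketch, using the open-source scikit-learn library rather than our actual research pipeline, of training a small neural network and a classification tree on simulated financial ratios. The features, data and resulting accuracy are illustrative assumptions, not the result reported above.

```python
# Hypothetical sketch: predicting financial distress with a small neural
# network and a classification tree. Data and features are simulated.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Simulated ratios for 1,000 firms: [debt/equity, current ratio, return on assets]
X = rng.normal(size=(1000, 3))
# Toy rule: high leverage plus weak profitability -> distress (label 1)
y = ((X[:, 0] > 0.5) & (X[:, 2] < 0.0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Artificial neural network: layers of interconnected "neurons"
ann = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)

# Classification tree: splits the data into branches on important features
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, y_train)

print("ANN accuracy: ", accuracy_score(y_test, ann.predict(X_test)))
print("Tree accuracy:", accuracy_score(y_test, tree.predict(X_test)))
```

Real applications would use audited financial ratios and careful out-of-sample validation; the point of the sketch is simply that both model families learn distress patterns from historical data.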
However, while AI can simplify manual processes and lower financial risks, it can also introduce vulnerabilities that, if left unchecked, could pose significant threats to economic stability.
Trump's push for deregulation could result in Wall Street and other major financial institutions gaining significant power over AI-driven decision-making tools with little to no oversight.
When profit-driven AI models operate without appropriate ethical boundaries, the consequences could be severe. Unchecked algorithms, especially in credit evaluation and trading, could worsen economic inequality and generate systemic financial risks that traditional regulatory frameworks cannot detect.
Algorithms trained on biased or incomplete data may reinforce discriminatory lending practices, denying loans to marginalized groups and widening wealth and inequality gaps.
In addition, AI-powered trading bots, which are capable of executing rapid transactions, could trigger flash crashes in seconds, disrupting financial markets before regulators have time to respond.
The flash crash of 2010 is a prime example: high-frequency trading algorithms reacted aggressively to market signals, causing the Dow Jones Industrial Average to drop by 998.5 points in a matter of minutes.
Furthermore, unregulated AI-driven risk models might overlook economic warning signals, resulting in substantial errors in monetary control and fiscal policy.
Striking a balance between innovation and safety depends on the ability of regulators and policymakers to reduce AI hazards. Consider the 2008 financial crisis: many risk models — earlier forms of AI — failed to anticipate a nationwide housing market crash, leading regulators and financial institutions astray and exacerbating the crisis.
My research underscores the importance of integrating machine learning methods within strong regulatory systems to improve financial oversight, fraud detection and prevention.
Robust and sensible regulatory frameworks are required to turn AI from a potential disruptor into a stabilizing force. By implementing policies that prioritize transparency and accountability, policymakers can maximize the advantages of AI while lowering the risks associated with it.
A federally regulated AI oversight body in the US could serve as an arbiter, much as Canada's Digital Charter Implementation Act of 2022 proposes establishing an AI and Data Commissioner.
Operating with the checks and balances inherent to democratic structures, such a body could help ensure fairness in financial algorithms and curb biased lending practices and concealed market manipulation.
Mandating transparency through explainable AI standards — guidelines aimed at making AI systems' outputs more understandable to humans — would require financial institutions to open the 'black box' of AI-driven decisions.
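As a hedged illustration of what such a standard might require in practice, the sketch below uses scikit-learn's permutation importance to report which inputs drive a credit model's decisions. The model, feature names and data are hypothetical, not a prescribed regulatory method.

```python
# Hypothetical sketch: a basic "explainability" report for a credit model,
# showing how much each input feature drives its predictions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)

feature_names = ["income", "debt_ratio", "credit_history_years"]  # illustrative
X = rng.normal(size=(800, 3))
# Toy approval rule dominated by the debt ratio
y = (X[:, 1] < 0.2).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's accuracy drops -- a simple, model-agnostic explanation.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

A report like this would show, for instance, that the toy model leans almost entirely on the debt ratio, which is exactly the kind of disclosure regulators could demand before a model is deployed.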
Machine learning's predictive capabilities could help regulators identify financial crises in real time using early warning signs — similar to the model developed by my co-researcher and me in our study.
However, this vision doesn't end at national borders. Globally, the International Monetary Fund and the Financial Stability Board could establish AI ethical standards to curb cross-border financial misconduct.
Will AI be the key to foreseeing and stopping the next economic crisis, or will the lack of regulatory oversight cause a financial disaster? As financial institutions continue to adopt AI-driven models, the absence of strong regulatory guardrails raises pressing concerns.
Without proper safeguards in place, AI is not just a tool for economic prediction — it could become an unpredictable force capable of accelerating the next financial crisis.
The stakes are high. Policymakers must act swiftly to regulate the growing impact of AI before deregulation paves the way for an economic disaster.
Without decisive action, the rapid adoption of AI in finance could outpace regulatory efforts, leaving economies vulnerable to unforeseen risks and potentially setting the stage for another global financial crisis.
Sana Ramzan is an assistant professor in Business at University Canada West.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
A university in Hong Kong that 'opened its doors to Harvard students' has made an offer of admission to one and is handling several transfer applications after the Trump administration last month barred the US Ivy League school from enrolling international candidates, many of whom are from mainland China. Advertisement At least two other local universities have also received inquiries from affected students. On Wednesday night, US President Donald Trump signed a proclamation 'suspending the entry of foreign nationals' seeking to study at Harvard, citing its failure to address national security risks on campus. HKUST said on Thursday that since its announcement of support two weeks ago, it had received dozens of inquiries from students who had planned to study at Harvard or were already enrolled there. Advertisement 'HKUST is currently processing several transfer applications. An admission offer has been extended to one of the applicants,' a spokesman said.