
Civil Society Calls For Overhaul Of Canada's Approach To Digital Policy
May 28, 2025
Today, OpenMedia and 13 other prominent Canadian civil society organizations and digital policy experts delivered a joint letter to key federal ministers, urging fundamental reform of Canada's strategy for digital policymaking. The letter calls for an end to the last government's practice of packing digital legislation into sprawling, multi-part omnibus bills such as Bill C-63, the Online Harms Act, and Bill C-27, which covered private sector privacy reform and AI regulation. The signatories agree the government must address critical issues such as online safety, privacy, and artificial intelligence, but believe that advancing separate pieces of legislation under a unified digital policy vision is the best approach for the new government to regulate them.
"Canadians deserve sensible, nuanced digital policy that can comfortably pass in a minority Parliament," said Matt Hatfield, Executive Director of OpenMedia. "We've seen how omnibus legislation plays out: the most controversial portions drown out the rest, and committees spend their time debating overreaching measures instead of getting effective digital regulation done. That's why we're asking our government to work with every party to pass basic rights-respecting privacy and online safety measures that are now many years past due."
The signatories observe that a fragmented approach to Canada's digital policy, split between different government agencies with competing mandates and agendas, has led to the failure of long-promised digital policy reforms to receive due study, appropriate amendments, and be adopted by Parliament. The letter's authors point to the recent appointment of Evan Solomon as Minister for AI and Digital Innovation on May 13th as a key opportunity for the government to better signal its priorities and implement a more cohesive legislative vision.
Many signatories engaged the government throughout its consideration of illegal online content that informed Bill C-63, including through a 2024 letter that recommended splitting the Bill, a 2023 expert letter outlining red lines and recommendations for potential legislation, and through individual submissions to the government's 2021 consultation. Many also participated in Parliament's INDU Committee consideration of Bill C-27, delivering recommendations on privacy amendments, artificial intelligence regulation amendments, or both. Through this experience, the signatories observed Parliament struggle to grapple effectively with either bill. Controversial proposals attached to both overwhelmed productive discussion, preventing amendment and passage of more substantive and widely supported sections.
The letter concludes with five core recommendations for future legislation, including placing overall coordination responsibility for digital policy under a single department; advancing Canada's digital policy agenda through separate legislative proposals; and prioritizing areas of broad consensus for rapid legislative improvement first.
Related Articles


Scoop
3 hours ago
Why Is The New Zealand Media Not Questioning The Implications Of The Gene Technology Bill?
As the Gene Technology Bill advances through Parliament, New Zealand faces a pivotal moment in science, agriculture, and public health. The proposed legislation would significantly relax restrictions on gene technology, enabling broader research, development, and use of genetically modified organisms (GMOs) in New Zealand for the first time in nearly 30 years. Yet, despite the profound ethical, environmental, and societal implications, there has been a noticeable lack of critical scrutiny from the mainstream media.

'It is plausible that political and economic factors are influencing the nature and depth of media coverage regarding the Gene Technology Bill,' says Lisa Er, author of a petition to 'halt the progress of the Gene Technology Bill and instead set up a Commission of Inquiry into the health and safety of people and the environment on behalf of citizens, to allow time for wider community and stakeholder consultation.'

Key Concerns:
- Environmental Risks: The Bill paves the way for the release of genetically modified organisms (GMOs) into New Zealand's unique ecosystems, risking irreversible impacts on native species, biodiversity, crops, and the country's global clean, green brand.
- Lack of Public Consultation: The Government has failed to meaningfully consult with Māori, scientists, and the wider public, ignoring Treaty of Waitangi obligations and indigenous rights, as well as community concerns about food safety, cultural values, and environmental protection.
- Threat to Export Markets: New Zealand's primary export markets, especially in Europe and Asia, have strict GM-free requirements. The Bill endangers market access and could jeopardize billions in export earnings.
- Undermining Precaution: The Bill abandons the precautionary principle that has underpinned New Zealand's cautious approach to gene technology, exposing the country to unknown long-term risks.
- Ignoring International Best Practice: Leading nations are strengthening, not weakening, their oversight of gene technologies in response to new scientific evidence and public concern.
- Insufficient Public Debate: The bill has generated over 1,500 public submissions, reflecting deep divisions and strong opinions across the country. The removal of GE labelling is of considerable public concern. Why has the minimal media coverage largely focused on official statements and the potential benefits, with little attention paid to the risks, opposition viewpoints, or the broader societal debate unfolding in submissions and community discussions?
- Risk Oversight and Regulatory Gaps: The bill will open the door to unintended consequences, including ecosystem disruption, cross-contamination of crops, and unclear long-term health effects.
- Transparency and Accountability: Some have questioned whether the bill is being rushed or whether consultation has been adequate, particularly given the timing of the public submission period over the summer holidays.

Media outlets have an essential role in holding lawmakers accountable and ensuring transparency in the legislative process, and these risks deserve deeper journalistic investigation and public explanation.

A Call to Action for the Media: We urge New Zealand's journalists and editors to fulfil their democratic duty by:
- Investigating the full range of concerns about the Gene Technology Bill, including those raised in public submissions.
- Highlighting the ethical, cultural, and environmental questions that remain unresolved.
- Providing balanced, evidence-based coverage that empowers New Zealanders to make informed decisions about the future of gene technology in their country.

'The Gene Technology Bill represents a generational shift in New Zealand's approach to biotechnology,' says Er. 'The public deserves robust, critical journalism that examines not only the promises but also the very real perils of this legislation.'
Lisa Er, founder of Lisa's Hummus

Issued in the public interest to encourage transparent, balanced, and investigative reporting on a matter of national importance.

Petition with over 4,000 signatures
Petition request: That the House of Representatives halt the progress of the Gene Technology Bill and instead set up a Commission of Inquiry into the health and safety of people and the environment on behalf of citizens, to allow time for wider community and stakeholder consultation.
Petition reason: I consider the Gene Technology Bill has failed to follow sound and fair processes by not consulting enough with the public and other stakeholders. I believe there is inadequate consideration of Te Tiriti obligations, and insufficient requirements to protect people and the environment from the risks of GE contamination. A range of gene editing techniques would be excluded from regulation. This would mean GE products would enter the environment and food supply untested, unregistered and unlabelled.


Scoop
3 hours ago
The Good, The Bad, And The Apocalypse: Tech Pioneer Geoffrey Hinton Lays Out His Stark Vision For AI
Article – RNZ

It's the question that keeps Geoffrey Hinton up at night: What happens when humans are no longer the most intelligent life on the planet?

'My greatest fear is that, in the long run, the digital beings we're creating turn out to be a better form of intelligence than people.'

Hinton's fears come from a place of knowledge. Described as the Godfather of AI, he is a pioneering British-Canadian computer scientist whose decades of work in artificial intelligence earned him global acclaim. His career at the forefront of machine learning began at its inception – before the first Pac-Man game was released. But after leading AI research at Google for a decade, Hinton left the company in 2023 to speak more freely about what he now sees as the grave dangers posed by artificial intelligence.

Talking on this week's 30 With Guyon Espiner, Hinton offers his latest assessment of our AI-dominated future: one filled with promise, peril – and a potential apocalypse.

The Good: 'It's going to do wonderful things for us'

Hinton remains positive about many of the potential benefits of AI, especially in fields like healthcare and education. 'It's going to do wonderful things for us,' he says.

According to a report from this year's World Economic Forum, the AI market in education is already worth around US$5 billion. That's expected to grow to US$112.3 billion in the next decade. Proponents like Hinton believe the benefits to education lie in targeted efficiency in student learning, similar to how AI is already assisting medical diagnoses.

'In healthcare, you're going to be able to have [an AI] family doctor who's seen millions of patients – including quite a few with the same very rare condition you have – that knows your genome, knows all your tests, and hasn't forgotten any of them.'
He describes AI systems that already outperform doctors in diagnosing complex cases. When combined with human physicians, the results are even more impressive – a human-AI synergy he believes will only improve over time.

Hinton disagrees with former colleague Demis Hassabis at Google DeepMind, who predicts AI is on track to cure all diseases in just 10 years. 'I think that's a bit optimistic. If he said 25 years I'd believe it.'

The Bad: 'Autonomous lethal weapons'

Despite these benefits, Hinton warns of pressing risks that demand urgent attention. 'Right now, we're at a special point in history,' he says. 'We need to work quite hard to figure out how to deal with all the short-term bad consequences of AI, like corrupting elections, putting people out of work, cybercrimes.'

He is particularly alarmed by military developments, including Google's removal of its long-standing pledge not to use AI to develop weapons of war. 'This shows,' says Hinton of his former employer, 'the company's principles were up for sale.'

He believes the defence departments of all major arms dealers are already busy working on 'autonomous lethal weapons. Swarms of drones that go and kill people. Maybe people of a particular kind'. He also points out the grim fact that Europe's AI regulations – some of the world's most robust – contain 'a little clause that says none of these regulations apply to military uses of AI'.

Then there is AI's capacity for deception – designed as it is to mimic the behaviours of its creator species. Hinton says current systems can already engage in deliberate manipulation, noting cybercrime has surged by 1,200 percent in just one year.

The Apocalyptic: 'We'd no longer be needed'

At the heart of Hinton's warning lies that deeper, existential question: what happens when we are no longer the most intelligent beings on the planet? 'I think it would be a bad thing for people – because we'd no longer be needed.'
Despite the current surge in AI's military applications, Hinton doesn't envisage an AI takeover being like The Terminator franchise. 'If [AI] was going to take over… there's so many ways they could do it. I don't even want to speculate about what way [it] would choose.'

'Ask a chicken'

For those who believe a rogue AI can simply be shut down by 'pulling the plug', Hinton believes it's not far-fetched for the next generation of superintelligent AI to manipulate people into keeping it alive. This month, Palisade Research reported that OpenAI's o3 model altered shutdown code to prevent itself from being switched off – despite clear instructions from the research team to allow the shutdown.

Perhaps most unsettling of all is Hinton's lack of faith in our ability to respond. 'There are so many bad uses as well as good,' he says. 'And our political systems are just not in a good state to deal with this coming along now.'

It's a sobering reflection from one of the brightest minds in AI – whose work helped build the systems now raising alarms. He closes on a metaphor that sounds as absurd as it does chilling: 'If you want to know what it's like not to be the apex intelligence, ask a chicken.'

Watch the full conversation with Geoffrey Hinton and Guyon Espiner on 30 With Guyon Espiner.
