AI regulation does not stifle innovation
By Tim Clement-Jones, Liberal Democrat peer and spokesperson for the digital economy
Ever since co-founding the All-Party Parliamentary Group on AI nine years ago, still ably administered by the Big Innovation Centre, I've been deeply involved in debating and advising on the implications of artificial intelligence. My optimism about AI's potential remains strong – from helping identify new Parkinson's treatments to DeepMind's protein structure predictions that could transform drug discovery and personalised medicine.
Yet this technology is unlike anything we've seen before. It's potentially more autonomous, with greater impact on human creativity and employment, and more opaque in its decision-making processes.
The conventional wisdom that regulation stifles innovation needs turning on its head. As AI becomes more powerful and pervasive, appropriate regulation isn't just about restricting harmful practices – it's key to driving widespread adoption and sustainable growth. Many potential AI adopters are hesitating not because of technological limitations but because of uncertainties about liability, ethical boundaries and public acceptance. Clear regulatory frameworks addressing algorithmic bias, data privacy and decision transparency can actually accelerate adoption by providing clarity and confidence.
Different jurisdictions are adopting varied approaches. The European Union's AI Act, with its risk-based framework, started coming into effect this year. Singapore has established comprehensive AI governance through its model AI governance framework. Even China regulates public-facing generative AI models with fairly heavy inspection regimes.
The UK's approach has been more cautious. The previous government held the AI Safety Summit at Bletchley Park and established the AI Safety Institute (now inexplicably renamed the AI Security Institute), but with no regulatory teeth. The current government has committed to binding regulation for companies developing the most powerful AI models, though progress remains slower than hoped. Notably, 60 countries – including Saudi Arabia and the UAE, but not Britain or the US – signed the Paris AI Action Summit declaration in February this year, committing to ensuring AI is 'open, inclusive, transparent, ethical, safe, secure and trustworthy'.
Several critical issues demand urgent attention. Intellectual property: the use of copyrighted material for training large language models without licensing has sparked substantial litigation and, in the UK, unprecedented parliamentary debate. Governments need to act decisively to ensure creative works aren't ingested into generative AI models without a return to rights-holders, with transparency duties placed on developers.
Digital citizenship: we must equip citizens for the AI age, ensuring they understand how their data is used and AI's ethical implications. Beyond the UAE, Finland and Estonia, few governments are taking this seriously enough.
International convergence: despite differing regulatory regimes, we need developers to collaborate and commercialise innovations globally while ensuring consumer trust in common international ethical and safety standards.
Well-designed regulation can be a catalyst for AI adoption and innovation. Just as environmental regulations spurred cleaner technologies, AI regulations focusing on explainability and fairness could push developers toward more sophisticated, responsible systems.
The question isn't whether to regulate AI, but how to regulate it in a way that promotes both innovation and responsibility. We need principles-based rather than overly prescriptive regulation, assessing risk and emphasising transparency and accountability without stifling creativity.
Achieving the balance between human potential and machine innovation isn't just possible – it's necessary as we step into an increasingly AI-driven world. That's what we must make a reality.
This article first appeared in our Spotlight on Technology supplement of 13 June 2025.