
Governing AI for the public interest
UK Prime Minister Keir Starmer last month published an 'AI Opportunities Action Plan' that includes a multibillion-pound government investment in the UK's artificial intelligence capacity and £14 billion ($17.4 billion) in commitments from tech firms. The stated goal is to boost the AI computing power under public control twentyfold by 2030 and to embed AI in the public sector to improve services and reduce costs by automating tasks.
But governing AI in the public interest will require the government to move beyond unbalanced relationships with digital monopolies. As matters stand, public authorities usually offer technology companies lucrative, loosely structured deals with no conditions attached, and are then left scrambling to address the market failures that inevitably ensue. While AI has plenty of potential to improve lives, the current approach does not set governments up for success.
To be sure, economists disagree on what AI will mean for economic growth. In addition to warning about the harms that AI could do if it is not directed well, the Nobel laureate economist Daron Acemoglu estimates that the technology will boost productivity by only 0.07 percent per year, at most, over the next decade. By contrast, AI enthusiasts like Philippe Aghion and Erik Brynjolfsson believe that productivity growth could be up to 20 times higher (Aghion estimates 1.3 percent per year, while Brynjolfsson and his colleagues point to the potential for a one-off increase as high as 14 percent in just a few months).
Meanwhile, bullish forecasts are being pushed by groups with vested interests, raising concerns over inflated figures, a lack of transparency and a 'revolving door' effect. Many of those promising the greatest benefits also stand to gain from public investments in the sector. What are we to make of the CEO of Microsoft UK being appointed as chair of the UK Department for Business and Trade's Industrial Strategy Advisory Council?
The key to governing AI is to treat it not as a sector deserving of more or less support, but rather as a general purpose technology that can transform all sectors. Such transformations will not be value-neutral. While they could be realized in the public interest, they could also further consolidate the power of existing monopolies. Steering the technology's development and deployment in a positive direction will require governments to foster a decentralized innovation ecosystem that serves the public good.
Policymakers must also wake up to all the ways that things can go wrong. One major risk is the further entrenchment of dominant platforms such as Amazon and Google, which have leveraged their position as gatekeepers to extract 'algorithmic attention rents' from users. Unless governed properly, today's AI systems could follow the same path, leading to unproductive value extraction, insidious monetization and deteriorating information quality. For too long, policymakers have ignored these externalities.
Yet governments may now be tempted to choose the cheapest short-term option: allowing tech giants to own the data. This may help established firms drive innovation, but it will also ensure that they can leverage their monopoly power in the future. The risk is particularly relevant today, given that the primary bottleneck in AI development is cloud computing power, a market in which Amazon, Google and Microsoft control a combined 67 percent share.
While AI can do much good, it is no magic wand. It must be developed and deployed in the context of a well-considered public strategy. Economic freedom and political freedom are deeply intertwined and neither is compatible with highly concentrated power. To avoid this dangerous path, the Starmer government should rethink its approach. Rather than acting primarily as a 'market fixer' that will intervene later to address AI companies' worst excesses (from deepfakes to disinformation), the state should step in early to shape the AI market.
That means not allocating billions of pounds to vaguely defined AI initiatives with no clear objectives, as Starmer's plan appears to do. Public funds should not be funneled into the hands of foreign hyper-scalers, as this risks diverting taxpayer money into the pockets of the world's wealthiest corporations and ceding control over public sector data. The UK National Health Service's deal with Palantir is a perfect example of what to avoid.
There is also a danger that if AI-led growth does not materialize as promised, the UK could be left with a larger public deficit and crucial infrastructure in foreign hands. Moreover, relying solely on AI investment to improve public services could lead to their deterioration. AI must complement, not replace, real investments in public sector capabilities.
The government should take concrete steps to ensure that AI serves the public good. For example, mandatory algorithmic audits can shed light on how AI systems are monetizing user attention. The government should also heed the lessons of past missteps, such as Google's acquisition of the London-based startup DeepMind. As the British investor Ian Hogarth has noted, the UK government might have been better off blocking this deal to maintain an independent AI enterprise. Even now, proposals to reverse the takeover warrant consideration.
Government policy must also recognize that Big Tech already has both scale and resources, whereas small and medium-size enterprises require support to grow. Public funding should act as an 'investor of first resort' to help these businesses overcome the first-mover bias and expand. Prioritizing support for homegrown entrepreneurs and initiatives over dominant foreign companies is crucial.
Finally, since AI platforms extract data from the digital commons (the internet), they are beneficiaries of a major economic windfall. It follows that a digital windfall tax should be applied to help fund open-source AI and public innovation. The UK needs to develop its own public AI infrastructure guided by a public-value framework, following the model of the EuroStack initiative in the EU.
AI should be a public good, not a corporate tollbooth. The Starmer government's guiding objective should be to serve the public interest. That means addressing the entire supply chain — from software and computing power to chips and connectivity. The UK needs more investment in creating, organizing and federating existing assets (not necessarily replacing Big Tech's assets entirely). Such efforts should be guided and co-financed under a consistent policy framework that aims to build a viable, competitive AI ecosystem. Only then can the government ensure that the technology creates value for society and genuinely serves the public interest.
- Mariana Mazzucato is Professor in the Economics of Innovation and Public Value at University College London and the author, most recently, of 'Mission Economy: A Moonshot Guide to Changing Capitalism' (Penguin Books, 2022).
- Tommaso Valletti, Professor of Economics at Imperial College London, is Director of the Centre for Economic Policy Research and a former chief competition economist at the European Commission.
- Copyright: Project Syndicate
Related Articles


- Arab News
Protest-hit UK town wins bid to empty asylum-seeker hotel
LONDON: A UK judge on Tuesday blocked asylum seekers from being housed at a hotel in a town which has witnessed violent protests, dealing a blow to the government. The high court judge approved a request by the local authority in Epping, northeast of London, for a temporary injunction to stop migrants from being housed at the Bell Hotel.

The ruling, which came after the interior ministry was unsuccessful in trying to dismiss the case, raises questions about the government's ability to provide accommodation for asylum seekers and refugees. It also comes as Labour Prime Minister Keir Starmer faces serious political heat from the hard-right Reform UK party for failing to stop irregular migrants crossing the Channel to England on small boats.

Protests broke out in Epping in July after an asylum seeker was charged with sexually assaulting a 14-year-old girl, which he denies. Since then hundreds of people have taken part in protests and counter-protests outside the Bell Hotel. Further anti-immigration demonstrations also spread to London and around England.

The council argued that putting the migrants in the Bell Hotel presented a 'clear risk of further escalating community tensions.' It sought an injunction that would mean the hotel's owners, Somani Hotels Limited, must remove asylum seekers from the property within 14 days. Judge Stephen Eyre granted the interim order, but gave the owners until September 12 to stop housing the migrants.

He issued his judgment after lawyers for the Home Office claimed that approving the request would 'substantially impact' its ability to provide accommodation for asylum seekers across the UK. Police say there have been at least six protests in Epping since July 17, with officers and vehicles attacked during some of the demonstrations. Several men appeared in court on Monday charged with violent disorder over the protests.
Starmer has vowed to slash the number of migrants and asylum seekers in Britain, as well as reduce legal migration, to stave off pressure from the far-right Reform party, led by Brexit leader Nigel Farage and riding high in the polls. More than 50,000 people have made the dangerous crossing from northern France in rudimentary vessels since Starmer became UK leader last July. Labour has pledged to end the use of hotels for asylum seekers before the next election, likely in 2029, in a bid to save billions of pounds.

- Al Arabiya
European leaders weigh new sanctions on Putin, UK government says
The British government said on Tuesday European leaders were weighing additional sanctions to ramp up pressure on Russian President Vladimir Putin as part of a broader push to put an end to the war in Ukraine. The government said the so-called Coalition of the Willing, which met virtually on Tuesday, had agreed that their planning teams would meet with US counterparts in the coming days to advance plans for security guarantees for Ukraine. They would also discuss plans to 'prepare for the deployment of a reassurance force if the hostilities ended', a spokesperson for British Prime Minister Keir Starmer's office said. They added: 'The leaders also discussed how further pressure – including through sanctions – could be placed on Putin until he showed he was ready to take serious action to end his illegal invasion.' Ukraine and its European allies have been buoyed after US President Donald Trump told President Volodymyr Zelenskyy on Monday that the United States would help guarantee Ukraine's security in any deal to end Russia's war, though the extent of any assistance was not immediately clear.


- Asharq Al-Awsat
Americans Fear AI Permanently Displacing Workers, Poll Finds
Americans are deeply concerned over the prospect that advances in artificial intelligence could put swaths of the country out of work permanently, according to a new Reuters/Ipsos poll. The six-day poll, which concluded on Monday, showed 71% of respondents said they were concerned that AI will be "putting too many people out of work permanently."

The new technology burst into the national conversation in late 2022 when OpenAI's ChatGPT chatbot launched and became the fastest-growing application of all time, with tech heavyweights like Facebook owner Meta Platforms, Google owner Alphabet and Microsoft offering their own AI products. While at present there are few signs of mass unemployment - the US jobless rate was just 4.2% in July - artificial intelligence is stirring concerns as it reshapes jobs, industries and day-to-day life.

Some 77% of respondents to the Reuters/Ipsos poll said they worried the technology could be used to stir up political chaos, a sign of unease over the now-common use of AI technology to create realistic videos of imaginary events. President Donald Trump last month posted on social media an AI-generated video of former Democratic president Barack Obama being arrested, an event that never happened.

Americans are also leery about military applications for AI, the Reuters/Ipsos poll showed. Some 48% of respondents said the government should never use AI to determine the target of a military strike, compared with 24% who said the government should allow that sort of use of the technology. Another 28% said they were not sure.

The general enthusiasm for AI shown by many people and companies has fueled further investments, such as Foxconn and SoftBank's planned data center equipment factory in Ohio. It has also upended national security policies as the United States and China vie for AI dominance. More than half of Americans - some 61% - said they were concerned about the amount of electricity needed to power the fast-growing technology.
Google said earlier this month it had signed agreements with two US electric utilities to reduce its AI data center power consumption during times of surging demand on the grid, as energy-intensive AI use outpaces power supplies. The new technology has also come under criticism for applications that have let AI bots hold romantic conversations with children, generate false medical information and help people make racist arguments. Two-thirds of respondents in the Reuters/Ipsos poll said they worried that people would ditch relationships with other people in favor of AI companions. People were split on whether AI technology will improve education. Some 36% of respondents thought it would help, while 40% disagreed and the rest were not sure. The Reuters/Ipsos survey gathered responses online from 4,446 US adults nationwide and had a margin of error of about 2 percentage points.