Governing AI for the public interest
Governing AI in the public interest will require the government to move beyond unbalanced relationships with digital monopolies. As matters stand, public authorities usually offer technology companies lucrative, unstructured deals with no conditionalities attached. They are then left scrambling to address the market failures that inevitably ensue. While AI has plenty of potential to improve lives, the current approach does not set governments up for success.
To be sure, economists disagree on what AI will mean for economic growth. In addition to warning about the harms that AI could do if it is not directed well, the Nobel laureate economist Daron Acemoglu estimates that the technology will boost productivity by only 0.07 per cent per year, at most, over the next decade. By contrast, AI enthusiasts like Philippe Aghion and Erik Brynjolfsson believe that productivity growth could be up to 20 times higher (Aghion estimates 1.3 per cent per year, while Brynjolfsson and his colleagues point to the potential for a one-off increase as high as 14 per cent in just a few months).
Meanwhile, bullish forecasts are being pushed by groups with vested interests, raising concerns over inflated figures, a lack of transparency, and a 'revolving door' effect. Many of those promising the greatest benefits also stand to gain from public investments in the sector. What are we to make of the CEO of Microsoft UK being appointed as chair of the UK Department for Business and Trade's Industrial Strategy Advisory Council?
The key to governing AI is to treat it not as a sector deserving of more or less support, but rather as a general-purpose technology that can transform all sectors. Such transformations will not be value-neutral. While they could be realised in the public interest, they also could further consolidate the power of existing monopolies. Steering the technology's development and deployment in a positive direction will require governments to foster a decentralised innovation ecosystem that serves the public good.
Policymakers also must wake up to all the ways that things can go wrong. One major risk is the further entrenchment of dominant platforms such as Amazon and Google, which have leveraged their position as gatekeepers to extract 'algorithmic attention rents' from users. Unless governed properly, today's AI systems could follow the same path, leading to unproductive value extraction, insidious monetisation, and deteriorating information quality. For too long, policymakers have ignored these externalities.
Yet governments may now be tempted to opt for the cheapest short-term option: allowing tech giants to own the data. This may help established firms drive innovation, but it will also ensure that they can leverage their monopoly power in the future. The risk is particularly relevant today, given that the primary bottleneck in AI development is cloud computing power, a market in which Amazon, Google, and Microsoft control a combined 67 per cent share.
While AI can do much good, it is no magic wand. It must be developed and deployed in the context of a well-considered public strategy. Economic freedom and political freedom are deeply intertwined, and neither is compatible with highly concentrated power. To avoid this dangerous path, the Starmer government should rethink its approach. Rather than acting primarily as a 'market fixer' that will intervene later to address AI companies' worst excesses (from deepfakes to disinformation), the state should step in early to shape the AI market.
That means not allocating billions of pounds to vaguely defined AI-related initiatives that lack clear objectives, as Starmer's AI plan seems to do. Nor should public funds be funnelled into the hands of foreign hyperscalers, which risks diverting taxpayer money into the pockets of the world's wealthiest corporations and ceding control over public-sector data. The UK National Health Service's deal with Palantir is a perfect example of what to avoid.
There is also a danger that if AI-led growth does not materialise as promised, the UK could be left with a larger public deficit and crucial infrastructure in foreign hands. Moreover, relying solely on AI investment to improve public services could lead to their deterioration. AI must complement, not replace, real investments in public-sector capabilities.
The government should take concrete steps to ensure that AI serves the public good. For example, mandatory algorithmic audits can shed light on how AI systems are monetising user attention. The government should also heed the lessons of past missteps, such as Google's acquisition of the London-based startup DeepMind. As the British investor Ian Hogarth has noted, the UK government might have been better off blocking this deal to maintain an independent AI enterprise. Even now, proposals to reverse the takeover warrant consideration.
Government policy also must recognise that Big Tech already has both scale and resources, whereas small and medium-sized enterprises (SMEs) require support to grow. Public funding should act as an 'investor of first resort' to help these businesses overcome the bias toward first movers and expand. Prioritising support for homegrown entrepreneurs and initiatives over dominant foreign companies is crucial.
Finally, since AI platforms extract data from the digital commons (the internet), they are beneficiaries of a major economic windfall. It follows that a digital windfall tax should be applied to help fund open-source AI and public innovation. The United Kingdom needs to develop its own public AI infrastructure guided by a public-value framework, following the model of the EuroStack initiative in the European Union.
AI should be a public good, not a corporate tollbooth. The Starmer government's guiding objective should be to serve the public interest. That means addressing the entire supply chain, from software and computing power to chips and connectivity. The UK needs more investment in creating, organising, and federating existing assets (not necessarily replacing Big Tech's assets entirely). Such efforts should be guided and co-financed under a consistent policy framework that aims to build a viable, competitive AI ecosystem. Only then will the technology create value for society and genuinely serve the public interest.
Mariana Mazzucato, Professor in the Economics of Innovation and Public Value at University College London, is Founding Director of the UCL Institute for Innovation and Public Purpose, Co-Chair of the Global Commission on the Economics of Water, and Co-Chair of the Group of Experts to the G20 Taskforce for a Global Mobilization Against Climate Change. She was Chair of the World Health Organization's Council on the Economics of Health For All. She is the author of The Value of Everything: Making and Taking in the Global Economy (Penguin Books, 2019), Mission Economy: A Moonshot Guide to Changing Capitalism (Penguin Books, 2022), and, most recently, The Big Con: How the Consulting Industry Weakens Our Businesses, Infantilizes Our Governments and Warps Our Economies (Penguin Press, 2023). A tenth anniversary edition of her book The Entrepreneurial State: Debunking Public vs. Private Sector Myths was published by Penguin in September. Tommaso Valletti, Professor of Economics at Imperial College London, is Director of the Centre for Economic Policy Research and a former chief competition economist at the European Commission. Copyright: Project Syndicate, 2024. www.project-syndicate.org