
Latest news with #MicrosoftUK

Fit for the future: how organisations can set themselves up to move at the speed of AI

The Guardian

11-03-2025

  • Business
  • The Guardian

Fit for the future: how organisations can set themselves up to move at the speed of AI

AI is the new baseline in how we work and collaborate. For leaders, the question is not whether to invest but how fast.

It's easy to forget it's only two years since generative AI (GenAI) burst into the mainstream. In less time than it takes most people to complete a university degree, it has moved from working quietly in the background of our devices to reinventing life and work as we know it. For many of us, the rapid nature of AI's rise is exciting. For business leaders, it has also made for a dizzying, future-defining ride. As recently as 2023, they were probably still weighing whether to invest in developing an AI strategy at all. Now, in 2025, the more pressing issue for boardrooms is how quickly they can use AI to deliver impact for their organisation at scale.

Small step, giant leap

The answer to that question will look different for everyone, depending on the unique nature of their operating environment, workforce and stakeholder relationships. In each case, the first step to delivering true AI impact is looking beyond marginal gains in worker productivity and efficiency. Instead, the organisations that lead the AI era will be those that use it to completely transform the way they do business.

'AI is not just a tool for efficiency; it's the catalyst for a transformative leap in how organisations innovate and secure their future,' says Chris Perkins, Microsoft UK's General Manager for Enterprise Commercial. 'Taking that leap means being willing to reshape and improve entire systems and processes around AI.'

The good news is that it's starting to happen – Perkins points to telecoms giant Vodafone as a case in point. The company has ambitious plans to use AI to enhance its customer relationships and has signed a strategic partnership with Microsoft, which includes a commitment to embed Microsoft GenAI into its contact centres.
The aim is to help agents deliver a more personalised service to its 350 million customers worldwide, including supercharging TOBi, a multilingual online chatbot operating in 13 countries. Already, this has seen a 20-point rise in the company's net promoter score (a metric used to gauge customer loyalty and satisfaction) – a key differentiator in a highly competitive market. It's also freeing up time spent on monotonous tasks, allowing employees to focus on more varied and interesting work. 'A great example of creating value by aligning your AI strategy with your overall business goals,' says Perkins.

Going beyond technology

Yet success with AI is not only about investing in the technology itself. Establishing the right data infrastructure to support it is key too. By giving their GenAI tools high-quality, well-organised data to reason over, leaders can use them to inform smarter, more insight-driven decisions and, ultimately, take more meaningful actions for their stakeholders and bottom line. Better still, creating this infrastructure needn't be a complex exercise in advanced data engineering. Practical, easy-to-use solutions such as Microsoft Fabric can enable organisations to consolidate their data, create centralised knowledge repositories and let staff quickly and securely access the information they need to perform at their best.

Equally important is the need to focus on people. 'It may sound strange, but the speed of AI advancement means it's a great time to be a human worker too,' says Perkins. 'Not just because it can make us more efficient or productive in our jobs; that's table stakes. But because it opens the door to all kinds of innovation and creativity. The fun stuff that allows us to be more ambitious and courageous in the work we do and the careers we pursue.'

Delivering on this very human promise requires organisations to invest in learning and development programmes that teach employees how to harness new AI tools in their jobs.
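For readers unfamiliar with the metric, net promoter score is computed from 0-10 'how likely are you to recommend us?' survey answers: the share of promoters (9-10) minus the share of detractors (0-6), on a scale from -100 to +100. A minimal sketch (the function name and sample ratings are illustrative, not Vodafone's data):

```python
def net_promoter_score(ratings):
    """Compute NPS from a list of 0-10 recommendation ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) count
    toward the total but neither add nor subtract.
    """
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / n)

# Example: 3 promoters, 2 detractors, 2 passives out of 7 responses
score = net_promoter_score([10, 10, 9, 8, 7, 6, 2])  # → 14
```

A 20-point rise on this -100-to-+100 scale is therefore a substantial swing in the promoter/detractor balance, which is why it serves as a differentiator in a competitive market.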
But more than that, it means creating a culture in which workers feel inspired and engaged enough to help shape the AI journey for themselves – regardless of their role, level or location. Clear communication is therefore paramount. The more proactively and transparently leaders show staff how AI applications will generate better outcomes and experiences, the more confident and empowered workers will feel in using them to enhance their work today – and reinvent it tomorrow.

A great way to do this is to start with small proof-of-concept projects tied to business goals that let employees experiment with AI, fail fast and continually improve. This iterative approach has the added bonus of enabling CIOs, CDOs and their teams to demonstrate measurable outcomes to the board, which in turn makes it easier to secure investment in implementing AI initiatives at scale.

No standing still

Of course, more change is coming. If 2024 was all about using copilots and personal productivity assistants to redefine the modern workplace, this year will see the most forward-thinking organisations shift focus to deploying autonomous digital agents capable of implementing business transformation at scale. Having a common platform like Microsoft Azure that can support this ever-expanding range of AI use cases will therefore become increasingly important.

Yet regardless of where their organisation is on its AI journey, the most important thing for any leader right now is action. Whether they are seeking to accelerate innovation or unlock new markets, strengthen customer relationships or enhance worker experiences, now is the time to ensure their AI strategy is fit to deliver the future value that they and their stakeholders want. The greatest risk isn't over-investing in AI; it's not investing fast enough.

To learn more about accelerating your AI journey, please visit the Microsoft Azure UK home page. This content is paid for and supplied by the advertiser.

It's time to dream bigger: how organisations can improve and disrupt by building their own AI

The Guardian

11-03-2025

  • Business
  • The Guardian

It's time to dream bigger: how organisations can improve and disrupt by building their own AI

Businesses have the data they need to fuel advanced AI – here's what they can do with it.

In his seminal work The Innovator's Dilemma, Harvard Business School professor Clayton Christensen discusses how organisations that focus on sustaining innovations, such as improving existing products to meet the needs of current customers, can miss opportunities to develop more disruptive innovations. These could include new products, services or capabilities – some with the potential to create entirely new markets. It's a classic balancing act that business leaders have to manage – and one that becomes even more urgent in the era of AI, as Soraya Scott, Chief Operating Officer of Microsoft UK, says.

'Business leaders know they need to act now to get ahead on AI, but aren't always sure how to approach it effectively in the short and long term to maximise impact. As a COO, I understand this challenge. My role involves ensuring that systems are in place to enable optimal performance now, while also developing long-term strategic plans and initiatives that drive growth, innovation and resilience. The good news is that a sophisticated approach to AI can achieve both.'

Employees want access to AI now – and won't wait for companies to catch up. Microsoft's Work Trend Index (WTI) shows that three quarters of global knowledge workers are already using generative AI, and that 78% of AI users are bringing their own AI tools to work (BYOAI). However, the biggest benefits of AI are unlocked when organisations securely and responsibly combine the latest AI models with their company's unique information and expertise, and build their own more advanced and strategic AI applications for adoption at scale.

'Organisations are sitting on a treasure trove of untapped potential within their data,' says Scott. 'With the right foundations, such as flexible and secure cloud infrastructure, robust data collection, and clean, diverse datasets, organisations can tap into the gold buried beneath their feet.
By leveraging data as the fuel for more innovative AI development with Azure OpenAI, businesses can turn unstructured data into actionable insights, automate processes and enhance decision-making, as well as create new personalised customer experiences like never before.'

So, what innovative capabilities can organisations unleash when they build their own AI tools, products and services?

The opportunities and capabilities of advanced AI

Vision

Orbital is a groundbreaking legal tech business founded in 2018. Based in London and New York, it offers AI-powered solutions to automate the administrative burden of property-related legal work, effectively mimicking the diligence tasks a real estate lawyer performs today. The business chose to develop its solutions using OpenAI models, including GPT-4o and o1, offered through Microsoft Azure. Orbital has built a custom AI agent, Orbital Copilot, to speed up the process around real estate deals, enabling property professionals to analyse property documents and generate reports in seconds. This proprietary solution uses the AI vision capabilities of Azure to process lengthy, often handwritten and photocopied, legal and property documents. Orbital Copilot is saving legal teams 70% of the time it usually takes to conduct property diligence work.

This is just one example of how AI can augment human performance and accelerate processes by identifying, classifying and contextualising visual information. AI vision can automate an array of static image analysis and recognition tasks, carry out optical character recognition (OCR), and even perform real-time spatial analysis, checking for and reporting on the presence and movement of people or objects – whether that's retail items on a shelf or people in a sports stadium.

Speech

Mercedes-Benz uses the Azure OpenAI Service to enhance its MBUX Voice Assistant.
This in-car voice control enables dynamic conversations, offering car owners a voice assistant that understands more commands and engages in interactive conversations. Based on GPT-4o and Microsoft Bing search, the MBUX Virtual Assistant draws on the collective knowledge of the internet. For example: 'Hey Mercedes, when does the cherry blossom season start in Japan?' – 'And when does it start in Germany?' Or: 'How does a black hole work? Explain it so that children would understand.' Unlike standard voice assistants, which often require specific commands, MBUX excels at handling follow-up questions while maintaining contextual understanding.

AI is incredibly useful for all kinds of speech-related tasks, including transcription, language detection and translation. It can also generate human-like artificial audio for use in everything from audiobooks and announcements to podcasts. AI-powered real-time voice interaction is a game changer for customer service and call centre operations, enhancing efficiency and customer experience.

Decision making

In the retail sector, supermarket chain Iceland – one of Britain's fastest growing and most innovative food retailers – is using data and AI to enable 'business at the speed of thought'. To help surface the right store and business information to colleagues faster, Iceland uses Azure OpenAI to consolidate the organisation's knowledge base and create Genie, an app that employees use to find the information they need, conversationally. Genie has already made a huge difference to how in-store colleagues are trained, as they can search using natural language rather than being limited to exact terms or fuzzy matches. The answers are immediate, targeted and concise, providing a summarised response with links to the source documentation, making the experience quicker and more streamlined.
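Genie's actual implementation on Azure OpenAI isn't described here, but the underlying pattern, rank knowledge-base documents by semantic relevance to a natural-language question, then summarise the best matches, can be sketched in a few lines. This toy version substitutes term-overlap cosine similarity for real embeddings; the function names and sample documents are illustrative assumptions:

```python
import math
from collections import Counter

def vectorise(text):
    # Toy stand-in for an embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, documents, k=1):
    # Return the k documents most similar to the query.
    q = vectorise(query)
    ranked = sorted(documents, key=lambda d: cosine(q, vectorise(d)), reverse=True)
    return ranked[:k]

knowledge_base = [
    "chilled goods must be stored below five degrees",
    "customer returns are processed at the service desk",
    "delivery lorries arrive before seven each morning",
]
best = retrieve("where are customer returns processed", knowledge_base)
```

In a production system the retrieved passages would then be passed to a language model to produce the summarised, source-linked answer described above; swapping `vectorise` for a real embedding service is what makes the search genuinely semantic rather than keyword-based.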
By emulating human-like reasoning and analysing vast amounts of historical and real-time data, AI can deliver new insights that help employees make smarter, better-informed business decisions. This improves organisational agility by empowering employees to adapt to changing conditions faster, with immediate and intuitive access to key information.

Don't wait to get started

Given the range and impact of these more innovative custom-built applications, you can see why IDC's Worldwide AI and Generative AI Spending Guide forecasts that enterprise spending on AI solutions will grow five times faster than worldwide IT spending through to 2027. Once solid data foundations are in place, the best place to start is by focusing on your organisation's most pressing needs. This could be improving customer service, optimising supply chains or enhancing decision-making. Having clear objectives that tie back to your organisation's growth strategy is crucial for guiding AI proof-of-concept development. However, to avoid falling prey to the innovator's dilemma, don't be afraid to dream bigger.

'Rather than selecting a single AI use case for implementation, consider taking a diverse, portfolio approach to AI adoption – developing multiple applications in parallel,' says Scott. 'This "AI factory" approach mitigates risk, typically achieves faster time to value, and increases the chances of those "eureka" moments from which new products and capabilities emerge.'

From enhancing customer experiences to creating entirely new solutions and services, advanced AI empowers organisations to dream bigger and achieve more. Now is the time to start unlocking the untapped value in your data and shaping a brighter future defined by innovation and growth. To learn more, download the eBook Building AI Solutions that Drive Value. This content is paid for and supplied by the advertiser.

Governing AI for the public interest

Arab News

19-02-2025

  • Business
  • Arab News

Governing AI for the public interest

UK Prime Minister Keir Starmer last month published an 'AI Opportunities Action Plan' that includes a multibillion-pound government investment in the UK's artificial intelligence capacity and £14 billion ($17.4 billion) in commitments from tech firms. The stated goal is to boost the AI computing power under public control twentyfold by 2030 and to embed AI in the public sector to improve services and reduce costs by automating tasks.

But governing AI in the public interest will require the government to move beyond unbalanced relationships with digital monopolies. As matters stand, public authorities usually offer technology companies lucrative unstructured deals with no conditionalities attached. They are then left scrambling to address the market failures that inevitably ensue. While AI has plenty of potential to improve lives, the current approach does not set governments up for success.

To be sure, economists disagree on what AI will mean for economic growth. In addition to warning about the harms that AI could do if it is not directed well, the Nobel laureate economist Daron Acemoglu estimates that the technology will boost productivity by only 0.07 percent per year, at most, over the next decade. By contrast, AI enthusiasts like Philippe Aghion and Erik Brynjolfsson believe that productivity growth could be up to 20 times higher (Aghion estimates 1.3 percent per year, while Brynjolfsson and his colleagues point to the potential for a one-off increase as high as 14 percent in just a few months).

Meanwhile, bullish forecasts are being pushed by groups with vested interests, raising concerns over inflated figures, a lack of transparency and a 'revolving door' effect. Many of those promising the greatest benefits also stand to gain from public investments in the sector. What are we to make of the CEO of Microsoft UK being appointed as chair of the UK Department for Business and Trade's Industrial Strategy Advisory Council?
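To see what the cited annual estimates imply over the decade in question, one can simply compound them. The rates below are the economists' figures as quoted above; the function name is ours:

```python
def cumulative_gain(annual_rate, years):
    """Total productivity gain from compounding a constant annual growth rate."""
    return (1 + annual_rate) ** years - 1

# Acemoglu: at most 0.07% per year over the next decade
acemoglu = cumulative_gain(0.0007, 10)  # roughly 0.7% in total

# Aghion: 1.3% per year over the same period
aghion = cumulative_gain(0.013, 10)     # roughly 13.8% in total
```

The gap between the two camps is thus not marginal: compounded over ten years, the optimistic estimate implies a cumulative productivity gain nearly twenty times larger than the pessimistic one, which is why the question of who shapes AI policy, and who profits from it, matters so much.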
The key to governing AI is to treat it not as a sector deserving of more or less support, but rather as a general-purpose technology that can transform all sectors. Such transformations will not be value-neutral. While they could be realized in the public interest, they could also further consolidate the power of existing monopolies. Steering the technology's development and deployment in a positive direction will require governments to foster a decentralized innovation ecosystem that serves the public good.

Policymakers must also wake up to all the ways that things can go wrong. One major risk is the further entrenchment of dominant platforms such as Amazon and Google, which have leveraged their position as gatekeepers to extract 'algorithmic attention rents' from users. Unless governed properly, today's AI systems could follow the same path, leading to unproductive value extraction, insidious monetization and deteriorating information quality. For too long, policymakers have ignored these externalities.

Yet governments may now be tempted to opt for the option that is cheapest in the short term: allowing tech giants to own the data. This may help established firms drive innovation, but it will also ensure that they can leverage their monopoly power in the future. This risk is particularly relevant today, given that the primary bottleneck in AI development is cloud computing power, the market for which is 67 percent controlled by Amazon, Google and Microsoft.

While AI can do much good, it is no magic wand. It must be developed and deployed in the context of a well-considered public strategy. Economic freedom and political freedom are deeply intertwined, and neither is compatible with highly concentrated power. To avoid this dangerous path, the Starmer government should rethink its approach.
Rather than acting primarily as a 'market fixer' that intervenes later to address AI companies' worst excesses (from deepfakes to disinformation), the state should step in early to shape the AI market. That means not allocating billions of pounds to vaguely defined AI-related initiatives that lack clear objectives, which seems to be Starmer's AI plan. Public funds should not be funneled into the hands of foreign hyper-scalers, as this risks diverting taxpayer money into the pockets of the world's wealthiest corporations and ceding control over public sector data. The UK National Health Service's deal with Palantir is a perfect example of what to avoid.

There is also a danger that, if AI-led growth does not materialize as promised, the UK could be left with a larger public deficit and crucial infrastructure in foreign hands. Moreover, relying solely on AI investment to improve public services could lead to their deterioration. AI must complement, not replace, real investments in public sector capabilities.

The government should take concrete steps to ensure that AI serves the public good. For example, mandatory algorithmic audits can shed light on how AI systems are monetizing user attention. The government should also heed the lessons of past missteps, such as Google's acquisition of the London-based startup DeepMind. As the British investor Ian Hogarth has noted, the UK government might have been better off blocking this deal to maintain an independent AI enterprise. Even now, proposals to reverse the takeover warrant consideration.

Government policy must also recognize that Big Tech already has both scale and resources, whereas small and medium-size enterprises require support to grow. Public funding should act as an 'investor of first resort' to help these businesses overcome the first-mover bias and expand.
Prioritizing support for homegrown entrepreneurs and initiatives over dominant foreign companies is crucial.

Finally, since AI platforms extract data from the digital commons (the internet), they are beneficiaries of a major economic windfall. It follows that a digital windfall tax should be applied to help fund open-source AI and public innovation. The UK needs to develop its own public AI infrastructure guided by a public-value framework, following the model of the EuroStack initiative in the EU. AI should be a public good, not a corporate tollbooth.

The Starmer government's guiding objective should be to serve the public interest. That means addressing the entire supply chain, from software and computing power to chips and connectivity. The UK needs more investment in creating, organizing and federating existing assets (not necessarily replacing Big Tech's assets entirely). Such efforts should be guided and co-financed under a consistent policy framework that aims to build a viable, competitive AI ecosystem. Only then can the government ensure that the technology creates value for society and genuinely serves the public interest.

- Mariana Mazzucato is Professor in the Economics of Innovation and Public Value at University College London and the author, most recently, of 'Mission Economy: A Moonshot Guide to Changing Capitalism' (Penguin Books, 2022).
- Tommaso Valletti, Professor of Economics at Imperial College London, is Director of the Centre for Economic Policy Research and a former chief competition economist at the European Commission.
- Copyright: Project Syndicate


Governing AI for the public interest
Governing AI for the public interest

Jordan Times

time12-02-2025

  • Business
  • Jordan Times

Governing AI for the public interest

PARIS — UK Prime Minister Keir Starmer recently published an 'AI Opportunities Action Plan' that includes a multibillion-pound government investment in the UK's AI capacity and £14 billion ($17.3 billion) in commitments from tech firms. The stated goal is to boost the AI computing power under public control 20-fold by 2030, and to embed AI in the public sector to improve services and reduce costs by automating tasks. But governing AI in the public interest will require the government to move beyond unbalanced relationships with digital monopolies. As matters stand, public authorities usually offer technology companies lucrative unstructured deals with no conditionalities attached. They are then left scrambling to address the market failures that inevitably ensue. While AI has plenty of potential to improve lives, the current approach does not set governments up for success. To be sure, economists disagree on what AI will mean for economic growth. In addition to warning about the harms that AI could do if it is not directed well, the Nobel laureate economist Daron Acemoglu estimates that the technology will boost productivity by only 0.07 per cent per year, at most, over the next decade. By contrast, AI enthusiasts like Philippe Aghion and Erik Brynjolfsson believe that productivity growth could be up to 20 times higher (Aghion estimates 1.3 per centper year, while Brynjolfsson and his colleagues point to the potential for a one-off increase as high as 14 per centin just a few months). Meanwhile, bullish forecasts are being pushed by groups with vested interests, raising concerns over inflated figures, a lack of transparency, and a 'revolving door' effect. Many of those promising the greatest benefits also stand to gain from public investments in the sector. What are we to make of the CEO of Microsoft UK being appointed as chair of the UK Department for Business and Trade's Industrial Strategy Advisory Council? 
The key to governing AI is to treat it not as a sector deserving of more or less support, but rather as a general-purpose technology that can transform all sectors. Such transformations will not be value-neutral. While they could be realised in the public interest, they also could further consolidate the power of existing monopolies. Steering the technology's development and deployment in a positive direction will require governments to foster a decentralised innovation ecosystem that serves the public good.

Policymakers also must wake up to all the ways that things can go wrong. One major risk is the further entrenchment of dominant platforms such as Amazon and Google, which have leveraged their position as gatekeepers to extract 'algorithmic attention rents' from users. Unless governed properly, today's AI systems could follow the same path, leading to unproductive value extraction, insidious monetisation, and deteriorating information quality.

For too long, policymakers have ignored these externalities. Yet governments may now be tempted to opt for the short-term cheapest option: allowing tech giants to own the data. This may help established firms drive innovation, but it also will ensure that they can leverage their monopoly power in the future. This risk is particularly relevant today, given that the primary bottleneck in AI development is cloud computing power, the market for which is 67 per cent controlled by Amazon, Google and Microsoft.

While AI can do much good, it is no magic wand. It must be developed and deployed in the context of a well-considered public strategy. Economic freedom and political freedom are deeply intertwined, and neither is compatible with highly concentrated power. To avoid this dangerous path, the Starmer government should rethink its approach.
Rather than acting primarily as a 'market fixer' that will intervene later to address AI companies' worst excesses (from deepfakes to disinformation), the state should step in early to shape the AI market. That means not allocating billions of pounds to vaguely defined AI-related initiatives that lack clear objectives, which seems to be Starmer's AI plan. Public funds should not be funnelled into the hands of foreign hyper-scalers, as this risks diverting taxpayer money into the pockets of the world's wealthiest corporations and ceding control over public-sector data. The UK National Health Service's deal with Palantir is a perfect example of what to avoid.

There is also a danger that if AI-led growth does not materialise as promised, the UK could be left with a larger public deficit and crucial infrastructure in foreign hands. Moreover, relying solely on AI investment to improve public services could lead to their deterioration. AI must complement, not replace, real investments in public-sector capabilities.

The government should take concrete steps to ensure that AI serves the public good. For example, mandatory algorithmic audits can shed light on how AI systems are monetising user attention. The government should also heed the lessons of past missteps, such as Google's acquisition of the London-based startup DeepMind. As the British investor Ian Hogarth has noted, the UK government might have been better off blocking this deal to maintain an independent AI enterprise. Even now, proposals to reverse the takeover warrant consideration.

Government policy also must recognise that Big Tech already has both scale and resources, whereas small and medium-size enterprises (SMEs) require support to grow. Public funding should act as an 'investor of first resort' to help these businesses overcome the first-mover bias and expand. Prioritising support for homegrown entrepreneurs and initiatives over dominant foreign companies is crucial.
Finally, since AI platforms extract data from the digital commons (the internet), they are beneficiaries of a major economic windfall. It follows that a digital windfall tax should be applied to help fund open-source AI and public innovation. The United Kingdom needs to develop its own public AI infrastructure guided by a public-value framework, following the model of the EuroStack initiative in the European Union. AI should be a public good, not a corporate tollbooth.

The Starmer government's guiding objective should be to serve the public interest. That means addressing the entire supply chain, from software and computing power to chips and connectivity. The UK needs more investment in creating, organising, and federating existing assets (not necessarily replacing Big Tech's assets entirely). Such efforts should be guided and co-financed under a consistent policy framework that aims to build a viable, competitive AI ecosystem. Only then can they ensure that the technology creates value for society and genuinely serves the public interest.

Mariana Mazzucato, Professor in the Economics of Innovation and Public Value at University College London, is Founding Director of the UCL Institute for Innovation and Public Purpose, Co-Chair of the Global Commission on the Economics of Water, and Co-Chair of the Group of Experts to the G20 Taskforce for a Global Mobilization Against Climate Change. She was Chair of the World Health Organization's Council on the Economics of Health For All. She is the author of The Value of Everything: Making and Taking in the Global Economy (Penguin Books, 2019), Mission Economy: A Moonshot Guide to Changing Capitalism (Penguin Books, 2022), and, most recently, The Big Con: How the Consulting Industry Weakens Our Businesses, Infantilizes Our Governments and Warps Our Economies (Penguin Press, 2023). A tenth anniversary edition of her book The Entrepreneurial State: Debunking Public vs. Private Sector Myths was published by Penguin in September.

Tommaso Valletti, Professor of Economics at Imperial College London, is Director of the Centre for Economic Policy Research and a former chief competition economist at the European Commission.

Copyright: Project Syndicate, 2024.
