Latest news with #MattClifford


Business News Wales
30-04-2025
Timeline Set Out For AI Growth Zone Scheme
The first hosts of flagship AI Growth Zones are set to be announced in the summer. Investors and local authorities gathered at techUK in London as the UK Government started its formal qualifying process, giving them the opportunity to discuss their proposals and learn more about the vision for AI Growth Zones with AI Minister Feryal Clark and the Prime Minister's AI Adviser Matt Clifford.

The initial Expressions of Interest (EOI) process, which opened earlier this year, drew more than 200 responses. The UK Government says that AI Growth Zones will attract billions in private investment. Potential sites identified across the country through the EOI process include former industrial areas with land and infrastructure ready for redevelopment.

Proposals should demonstrate access to large existing power connections of at least 500MW – enough energy to power two million homes – or set out a clear plan for how they will get there. The qualifying process will also examine other criteria, including site readiness and local impact.

Minister for AI Feryal Clark said: 'Just like coal and steam powered our past, AI is powering the future. Our AI Growth Zones will transform areas across the UK into engines of growth and opportunity – unlocking new jobs and revitalising communities across the UK.

'This is our Plan for Change in action, ensuring the benefits of AI are felt in every region and securing the UK's place as a world leader in this vital technology.'

The Prime Minister's AI Adviser Matt Clifford said: 'The UK has an extraordinary opportunity in AI, but speed is everything. Today's launch sends a clear signal to investors and local communities that we've already moved into high gear.

'I'm looking forward to discussing these proposals in more detail today as we continue to work alongside investors and local authorities to deliver a once-in-a-generation opportunity.'

To mark the launch, Minister Clark and Matt Clifford are leading a series of engagements with investors and MPs to outline the UK Government's vision, bid timelines and qualifying criteria. The first additional sites will then be announced this summer, with an ambition for building work to be underway by the end of 2025.


Telegraph
24-02-2025
Starmer's AI tsar invested in firm given £2.3m government contract to snoop on social media
The Government's artificial intelligence tsar was an investor in a company awarded £2.3 million of taxpayers' money to create a platform that trawls social media for 'concerning' posts.

Matt Clifford, the Prime Minister's AI opportunities adviser, was an investor in Faculty Science until mid-January and has now closed this position. The London-based AI company was awarded a multimillion-pound contract by the Department for Science, Innovation and Technology (DSIT) in November 2024 to build software which can search for 'foreign interference', detect deepfakes and 'analyse social media narratives' online.

The platform is part of the Counter Disinformation Unit (CDU), which was set up in 2019 and sparked widespread criticism for amassing files on journalists, academics and MPs who challenged the government's narrative during the pandemic. The unit has since been rebranded the National Security Online Information Team (NSOIT), and its links to intelligence agencies allow it to avoid intense public scrutiny.

The Telegraph last weekend revealed the government contract to create the Counter Disinformation Data Platform (CDDP), and on Monday the DSIT published an update to Mr Clifford's declaration of interests stating that his position as a Faculty investor was 'now divested in full'.

Mr Clifford is leading the Government's AI strategy and authored the AI Opportunities Action Plan, which was published on Jan 13. The process of divesting the shares was started later that week. It is unknown how large Mr Clifford's personal investment was, or whether he sold the shares for profit.

Changes to his contract, available online, now state that any shareholding changes or sales must be approved by DSIT's permanent secretary. He will also now have to recuse himself from any procurement decisions.

Analysis by Tussell on behalf of The Telegraph reveals that Mr Clifford has 40 other private shareholdings, and 10 of these have also received taxpayer money, totalling more than £7.75 million since 2016. However, none of these companies has received money directly from DSIT, the science department with which Mr Clifford has held roles since 2023.

Mr Clifford is a successful tech entrepreneur whom the Government deliberately sought out to help spearhead its artificial intelligence ambitions. In 2022, he was appointed chairman of the Advanced Research and Invention Agency (ARIA), the £800 million research and development funding agency designed to select ambitious projects in which to invest government money. In 2023, the Conservative government brought Mr Clifford in to sit on the AI Safety Institute Advisory Board, and in 2024 the Labour government charged him with delivering a new AI Opportunities Plan, in which he recommended a new unit, dubbed UK Sovereign AI, to 'incubate and spin out AI companies'.

But the extent of his role with Faculty Science remains unknown at a time when DSIT is under criticism for funding the counter-disinformation platform, which critics say could be used to snoop on the social media posts of British people. An executive summary for the project states: 'While the CDDP has a current national security focus, the tool has the ability to be pivoted to focus on any priority area.'

Jake Hurfurt, head of research and investigations at Big Brother Watch, warned that the Government was still refusing to publish 'huge swathes of information' about the platform.

'Whitehall must be transparent about how its Counter Disinformation Unit plans to use AI to monitor social media, when millions of pounds of public money have been poured into its operation,' he said at the weekend.

'NSOIT's predecessor, the Counter Disinformation Unit, was caught tracking criticism from journalists, activists and even MPs in an assault on free speech, but the Government is still trying to hide this unit in the shadows. There is a risk that the Ministry of Truth lives on.

'Sunlight is the best disinfectant, and with vast amounts of taxpayers' cash on the line, it is time for the Government to be clear on how it monitors social media, and whether it is pressuring social media companies to censor the public's lawful speech.'
Yahoo
11-02-2025
The Paris AI Action Summit was a fork in the road—but whether the chosen path leads to prosperity or disaster remains unclear
Bonjour! Greetings from Paris, where the French government is currently hosting government officials from dozens of nations for what it is calling the AI Action Summit.

The Summit is the successor to two prior international gatherings: the first convened by the U.K. government and held at Bletchley Park, in England, in November 2023, and the second held by the South Korean government in Seoul in May 2024. But it would be hard to overstate the difference in vibe between those previous two meetings and this one.

The Bletchley Summit was a decidedly sober affair, with just 29 governments represented, along with top executives from the handful of AI labs, such as OpenAI, Google DeepMind, and Anthropic, at the cutting edge of AI technology. The conversation was dominated by what some would call AI 'doomerism'—or how to head off the most catastrophic risks from powerful AI. It led to a commitment by the countries present to identify AI risks and work together to head them off. Then in Seoul, 16 leading AI companies agreed to publish frameworks for how they would seek to identify and mitigate AI safety risks, and under what circumstances they might decide not to develop models.

For this Summit, France has taken, shall we say, a different approach. Matt Clifford, a tech investor turned U.K. government advisor who helped plan the Bletchley Summit, said on a panel the Tony Blair Institute hosted here on Sunday that it 'was exciting to see what [the French summit] team have done, in blowing it up.' He positioned the remark as a compliment: France has widened the aperture of the summit to look at AI's other potential risks—around bias, inequality, and job displacement—but most importantly to highlight AI's economic opportunities.

France transformed a summit originally focused on AI safety into what could best be described as an AI festival, complete with glitzy corporate side events and even a late-night dance party held amid the opulent tapestries and neo-baroque gilded mouldings of the French foreign ministry at Quai d'Orsay. That rumbling you can barely make out beneath the thumping bass line? That would be the cognitive dissonance between the party atmosphere in Paris, along with French President Emmanuel Macron's repeated exhortations to move 'faster and faster' on AI deployment, and the fact that executives at leading AI companies are predicting human-level intelligence may arrive in two to five years—with far-ranging, disruptive consequences for society and workers everywhere.

For those who care about AI's potential catastrophic risks, an alternate meaning of Clifford's 'blowing it up' comes to mind. Once the main focus of the summit, AI safety was relegated to a small subset of discussions within a broader 'Trust in AI' pillar, which itself was just one of five separate summit tracks. The word 'safety' was banished from the Summit's name in favor of the term Action—and Anne Bouverot, Macron's special envoy for the Summit, dismissed concerns about AI's potential existential risks as 'science fiction' in her opening address. (Even though there is mounting empirical evidence that today's AI models, if used as agents that carry out actions on a user's behalf, can indeed pose a risk of loss of control—with models seeking to achieve human-assigned goals but doing so in ways the human user never intended.) Safety didn't make an appearance in the Summit's final communique either.
Nor did the final declaration include any clear path forward for future international meetings to work specifically on AI risks. (India, which co-hosted the Paris Summit, said it would host the next summit in its own country, but without any promises of what it would focus on.)

The Paris Summit bitterly disappointed many who work on AI safety. Max Tegmark, the MIT physicist who is the founder and president of the Future of Life Institute, called the Summit 'a tremendous missed opportunity' and the declaration's omission of any safety steps 'a recipe for disaster.' Tegmark, in an earlier interview with Fortune, said he still held out hope that world leaders would come to recognize that uncontrollable human-level AI would pose a risk to their own power, and that when they recognized this fact, they would move to regulate it.

Some AI safety experts think the effort to create international agreements to address AI's risks will have to shift to a different forum. (There are other efforts underway at the United Nations, OECD, and G7.) More than one AI safety expert told me at the Summit that it may now take some sort of 'freak out moment'—when increasingly powerful AI agents cause some sort of harm, or perhaps just demonstrate how easily they could cause harm—to actually get progress on international AI governance. Some predicted that such a moment could come in the next year as more and more companies roll out AI agents and AI model capabilities continue to advance.

While not mentioning 'safety,' the Summit's final declaration did include some vague language about the need to ensure AI's 'diversity,' and lots of talk about 'inclusive' and 'sustainable' AI. The use of these trigger terms guaranteed that the Trump Administration—which sent Vice President J.D. Vance to be the official U.S. representative to the Summit—wouldn't sign the meeting's final declaration. This might not have been Macron's intention, but it did allow him to credibly claim France was leading 'a third way' on AI between the two opposing camps that have been leading the technology's development, the U.S. and China. (China did sign the statement.)

And largely because the U.S. wouldn't sign, the U.K. also decided against signing—apparently to avoid any risk of antagonizing the Trump Administration—although 61 other countries did sign. (Top execs from Google, OpenAI, and Anthropic were all present, but only one company, Hugging Face, the AI model repository and open-source AI champion, signed.)

Anthropic released a statement from its CEO Dario Amodei in which he hinted at disappointment that the Summit hadn't done more to address the looming risks of human-level artificial general intelligence. 'Greater focus and urgency is needed,' Amodei said, 'given the pace at which the technology is progressing. The need for democracies to keep the lead, the risks of AI, and the economic transitions that are fast approaching—these should all be central features of the next summit.'

The Summit did create a new foundation with a $400 million endowment (and a target of $2.5 billion within five years), devoted to funding projects aimed at creating datasets and small AI models designed to serve the public interest. It also created a Coalition on Sustainable AI that includes Nvidia, IBM, and SAP, as well as French energy giant EDF, but without any clear targets or road map for what the organization will do going forward, leaving climate campaigners disappointed.
Union leaders also decried the lack of concrete steps to ensure workers have a clear seat at the table in discussions of AI policy. And the creation of these new organizations was eclipsed by big announcements on AI investment: Macron's own reveal of a 109 billion euro plan for AI investments in France, and the European Union's unveiling of a 200 billion euro plan to speed AI adoption in European industry.

Elon Musk's close ties to U.S. President Donald Trump and Trump's occasional comments about AI's potential dangers had left some in doubt about exactly where the Trump Administration would come down on AI regulation. Vance laid those doubts to rest, delivering a red-meat speech that said U.S. AI policy would be built on four pillars: the maintenance of U.S. AI technology as 'the gold standard'; a belief that excessive regulation could kill innovation and that 'pro-growth' AI policies are required; that AI must 'remain free from ideological bias, and that American AI will not be co-opted into a tool for authoritarian censorship'; and that workers will be consulted on AI policy, with the Trump Administration pledging to 'maintain a pro-worker growth path for AI' in the belief that AI will create more jobs than it displaces.

With Google CEO Sundar Pichai sitting uncomfortably on stage behind him, and OpenAI CEO Sam Altman and Anthropic's Amodei in the audience, Vance also warned that companies calling for AI regulation were attempting to engage in regulatory capture, enshrining rules that would lock in their advantage to the detriment of competitors.

At a time when many companies have been rushing to deploy Chinese startup DeepSeek's R1 reasoning model, Vance also used his speech to caution the countries present against partnering with Chinese companies—although he did not mention China by name. 'From CCTV to 5G equipment, we're all familiar with cheap tech in the marketplace that's been heavily subsidized and exported by authoritarian regimes,' he said. 'As some of us in this room have learned from experience, partnering with them means chaining your nation to an authoritarian master that seeks to infiltrate, dig in and seize your information infrastructure.'

Chinese researchers present at the conference, meanwhile, bemoaned the emerging new cold war between Washington and Beijing, saying that it made the whole world less safe. 'It's difficult to hold a very optimistic view about cooperation between China and the U.S. on AI safety in the future,' Xiao Qian, vice dean of the AI International Governance Institute at Tsinghua University, told the audience at a side event on AI safety in Paris, my Fortune colleague Vivienne Walt reported.

With that, here's more AI news.

Jeremy