
Latest news with #ChiOnwurah

Dismay as north of England universities miss out on share of £54m talent fund

The Guardian

7 days ago

  • Business
  • The Guardian


Ministers have been accused of undermining innovation in the north of England after five universities from the south-east were awarded £10m to hire overseas talent – with none selected between Birmingham and the University of Strathclyde in Glasgow.

Twelve universities across Britain have been handed a share of the government's £54m global talent fund, which Labour said would 'put rocket boosters' under its innovation plans. The money is intended to attract top international academics and was awarded to three 'golden triangle' universities – Oxford, Cambridge and Imperial College London – and two others from the south-east of England.

MPs and researchers said it was 'deeply disappointing' that no universities in the north of England were chosen, given their strong research record and the government's stated aim to rebalance the UK economy.

Chi Onwurah, the Labour chair of the Commons science, innovation and technology committee, said she was demanding an explanation from the science minister, Patrick Vallance. She said: 'I'm writing to Lord Vallance for further details on how the government selected the 12 institutions that will receive funding, and why no region north of Birmingham was prioritised.

'The process he set out in his response to me is neither evidence-based policy nor place-based policy, nor even policy-based evidence, but some kind of mishmash which does not stand up to scientific scrutiny.

'This funding decision seems to be at odds with the government's stated mission to boost opportunities in every part of the country, and I hope the minister will be able to provide clarity.'

The global talent fund was administered by UK Research and Innovation (UKRI), an arm's-length government body. The Northern Powerhouse Partnership, a thinktank, said the criteria for selecting the universities were 'arbitrary and inconsistently applied' and unfairly overlooked large institutions such as Manchester, Durham and York.
It obtained UKRI's selection criteria under the Freedom of Information Act and found that seven northern universities met a key threshold of spending at least £5m on international research funding: Manchester, Leeds, York, Newcastle, Lancaster, Sheffield and Durham.

It said many of these institutions were unfairly disadvantaged by two further criteria relating to their proportion of overseas staff. Larger universities are less likely to meet the 35% quota of international staff given their size, it said, although the University of Manchester missed out by just 0.3%.

Henri Murison, the chief executive of the Northern Powerhouse Partnership, said it welcomed the principle behind the fund but that it 'undermines innovation in the north'.

'The north isn't pleading, it's simply asking not to be overlooked in favour of special treatment for nations like Wales, which, while important, are far smaller in scale,' he said. 'The north remains the UK's only credible economic counterweight to London and the greater south-east. If the government is serious about delivering growth across every part of the country, it cannot allow UKRI, the intermediary between government and our world-class research councils, to stand in the way of that ambition.'

A government spokesperson said: 'The £54m global talent fund will help some of our leading universities and research institutions to attract top researchers and their teams to the UK. Institutions across all four nations of the UK were selected based on measurable criteria.

'We are determined to maximise the opportunities in every part of the country, including the north of England, through our regional growth strategy, by building more cross-national research partnerships, and by investing our record £22.6bn funding for R&D through schemes like the local innovation partnerships fund.'

UK government urged to offer more transparency over OpenAI deal

The Guardian

22-07-2025

  • Business
  • The Guardian


Ministers are facing calls for greater transparency about public data that may be shared with the US tech company OpenAI after the government signed a wide-ranging agreement with the $300bn (£222bn) company which critics compared to letting a fox into a henhouse.

Chi Onwurah, the chair of the House of Commons select committee on science, innovation and technology, warned that Monday's sweeping memorandum of understanding between OpenAI's chief executive, Sam Altman, and the technology secretary, Peter Kyle, was 'very thin on detail'. She called for guarantees that public data will remain in the UK and clarity about how much of it OpenAI will have access to.

The deal paves the way for the Silicon Valley firm behind ChatGPT to explore deploying advanced AI technology in areas including justice, defence and security, and education. It includes OpenAI and the government 'partnering to develop safeguards that protect the public and uphold democratic values'.

Kyle said he wants Britain to be 'front and centre when it comes to developing and deploying AI' and that 'this can't be achieved without companies like OpenAI'. But the deal has also led to concerns.

Onwurah said: 'We want assurance that there will be transparency over what public data OpenAI will have access to for training and that it will remain in the UK and within the UK's data protection framework and legislation. It's important for public trust that the government is more transparent about how this relationship will work. The public is certainly not convinced that the tech giants are on their side or that AI is on their side. They need to have confidence that the government is on their side.'

She cited 'major failures' in public sector IT procurement including the Post Office Horizon scandal and said: 'We hope and expect that the government has learned the lessons of previous failed technology procurement in its relationship with OpenAI and other AI companies it is bringing into the public sector.'
The Department for Science, Innovation and Technology has been approached for comment.

The deal with OpenAI comes after an agreement this month with Google to provide free technology to the public sector, from the NHS to local councils, and to upskill tens of thousands of civil servants in technology, including AI. Other Silicon Valley companies already working in the UK public sector include Anduril, a US military technology company that provides AI-enabled 'kill web' systems and has been working with the British military.

Google, Amazon, Microsoft and Palantir were among technology companies that attended a meeting last month with the justice secretary, Shabana Mahmood, at which ideas were suggested to insert tracking devices under offenders' skin and assign robots to contain prisoners.

The latest agreement includes OpenAI's possible participation in the government's plan for 'AI Growth Zones', which could see huge datacentres built around the UK. Altman said the agreement would enable the UK government to realise the potential of its AI policy by 'turning ambition to action and delivering prosperity for all'.

But Martha Dark, the executive director of Foxglove, a campaign group for fairer technology, called the level of detail 'hopelessly vague'. 'The British government has a treasure trove of public data that would be of enormous commercial value to OpenAI in helping to train the next incarnation of ChatGPT,' she said. 'This is yet more evidence of this government's credulous approach to Big Tech's increasingly dodgy sales pitch. Peter Kyle seems bizarrely determined to put the Big Tech fox in charge of the henhouse when it comes to UK sovereignty.'

Sameer Vuyyuru, the chief AI and product officer at Capita, another provider of AI services to the public sector, said there was now 'a complete acknowledgment that AI plays a role in the future of public services'. But he said there was a gap between public bodies' desire for efficiency savings and their understanding of how best to procure AI services.

'The public sector is viewed as one of the most fertile areas for the implementation of AI,' he said, adding that fertility meant radically increased public sector efficiency as well as revenue growth for providers. He said AI agents would typically operate on, rather than take ownership of, public data.

While AI use is now 'minuscule', he said up to 50% of often 'mind-numbing and menial' public service tasks could benefit from AI. This could mean cutting waiting times for renewing a driving licence, applying to join the army or applying for tuition subsidies, by increasing the number of cases a civil servant could process from 10 a day to 30 or even 50 with the assistance of an AI agent.

Scammers target struggling graduates with fake job ads

Telegraph

15-07-2025

  • Business
  • Telegraph


Scammers are targeting struggling university graduates and school leavers with fake job adverts on social media amid a slump in entry-level roles. Fraudsters are turning to Instagram and TikTok to deceive young people by impersonating popular job boards to steal their data and money.

JobsAware, a non-profit organisation that provides free employment advice, said it had received 120 reports of scammers using TikTok and Instagram to deceive graduates in the year to July, a significant surge compared with just 13 a year earlier.

Keith Rosser, chairman of JobsAware, said: 'Over the past three years we've seen this explosion in the use of TikTok, WhatsApp and other [social media platforms] to really scale this fraud.' Compared with traditional jobs boards, social media sites push content in front of users rather than relying on someone seeking out information about a role.

There has also been a significant rise in the number of job scams on WhatsApp, JobsAware said, with 412 reports of fake roles recorded on the messaging platform during 2024/25, compared with 161 reports the year prior.

Dame Chi Onwurah, chairman of the parliamentary science, innovation and technology committee, said: 'It's concerning to see reports that fraudsters are using social media to scam young people who are just looking for jobs. My committee has found that the UK's current online safety regime is woefully insufficient to keep users safe online.'

The rise in job scams comes amid a challenging labour market for university graduates and school leavers, as companies cut back on recruiting for entry-level roles following Rachel Reeves's £25bn National Insurance (NI) raid. Graduate job postings in the 12 months to June are down 33pc compared with a year earlier, according to the jobs site Indeed. The UK's labour market has cooled significantly in recent months as a growing number of businesses freeze hiring in response to the Chancellor's changes to NI paid by employers, which took effect in April.

The number of vacancies in the three months to May fell to 781,000, according to the Office for National Statistics, the lowest level since the pandemic.

Mr Rosser of JobsAware said the fall in vacancies, and a rise in the number of people seeking a second job or additional income, was also fuelling scams. He added: 'There's almost a greater need or desperation for people to get work and fishing in a smaller pool as it were… I think it definitely leads people to a place where the higher the need and urgency, the more susceptible they are to be scammed.'

Over the past year, JobsAware has reported a rise in scammers using AI to make their fake job adverts and documents look increasingly realistic. Mr Rosser said: 'It could be phoney contracts of employment … even fake ID documents to prove who the company is. What we found is that AI is being utilised by fraudsters to make the deception look more convincing.'

TikTok said it only allows job adverts from companies that are officially registered with the Financial Conduct Authority. Meta, which owns Facebook, Instagram and WhatsApp, declined to comment.

A government spokesman said: 'Under the Online Safety Act, all platforms must now proactively tackle illegal fraudulent material, including false representation and scam ads which accounts post directly and promote on users' feeds. Once further codes are in force, major platforms will also have to clamp down on traditional paid-for adverts.'

Social media incentivised spread of Southport misinformation, MPs say

The Guardian

11-07-2025

  • Politics
  • The Guardian


Social media business models endangered the public by incentivising the spread of dangerous misinformation after the 2024 Southport murders, MPs have concluded, adding that current online safety laws have 'major holes'.

The Commons science and technology select committee called for new multimillion-pound fines for platforms that do not set out how they will tackle the spread of harmful content through their recommendation systems.

The MPs warned that rapid advances in generative artificial intelligence, which allows for the creation of convincing fake videos, could make the next misinformation crisis 'even more dangerous' than last August's violent protests after three children were killed by a man wrongly identified online as an asylum seeker who had arrived by small boat. They also called for AI-generated content to be visibly labelled and said divisive and deceptive content amplified on social media after the attacks may have been part of a foreign disinformation operation.

'It's clear that the Online Safety Act [OSA] just isn't up to scratch,' said Chi Onwurah, the committee chair, after a seven-month inquiry. 'The government needs to go further to tackle the pervasive spread of misinformation that causes harm but doesn't cross the line into illegality. Social media companies are not just neutral platforms but actively curate what you see online, and they must be held accountable.'

Neither misinformation nor disinformation is a harm that firms need to address under the OSA, which received royal assent less than two years ago. State-sponsored disinformation can amount to an offence of foreign interference.

The report examines the role of platforms including X, Facebook and TikTok, and comes after this week's opening of a public inquiry into missed opportunities to prevent the killing of Bebe King, six, Elsie Dot Stancombe, seven, and Alice da Silva Aguiar, nine, on 29 July last year.

Just over two hours after the first call to the emergency services, a post on X claimed the suspect was a 'Muslim immigrant', and within five hours a false name, 'Ali al-Shakati', was circulating on the same platform, the MPs found. Within a day, these two posts had received more than 5m views. In fact, the attacker was Axel Rudakubana, a British citizen born in Cardiff.

Another X post that evening calling for violence towards asylum hostels received more than 300,000 views, and the next day the false name was on X's 'Trending in the UK' list. TikTok suggested to users under its 'others searched for' function the words 'Ali al-Shakti arrested in Southport', and by the end of the day after the attack social media posts with the false name had accrued 27m impressions and violence had broken out outside Southport mosque. On 3 and 4 August a Facebook post called for violence against the Britannia hotel in Leeds, where many occupants were asylum seekers.

The committee called for fines of at least £18m if platforms do not set out how they will tackle significant harms that derive from content promoted by their recommendation systems, even if that content is not illegal. It concluded: 'The act fails to keep UK citizens safe from a core and pervasive online harm.'

It called on the government to make social media platforms 'identify and algorithmically deprioritise factchecked misleading content, or content that cites unreliable sources, where it has the potential to cause significant harm'. But it stressed: 'It is vital that these measures do not censor legal free expression.'

The MPs called on ministers to extend regulatory powers to tackle social media advertising systems that allow 'the monetisation of harmful and misleading content', with penalties rising depending on severity and the proceeds used to support victims of online harms. The Department for Science, Innovation and Technology has been approached for comment.

Ofcom said it held platforms to account over illegal content but stressed that the scope of laws requiring platforms to tackle legal but harmful content was a matter for the government and parliament. A spokesperson said: 'Technology and online harms are constantly evolving, so we're always looking for ways to make life online safer. We're proposing stronger protections including asking platforms to do more on recommender systems and to have clear protocols for responding to surges in illegal content during crises.'

TikTok said its community guidelines prohibited inaccurate, misleading or false content that may cause significant harm, and that it worked with factcheckers and made any content that could not be verified as accurate ineligible for its 'for you' feed. X and Meta were approached for comment.

