
Latest news with #ShaunDavies

YouTube, Meta, TikTok reveal misinformation tidal wave

The Advertiser

6 days ago

  • Politics
  • The Advertiser


Thousands of misleading videos, scam ads and fake profiles made in Australia have been wiped from online platforms over the past year to address a growing wave of misinformation.

More than 25,000 videos deemed to feature "harmful" fake claims were removed from TikTok and YouTube, reports showed, while unverified and misleading election ads ranked among the most commonly removed content by Meta and Google.

Eight technology companies outlined their actions in transparency reports published on Thursday in accordance with the voluntary Australian Code of Practice on Disinformation and Misinformation. Several tech firms, including social media platforms X and Snapchat, declined to detail their efforts to tackle fraudulent content in Australia.

The statistics follow heightened concern about misinformation online after the emergence of generative artificial intelligence tools, and warnings they may be used to create convincing deepfakes and political ads.

US firms including Google, Meta, Twitch, Apple and Microsoft released transparency reports under the industry code, addressing issues including the identification of misleading claims, safeguards for users, and content removal.

TikTok revealed it removed more than 8.4 million videos from its Australian platform during 2024, including more than 148,000 videos deemed to be inauthentic. Almost 21,000 of the videos violated the company's "harmful misinformation policies" during the year, the report said, and 80 per cent, on average, were removed before users could view them.

Google removed more than 5100 YouTube videos from Australia identified as misleading, its report said, out of more than 748,000 misleading videos removed worldwide.

Election advertising also raised red flags for tech platforms in Australia, with Google rejecting more than 42,000 political ads from unverified advertisers and Meta removing more than 95,000 ads for failing to comply with its social issues, elections and politics policies.

Meta purged more than 14,000 ads in Australia for violating misinformation rules, took down 350 posts on Facebook and Instagram for misinformation, and showed warnings on 6.9 million posts based on articles from fact-checking partners. In January, the tech giant announced plans to end fact-checking in the US, and its report said it would "continue to evaluate the applicability of these practices" in Australia.

Striking a balance between allowing content to be shared online and ensuring it would not harm others was a "difficult job", Digital Industry Group code reviewer Shaun Davies said, and the reports showed some companies were using AI tools to flag potential violations.

"I was struck in this year's reports by examples of how generative AI is being leveraged for both the creation and detection of (misinformation) and disinformation," he said. "I'm also heartened that multiple initiatives that make the provenance of AI-generated content more visible to users are starting to bear fruit."

In its report, Microsoft also revealed it had removed more than 1200 users from LinkedIn for sharing misinformation, while Apple identified 2700 valid complaints against 1300 news articles.

YouTube, Meta, TikTok reveal misinformation tidal wave

West Australian

6 days ago

  • Business
  • West Australian


YouTube, Meta, TikTok reveal misinformation tidal wave

Perth Now

6 days ago

  • Business
  • Perth Now


MPs call for urgent action over 'toxic' male influencers

ITV News

24-04-2025

  • Politics
  • ITV News


ITV News' Political Correspondent Harry Horton speaks to Labour MPs about how to tackle the culture of toxic masculinity and give young men and boys more positive role models.

A new group of Labour MPs want to pressure the government into a radical rethink of how to steer young men and boys away from the culture of toxic masculinity. Each of the eight MPs at the meeting picks out different challenges facing young men and boys.

'We have to get away from a political snobbery,' said Shaun Davies, who represents Telford in Shropshire. 'Which is to say that to talk about men's issues and boys' issues is somehow anti-women or anti-girls. It absolutely is not.'

Jonathan Brash, a former teacher and now Hartlepool MP, said: 'I've been looking at the exclusion rates in secondary school and they're going up and up and up. Why are young men no longer fitting into our education system, and then what happens when they are pushed out of it?'

Rachel Tayler's North Warwickshire constituency is a former mining area. She believes a shift in the type of physical work men often do has had an impact: 'Now they're working in massive logistics factories, all with earpods in or on forklift trucks or operating robots. And they don't see anybody or talk to anybody all day long.'

The conversations around masculinity have been sparked, in part, by the hit Netflix drama 'Adolescence', which tells the story of a 13-year-old boy accused of stabbing a female classmate. Mr Davies said Labour MPs have been pushing for a cross-government approach on issues affecting men long before the TV drama, but admits politicians have to do more. 'There's absolutely a fundamental problem that there is a generation of young boys coming through where there is not an offer for them and they do not have a sense of belonging, and that's a moral outrage that we need to address.'

In Bishop Auckland, the local MP Sam Rushworth wants to hear from pupils about the issues raised by Adolescence. He's invited ITV News to a conversation he's hosting at the school, and there's one name that keeps being brought up by the pupils: Andrew Tate.

'People take him seriously,' said one girl. 'He's got such an influence on people.' One boy said Tate and other male influencers just 'popped up' on his social media feed. 'I thought this might help me learn how to make lots of money. But then when I found out what he did, I straight unfollowed him.'

Some of the boys admit talking about emotions is much more taboo than it is for girls. 'We have this idea that we can't open up as much,' said one year ten boy. 'You don't speak to anyone about them,' said another. 'There's no point. Because most of the time it's someone telling you just to man up.'

Away from politicians, one former teacher is trying to help navigate young men through their own adolescence. Mike Nicholson set up Progressive Masculinity to hold workshops in schools to challenge some of society's expectations of what it means to be a man. 'I noticed while I was a teacher that boys and young men really don't have safe spaces to go and discuss what it can mean to be a man, to explore the potential of masculinity without fear of judgement, without fear of shame or being ridiculed,' he said.

Nicholson said the challenges facing men are not new, but believes the world is now ready to have what he calls 'difficult conversations'. 'I think social media maybe has intensified some of it, but I think these conversations are well overdue.'

So what can be done? The Labour MPs we spoke to have called for a 'cultural shift' in the way the public and private sectors approach the issues faced by young men and boys. Campaigners say there needs to be a 'dedicated strategy' across government. But the challenges are broad, spanning areas such as health, education and the internet. Even the prime minister, who has taken a keen interest in the challenges raised by Adolescence, admits there 'isn't an obvious policy response'. And so the fear some have is that, despite the attention of MPs and the public, young men and boys could slip off the agenda.

New flood walls to be built after £708k cash award

Yahoo

06-04-2025

  • Climate
  • Yahoo


Two permanent flood defence walls are to be installed in Ironbridge to protect vulnerable homes and businesses from the River Severn.

Telford and Wrekin Council has been awarded £708,000 by the government to build the walls along Bower's Yard and Ladywood. The funding will also enable flood resilience measures, such as flood doors and non-return valves, to be installed in homes.

Councillor Carolyn Healy, of the Labour-run council, said the money would "support the vital efforts" to protect the community from the "misery and devastation" that flooding caused. She added that flooding had become more frequent and the water level had risen higher.

The funding is part of a two-year, nationwide government project that will see £2.65bn spent on constructing new flood schemes and maintaining existing defences. The Labour MP for Telford, Shaun Davies, said he was "thrilled" by the investment. A further £16m has also been secured from the nationwide project to improve flood protection across River Severn communities and Shropshire.

Telford and Wrekin Council also plans to launch a separate flood management scheme that will use wireless sensors to monitor silt and water levels in gullies. The measurements will be provided in real time and help improve surface flood management. The new flood walls are set to be installed this year.
