
YouTube, Meta, TikTok reveal misinformation tidal wave
Thousands of misleading videos, scam ads and fake profiles made in Australia have been wiped from online platforms over the past year to address a growing wave of misinformation.
More than 25,000 videos deemed to feature "harmful" fake claims were removed from TikTok and YouTube, reports showed, while unverified and misleading election ads ranked among the most commonly removed content by Meta and Google.
Eight technology companies outlined their actions in transparency reports published on Thursday in accordance with the voluntary Australian Code of Practice on Disinformation and Misinformation.
Several tech firms declined to detail their efforts to tackle fraudulent content in Australia, including social media platforms X and Snapchat.
The statistics follow heightened concern about misinformation online after the emergence of generative artificial intelligence tools, and warnings they may be used to create convincing deepfakes and political ads.
US firms including Google, Meta, Twitch, Apple and Microsoft released transparency reports under the industry code, and addressed issues including the identification of misleading claims, safeguards for users, and content removal.
TikTok revealed it removed more than 8.4 million videos from its Australian platform during 2024, including more than 148,000 videos deemed to be inauthentic.
Almost 21,000 of the videos violated the company's "harmful misinformation policies" during the year, the report said, and 80 per cent, on average, were removed before users could view them.
Google removed more than 5100 YouTube videos from Australia identified as misleading, its report said, out of more than 748,000 misleading videos removed worldwide.
Election advertising also raised red flags for tech platforms in Australia, with Google rejecting more than 42,000 political ads from unverified advertisers and Meta removing more than 95,000 ads for failing to comply with its social issues, elections and politics policies.
Meta purged more than 14,000 ads in Australia for violating misinformation rules, took down 350 posts on Facebook and Instagram for misinformation, and showed warnings on 6.9 million posts based on articles from fact-checking partners.
In January, the tech giant announced plans to end fact-checking in the US, and its report said it would "continue to evaluate the applicability of these practices" in Australia.
Striking a balance between allowing content to be shared online and ensuring it would not harm others was a "difficult job," Digital Industry Group code reviewer Shaun Davies said, and the reports showed some companies were using AI tools to flag potential violations.
"I was struck in this year's reports by examples of how generative AI is being leveraged for both the creation and detection of [misinformation] and disinformation," he said.
"I'm also heartened that multiple initiatives that make the provenance of AI-generated content more visible to users are starting to bear fruit."
In its report, Microsoft also revealed it had removed more than 1200 users from LinkedIn for sharing misinformation, while Apple identified 2700 valid complaints against 1300 news articles.