Huge holes in tech anti-terrorism checks
The tech giants have not made changes recommended after the 2019 Christchurch terror attacks, a new report from the Australian eSafety commissioner finds.
'Telegram, WhatsApp and Meta's Messenger did not employ measures to detect livestreamed terrorist and violent extremism despite the fact that the 2019 Christchurch attack was livestreamed on another of Meta's services, Facebook Live,' commissioner Julie Inman Grant said.
'Ever since the 2019 Christchurch attack, we have been particularly concerned about the role of livestreaming, recommender systems and of course now AI, in producing, promoting and spreading this harmful content and activity.'
The report has been released days after NSW police charged a West Australian teenager over alleged online threats towards a mosque in south-western Sydney that directly referenced replicating the Christchurch terror attack.
In a report released on Thursday, Ms Inman Grant points to holes and inconsistencies in how the tech platforms identify violent extremist material and child sexual abuse material.
Human moderators at Reddit and WhatsApp also understand markedly fewer languages than at Meta and Google.
Some gaps are as simple as a login requirement: people browsing Facebook or YouTube cannot report extremist content unless they are logged in.
WhatsApp is owned by Meta, yet it does not ban all of the organisations on Meta's Dangerous Organisations and Individuals list.
Across most tech platforms, a technique called 'hash-matching' is used. Hash-matching generates a unique digital signature (a hash) of an image, which is then compared against the hashes of known extreme material to weed out copies.
Ms Inman Grant said some iterations of hash-matching had error rates as low as one in 50 billion. But Google, which owns YouTube, uses hash-matching only to find 'exact' matches, not altered copies.
'This is deeply concerning when you consider in the first days following the Christchurch attack, Meta stated that over 800 different versions of the video were in circulation,' Ms Inman Grant said.
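The limitation of 'exact' matching can be illustrated with a short, purely hypothetical Python sketch. Here SHA-256 stands in for whatever hashing algorithm the platforms actually use, and the byte strings are invented: even a one-byte alteration, such as the digitally added crosshairs described below, produces a completely different signature, so an exact-match system no longer recognises the copy.

```python
import hashlib

def exact_hash(data: bytes) -> str:
    # A cryptographic hash acts as a unique digital signature for the input.
    return hashlib.sha256(data).hexdigest()

original = b"example video frame bytes"
altered = b"example video frame bytes."  # a single byte changed, e.g. an overlay added

# A byte-identical copy produces the same signature and is flagged...
print(exact_hash(original) == exact_hash(original))  # True
# ...but any alteration, however small, yields an entirely different hash,
# so exact matching misses it. Perceptual hashing schemes, by contrast,
# are designed so that visually similar images produce similar hashes.
print(exact_hash(original) == exact_hash(altered))  # False
```

This is why hundreds of slightly edited variants of a single video can each evade an exact-match filter.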
The New Zealand government quickly classified footage of the livestreamed attack as 'objectionable material', banning possession and distribution. Men and teenage boys were convicted across the country for having copies, many of which had rifle crosshairs or other video game iconography digitally added.
'Telegram said while it detected hashes of terrorist and violent extremist images and videos it had previously removed from its service, it did not utilise databases of known material from trusted external sources such as the Global Internet Forum to Counter Terrorism or Tech Against Terrorism,' Ms Inman Grant said in the report.
In some cases, the platforms themselves are supplying the tools people use to create and view criminal imagery.
In the 12 months to the end of February 2024, Google received hundreds of reports that its own AI tool Gemini was being used to generate terrorist and child exploitation material.
Reports of suspected AI-generated terrorist and violent extremist material totalled 258, alongside 86 user reports of suspected AI-generated, synthetic child sexual exploitation and abuse material.
Google was unable to tell eSafety whether the 344 reports in fact involved such material.
The online safety regulator conducted the research after issuing notices to Google, Meta, WhatsApp, Reddit, Telegram and X requiring each to answer questions about the steps they were taking to implement the Basic Online Safety Expectations with respect to terrorist and violent extremist material and activity. The notices are binding under Australian law.
X is challenging the notice at the Administrative Review Tribunal. Telegram has been fined more than $950,000 for responding late.
An independent inquiry into the 2019 Christchurch terrorist attack concluded New Zealand's government agencies could not have detected the shooter's plan 'except by chance'.
The report detailed how the terrorist was radicalised online and legally acquired semiautomatic weapons before the shooting; New Zealand's government quickly brought in sweeping gun reform.
The Australian terrorist responsible pleaded guilty and was sentenced in New Zealand to life in prison without the chance of parole; an appeal against the sentence and convictions is pending.