It's a $US450 billion industry, and Australia is in prime position to become a player

The most important and sensitive part? Trump agreed that the emirates could buy the cutting-edge computer chips made by America's Nvidia that are powering the global revolution in artificial intelligence. The UAE's ambition is to become a leading global powerhouse in AI, the fastest-growing field of tech investment.
It's so sensitive that some of Trump's officials reportedly dissented, complaining that it risked the loss of one of America's few remaining technological advantages. The AI edge could 'leak' from the UAE to America's greatest rival, China, they feared. Washington bans the sale of the top-line chips to China. Trump signed the deal anyway.
The Wall Street Journal reported this deal as a 'coup' for Abu Dhabi: 'The United Arab Emirates has fewer citizens than the population of West Virginia. But an agreement to give the UAE coveted access to millions of the most advanced chips from Nvidia shows that the tiny, oil-rich Gulf monarchy knows how to play a clever economic game in the age of Trump.' The UAE's neighbour, Saudi Arabia, has announced a similar ambition to build an 'AI zone', supplied by Nvidia chips.
What does any of this have to do with Australia? First, the potential for China to conduct effortless sabotage of the US power grid shows that there is an accelerating need for nations to establish trusted supply chains for sensitive goods – especially now that a new global supply chain is about to be constructed for AI. The global market for AI infrastructure is worth around $US450 billion, according to Frank Holmes of US Global Investors. Australia is in prime position to be part of a trusted supply chain. The Biden administration listed Australia in the category of most-trusted nations.
Second, the UAE play shows that a fast-moving country can stake a claim to an industry that, to now, has been restricted largely to just three territories – the US, China and Taiwan.
And the moment is ripe for Australia, too, according to the official who led US tech security policy in Joe Biden's White House, Tarun Chhabra. He was in Australia around the time of the federal election and observes: 'I was struck by the sense of urgency and opportunity after the Australian election – "if not now, when?" – for critical decisions at the nexus of technology policy and China policy.' The Albanese government is well-placed to seize the moment, he says.
'There's a healthy debate in Australia, as in many countries, about what sort of AI regulation to pursue. That is, of course, important. But there is also an opportunity to develop a strategy for growth and AI adoption, and attracting leading AI firms, especially US firms,' he tells me.
AI has crossed a key threshold. It was a subject of fascination in 2023 when ChatGPT was launched, he says. 'It was "look at what the chatbot can do!"' And now? 'We are into the industrial application phase now. We could see in 2027-28 models as capable as the best humans in many fields of knowledge.'
Australian companies are alert to AI's potential for boosting productivity. The Tech Council of Australia's annual survey shows that AI is 'the defining technology trend' for 67 per cent of tech leaders. The council estimates that AI has the potential to create 200,000 jobs and $115 billion in economic value in Australia over the next five years.
'I think there's an opportunity for a national-level strategy to promote Australia as a hub for AI, to recruit talent' – especially now that the US is repelling skilled talent more than attracting it – 'as well as to build infrastructure and attract leading companies developing AI models and industrial applications,' says Chhabra, formerly the US National Security Council Coordinator for Technology and National Security.
'And then there's also the national security layer – an opportunity to adopt frontier AI in the defence and intelligence establishment, and also to attract leading defence industrial base firms that are software-centric and, increasingly, AI-centric.'
Chhabra, an adviser to the Garnaut Global consultancy founded by Australia's John Garnaut and to the US AI start-up Anthropic, cites another Australian advantage: the domestic superannuation sector with its $4 trillion in funds. 'There's an opportunity for democratic capital to seize this window as we see transformative technology emerging,' he said.
'What capital and what energy can be mobilised? Australia's energy potential is enormous, and its geopolitical risk is lower.'


Related Articles

Mining giant looks to limit emissions by electrifying refining process

West Australian

A South West mining giant is looking to limit emissions by electrifying its heavily polluting refining process with help from a $4.4 million grant. South32 received funding from the Australian Renewable Energy Agency to support the development of steam electrification pathways at the Worsley Alumina Refinery in the South West.

The alumina refining industry is the country's biggest user of industrial process heat, collectively responsible for about 15 million metric tonnes of CO2 emissions in 2021 — 3 per cent of Australia's total greenhouse gas emissions that year. Currently, close to 70 per cent of these emissions are produced from steam production in the alumina refining process, fuelled by fossil fuel sources such as coal and gas. With the sector identified as a hard-to-abate polluter, finding a way to reduce emissions is needed.

The identified options include electric boilers, which generate steam directly using an electrode, and mechanical vapour recompression, which captures low-pressure waste vapour from the refining process and recompresses it to create pressurised steam for reuse. Paired with renewable energy, these technologies have the potential to significantly reduce alumina production's contribution to overall emissions.

ARENA CEO Darren Miller said the study was a significant step towards making low emissions alumina and decarbonising Australian metals production. 'Meeting Australia's emissions reduction targets will require businesses in the most energy intensive industries to incorporate renewables in their operations,' he said. 'Funding from ARENA will help South32 investigate innovative electrification options for steam generation that enable the use of renewable energy.'

South32 chief operating officer Vanessa Torres said the company had a long-term goal to achieve net zero emissions across all scopes by 2050, alongside the Federal Government's target, and to halve the company's overall emissions by 2035 from its 2021 baseline. 'Decarbonising our operations is key to achieving our goals and targets,' she said. 'The pre-feasibility study that we will undertake at Worsley Alumina, with funding support from the Australian Renewable Energy Agency, builds on the work already under way to reduce Worsley Alumina's greenhouse gas emissions. Electrification of the steam generation process at Worsley Alumina's refinery has the potential to further reduce the operation's greenhouse gas emissions and we look forward to starting work on the project. We welcome the support from ARENA and look forward to the outcomes of the study.'

YouTube, Meta, TikTok reveal misinformation tidal wave

The Advertiser

Thousands of misleading videos, scam ads and fake profiles made in Australia have been wiped from online platforms over the past year to address a growing wave of misinformation. More than 25,000 videos deemed to feature "harmful" fake claims were removed from TikTok and YouTube, reports showed, while unverified and misleading election ads ranked among the most commonly removed content by Meta and Google.

Eight technology companies outlined their actions in transparency reports published on Thursday in accordance with the voluntary Australian Code of Practice on Disinformation and Misinformation. Several tech firms, including social media platforms X and Snapchat, declined to detail their efforts to tackle fraudulent content in Australia.

The statistics follow heightened concern about misinformation online after the emergence of generative artificial intelligence tools, and warnings they may be used to create convincing deepfakes and political ads. US firms including Google, Meta, Twitch, Apple and Microsoft released transparency reports under the industry code, and addressed issues including the identification of misleading claims, safeguards for users, and content removal.

TikTok revealed it removed more than 8.4 million videos from its Australian platform during 2024, including more than 148,000 videos deemed to be inauthentic. Almost 21,000 of the videos violated the company's "harmful misinformation policies" during the year, the report said, and 80 per cent, on average, were removed before users could view them. Google removed more than 5100 YouTube videos from Australia identified as misleading, its report said, out of more than 748,000 misleading videos removed worldwide.

Election advertising also raised red flags for tech platforms in Australia, with Google rejecting more than 42,000 political ads from unverified advertisers and Meta removing more than 95,000 ads for failing to comply with its social issues, elections and politics policies. Meta purged more than 14,000 ads in Australia for violating misinformation rules, took down 350 posts on Facebook and Instagram for misinformation, and showed warnings on 6.9 million posts based on articles from fact-checking partners. In January, the tech giant announced plans to end fact-checking in the US, and its report said it would "continue to evaluate the applicability of these practices" in Australia.

Striking a balance between allowing content to be shared online and ensuring it would not harm others was a "difficult job", Digital Industry Group code reviewer Shaun Davies said, and the reports showed some companies were using AI tools to flag potential violations. "I was struck in this year's reports by examples of how generative AI is being leveraged for both the creation and detection of (misinformation) and disinformation," he said. "I'm also heartened that multiple initiatives that make the provenance of AI-generated content more visible to users are starting to bear fruit."

In its report, Microsoft also revealed it had removed more than 1200 users from LinkedIn for sharing misinformation, while Apple identified 2700 valid complaints against 1300 news articles.
