Westacott, Fagg lead corporate Australia's King's Birthday honours

Jennifer Westacott was sitting in her office at Western Sydney University when an email dropped in her inbox informing her she'd been given Australia's highest honour: an appointment as Companion of the Order of Australia.
'It's obviously very humbling and a great honour,' said the former Business Council of Australia chief executive, who joined WSU as chancellor in 2023. Recognition like the letters AC after one's name was always a team effort, she said on Monday. 'You never do these things alone.'

Related Articles

Westacott, Fagg lead corporate Australia's King's Birthday honours

AU Financial Review

3 hours ago


Calls for AI to remain 'unfettered' by Labor for business 'innovation'

Sky News AU

6 days ago


Sky News host Peta Credlin discusses the calls for artificial intelligence to be left 'unfettered' by the Albanese government in the name of 'innovation'. 'Something that will be ubiquitous in a couple of years' time, it is already here, it is going to grow and grow,' Ms Credlin said. 'The Business Council of Australia, they are warning the government not to get too heavy-handed … they want artificial intelligence unfettered because they want innovation.'

Australia risks missing AI boost if it gets rules wrong

The Advertiser

6 days ago


Australia could miss out on vital health, entertainment and productivity breakthroughs if it introduces rigid artificial intelligence rules rather than taking a balanced approach like some of its neighbours.

Google-Alphabet Senior Vice President James Manyika issued the warning while attending an AI summit in Sydney on Tuesday to discuss the technology's potential uses in Australia. The executive also warned the risks of AI misuse were real and its governance would require careful consideration in each industry.

His warnings come a day after the Business Council of Australia called for greater support, rules and innovation in AI to boost the nation's productivity, and as the federal government considers regulations following an inquiry.

Rules about the use of AI technology vary across the world, from the more restrictive approach of the European Union to the hands-off style adopted in the US. A "two-sided" balance between the risks and rewards of AI - similar to rules adopted by Singapore - should be Australia's goal, Mr Manyika said.

"On the one hand, you want to address the risks and complexities that come with this technology, which are quite real, but on the other side, you want regulation that enables all the beneficial applications and possibilities," he told AAP. "Both are important and I'm hoping Australia takes that approach."

Allowing businesses to experiment with AI technology would be crucial to boosting productivity, he said, as would financially supporting research opportunities, because breakthroughs in health and social issues would not "happen automatically".

Restrictions should target individual high-risk uses, the former United Nations AI policy co-chair said, and focus on the industries where the risks are highest. "A sector-based approach is important because often underlying technology applied in one sector may be perfectly fine - but applied in a sector, like financial services or health care, it is absolutely not," he said.

Google announced several AI advancements at its annual developers conference in the United States in May, including plans to build a universal AI assistant and make changes to web searches. But the internet giant's video-generating AI tool Veo 3 arguably grabbed the most attention as it created audio that appeared to come from the mouths of AI-crafted characters.

The development, like others in AI video creation, had the potential to make traditional filmmakers nervous, Mr Manyika said. But it could also play an important role as a tool in designing productions rather than replacing them.

"Many start with fear and concern but often, when they have actually played with the tools and also been part of ... collaborations we've done ... that's generated a lot of excitement," he said. "Scrappier filmmakers have been thrilled because (they) can do previews with a hundred possibilities and then decide which one (they're) actually going to shoot."

The federal government issued voluntary AI rules in September but has not created mandatory guardrails for high-risk AI use. A Senate inquiry made 13 recommendations in November, including introducing a dedicated AI law and ensuring general AI services such as Google Gemini and OpenAI's ChatGPT fall under it.
