Latest news with #JamesManyika


South China Morning Post
4 days ago
- Business
Who will win the AI contest of the century?
The arrival of artificial intelligence, robotics and new technology has been heralded as a game changer. But how will it unfold, and who will be able to take advantage of AI to win the contest of the century: the United States, China or some other country?

Writing in Foreign Affairs in 2023, James Manyika, senior vice-president of technology and society at Google, and Nobel laureate economist Michael Spence argued that 'by the beginning of the next decade, the shift to AI could become a leading driver of global prosperity'. These gains would come not just from rapid advances in AI's ability to create new content and applications in daily life, but also from its rapid spread through the democratisation of innovation. But for that to happen, Manyika and Spence say, we need a new policy framework as well as a new mindset towards AI, one in which 'AI technologies must be embraced as tools that can enhance, rather than undermine, human potential and ingenuity'.

I think the AI revolution has arrived faster than expected. So far, it looks like the US and China are in a two-horse race, with the others still struggling to catch up for various reasons. My thesis is that no country can afford not to encourage AI adoption to enhance national productivity and avoid the digital knowledge divide. Those who do not will become marginalised. The contest of the century is thus between all countries.

In this cutthroat race, with Big Tech seeking to dominate the 'pay-by-subscription' game, we may end up being mentally and financially colonised in different tech domains. This is where the Global South can innovate its way through open-access systems that meet individual or local needs, without giving away valuable data by choosing to follow one algorithm platform or becoming locked into 'pay for upgrade' contracts.


The Advertiser
03-06-2025
- Business
Australia risks missing AI boost if it gets rules wrong
Australia could miss out on vital health, entertainment and productivity breakthroughs if it introduces rigid artificial intelligence rules rather than taking a balanced approach like some of its neighbours.

Google-Alphabet senior vice-president James Manyika issued the warning while attending an AI summit in Sydney on Tuesday to discuss the technology's potential uses in Australia. The executive also warned the risks of AI misuse were real and its governance would require careful consideration in each industry. His warnings come a day after the Business Council of Australia called for greater support, rules and innovation in AI to boost the nation's productivity, and as the federal government considers regulations following an inquiry.

Rules about the use of AI technology vary across the world, from the more restrictive approach of the European Union to the hands-off style adopted in the US. A "two-sided" balance between the risks and rewards of AI - similar to rules adopted by Singapore - should be Australia's goal, Mr Manyika said. "On the one hand, you want to address the risks and complexities that come with this technology, which are quite real, but on the other side, you want regulation that enables all the beneficial applications and possibilities," he told AAP. "Both are important and I'm hoping Australia takes that approach."

Allowing businesses to experiment with AI technology would be crucial to boosting productivity, he said, as would financially supporting research opportunities, because breakthroughs in health and social issues would not "happen automatically". Restrictions should be applied to individual high-risk uses, the former United Nations AI policy co-chair said, by focusing on the industries with the highest risks. "A sector-based approach is important because often underlying technology applied in one sector may be perfectly fine - but applied in a sector like financial services or health care, it is absolutely not," he said.

Google announced several AI advancements at its annual developers conference in the United States in May, including plans to build a universal AI assistant and make changes to web searches. But the internet giant's video-generating AI tool Veo 3 arguably grabbed the most attention as it created audio that appeared to come from the mouths of AI-crafted characters. The development, like others in AI video creation, had the potential to make traditional filmmakers nervous, Mr Manyika said, but it could also play an important role as a tool for designing productions rather than replacing them. "Many start with fear and concern but often, when they have actually played with the tools and also been part of ... collaborations we've done ... that's generated a lot of excitement," he said. "Scrappier filmmakers have been thrilled because (they) can do previews with a hundred possibilities and then decide which one (they're) actually going to shoot."

The federal government issued voluntary AI rules in September but has not created mandatory guardrails for high-risk AI use. A Senate inquiry made 13 recommendations in November, including introducing a dedicated AI law and ensuring general AI services such as Google Gemini and OpenAI's ChatGPT fall under it.
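To make the sector-based idea concrete, here is a purely illustrative Python sketch, assuming a toy tiering scheme: every name, sector and tier below is hypothetical and does not correspond to any real regulatory framework or API. It captures the point in Manyika's quote that the same underlying capability can be acceptable in one sector and restricted in another.

```python
# Illustrative sketch only: a toy version of "sector-based" AI risk tiering.
# All capability, sector and tier names are hypothetical.
from enum import Enum


class Tier(Enum):
    UNRESTRICTED = "unrestricted"   # general-purpose, low-stakes uses
    ASSESSED = "assessed"           # requires an impact assessment first
    PROHIBITED = "prohibited"       # disallowed without explicit approval


# The same capability lands in different tiers depending on the sector.
SECTOR_TIERS = {
    ("text_generation", "entertainment"): Tier.UNRESTRICTED,
    ("text_generation", "financial_services"): Tier.ASSESSED,
    ("automated_decision", "health_care"): Tier.PROHIBITED,
}


def tier_for(capability: str, sector: str) -> Tier:
    """Look up the tier for a capability deployed in a sector,
    defaulting to 'assessed' when the combination is unknown."""
    return SECTOR_TIERS.get((capability, sector), Tier.ASSESSED)


if __name__ == "__main__":
    # Identical technology, different sectors, different treatment.
    print(tier_for("text_generation", "entertainment"))   # Tier.UNRESTRICTED
    print(tier_for("automated_decision", "health_care"))  # Tier.PROHIBITED
```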

9 News
03-06-2025
- Business
Google's new AI tool creates the most realistic videos yet
Google's new AI video generator Veo 3 - announced just last month - is causing shockwaves online with its photorealistic content. The AI video generator can create eight-second videos within moments of receiving a prompt. Content like short films, fake street interviews, sci-fi, action and other prompts is appearing all over social media; you may have seen it already without even knowing it's AI.

[Image: Google's own prompt of an "old sailor" showcases just how impressive the quality of the AI is. (Google)]

Google's "state-of-the-art video generation model" claims to stand out from other AI video generators thanks to innovations "designed for greater control". Notably, the tool now produces realistic soundscapes featuring audio and dialogue and can be fine-tuned to keep characters consistent across different video clips. Users can also precisely adjust framing and movement, and Veo 3 can even use videos of yourself as a reference point for animating facial expressions and emotions. The tool has been available through Google's paid AI plans and is accessible through its AI chatbot Gemini and new creative tool Flow. The technology is set to cause an even greater shift within the creative industry as filmmakers experiment with it.

[Image: Some social media users noted the "uncanny" nature of the AI videos. (Google)]

AI and its future use were the topic of the Australian Financial Review's AI summit held today, with Google Australia and New Zealand managing director Melanie Silva saying the technology will be critical to lifting Australia's productivity. "Everything we know of can be faster and easier," she told the AFR summit. "If we put a productivity lens around it and think about how Australia might solve the 10-year productivity slump that we are in, this is by far one of the biggest solutions."

Google's senior vice president for research, labs, technology and society James Manyika said Australia would need to capitalise on the opportunities provided by AI. "It's fundamentally important in our minds to have a very vibrant research and AI ecosystem," Manyika told the summit. "I think Australia has a great starting point. The CSIRO is an extraordinary entity ... but more may actually be needed."
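The article describes Veo 3 as prompt-driven: a short text prompt in, an eight-second clip (with audio) out. As a rough illustration of what that workflow looks like programmatically, here is a minimal sketch using Google's google-genai Python SDK, where video generation runs as a long-running operation that is polled until the clip is ready. The model identifier ("veo-3.0-generate-preview") and access to it are assumptions tied to your plan; check the current API documentation.

```python
# Minimal sketch of prompt-to-video with the google-genai SDK.
# The model id below is an assumption; Veo access and naming depend on
# your plan and the current Google AI documentation.
import time

from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# Kick off generation: video models run as long-running operations.
operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",  # assumed identifier for Veo 3
    prompt="An old sailor recites a poem on a storm-battered pier, waves crashing",
)

# Poll until the operation completes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download and save each generated clip (typically one ~8-second video).
for n, generated in enumerate(operation.response.generated_videos):
    client.files.download(file=generated.video)
    generated.video.save(f"veo_clip_{n}.mp4")
```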


Bloomberg
10-02-2025
- Business
How Google, DeepMind View AI, Regulation, Productivity
Demis Hassabis, CEO and co-founder of DeepMind Technologies, and James Manyika, senior VP for technology and society at Google, speak to Bloomberg's Tom Mackenzie from the sidelines of the Paris AI summit at the Grand Palais. They discuss artificial intelligence, the impact of DeepSeek on the broader industry, productivity and regulation in the US and the European Union. (Source: Bloomberg)

Yahoo
05-02-2025
- Business
Google drops pledge not to use artificial intelligence for weapons
Google has removed a pledge from its artificial intelligence (AI) principles that had said the company would not use the technology to develop weapons. The technology giant has rewritten the principles that guide its development and use of AI - which are published online - but a section pledging not to develop tech 'that cause or are likely to cause harm' has now been removed. That section had said the firm would not pursue applications in the areas of weapons or 'that gather or use information for surveillance violating internationally accepted norms'. Instead, the newly streamlined principles feature a section on 'responsible development and deployment' which says the tech giant will implement 'appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights'.

In a blog post, Google senior vice-president James Manyika and Demis Hassabis, who leads the firm's AI lab, Google DeepMind, said the company needed to update its AI principles because they had first been published in 2018 and the technology has 'evolved rapidly' since then. 'Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications,' they said. 'It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself; one with numerous beneficial uses for society and people around the world, supported by a vibrant AI ecosystem of developers.'

They said this had led to increased international collaboration on common principles, which the blog post said Google was 'encouraged' by. But Manyika and Hassabis said 'global competition' for AI leadership was taking place within an 'increasingly complex geopolitical landscape'. 'We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,' they said. 'And we believe that companies, governments, and organisations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.'

There is an ongoing debate among AI experts, governments, regulators, tech firms and academics about how the development of this powerful emerging technology should be monitored or regulated. Previous international summits have seen countries and tech firms sign non-binding agreements to develop AI 'responsibly', but no binding international law on the issue is yet in place. In the past, Google's contracts to provide technology, such as cloud services, to the US and Israeli military have sparked internal protests from employees.

James Fisher, chief strategy officer at AI firm Qlik, said Google's decision was concerning, and highlighted the need for countries such as the UK to push for more international governance. 'Changing or removing responsible AI policies raises concerns about how accountable organisations are for their technology, and around the ethical boundaries of AI deployment,' he told the PA news agency. 'AI governance will of course need to flex and evolve as the technology develops, but adherence to certain standards should be a non-negotiable.

'For businesses, this decision shows we are likely to face a complex AI landscape going forwards, where ethical considerations are weighed up against industry competition and geopolitics. For the UK, which has attempted to position itself as a leader in AI safety and regulation, this decision only makes it more important to put robust, enforceable AI governance frameworks in place. The UK's ability to balance innovation with ethical safeguards could set a global precedent, but it will require collaboration between government, industry and international partners to ensure AI remains a force for good.'