It's vital to balance AI agility with workplace stability

The Australian, 19 May 2025

Artificial intelligence adoption has rapidly moved beyond the pilot phase, with the technology being implemented at scale across many organisations in ways that are fundamentally changing how they operate.
That change is bringing tangible benefits that leaders are quick to recognise and celebrate – and so they should. But in my experience, some leaders are failing to consider and address the workplace tensions that AI implementation can bring.
Navigating these tensions is fundamental to the long-term success of an AI strategy, which is in turn fundamental to most organisations' competitive edge: stand still and you'll be left behind. Deloitte's 2025 Global Human Capital Trends Report contains a few key questions leaders should be asking themselves so they can begin to tackle the issues at hand.
The first question addresses the primary tension of AI implementation: how do you provide workplace stability while remaining agile enough to stay competitive? How do you adapt the structures in your organisation and in individual teams to deliver stability, agility and productivity? In a word (and please forgive the neologism), how do you combine stability and agility to create stagility?
Getting the balance right is vital, as constant organisational change can erode employee morale. According to the Human Capital Trends Report, 75 per cent of the thousands of workers surveyed globally feel they need greater stability at work in the future.
Meanwhile, the business leaders surveyed for the report feel pressure to adapt: only 19 per cent feel traditional business models are suitable for creating value for employees and the organisation, while 85 per cent say they need to create more agile ways of organising work to swiftly adapt to market changes.
Navigating this tension between stability and agility raises a follow-up question: what do you do with the efficiency dividend AI generates? Depending on the organisation, there are a few ways the tangible gains AI produces can be deployed to drive benefits for the business and for employees.
For example, you could reskill and retrain workers whose flow of work has been affected by AI, helping them develop the skills with the greatest potential to create value for both the organisation and the individual. This question is especially pertinent given the potential of agentic AI technologies to perform tasks autonomously. Humans and agents will need to work alongside one another, freeing humans to focus on higher-value tasks.
What skills to focus on is a question that human capital leaders in particular will need to consider in the context of their industry. But broadly speaking, premiums are being placed on soft skills that support critical thinking, innovation, collaboration, and meaningful human interaction – the things that AI can't do.
According to our report, companies that focus on enhancing human capabilities are almost twice as likely to have employees who believe their work is meaningful and twice as likely to achieve stronger financial and business outcomes.
In addition, AI literacy is a core skill that even workers who do not use the technology every day need to have. Many employers are taking the initiative to upskill their workforce directly through formal training, but it's equally important to ask: how can I empower employees to play and experiment with AI?
Encouraging experimentation drives innovation and helps employees build trust in AI, as well as confidence in their own ability to navigate a world increasingly defined by it, enhancing feelings of stability.
Similarly, leaders need to consider this question: how can you use AI to treat everyone like a high performer? Respondents to the Human Capital Trends survey ranked performance development as the second most important workplace practice to change, behind only learning and development.
By focusing on a worker's individual skills and abilities rather than just their job description, organisations can incentivise growth, learning and innovation. Measuring and rewarding performance based on human and machine outcomes could also help workers share in the rewards of AI.
Where practicable, it might make sense for organisations to tie performance outcomes to employee development of AI proficiency, or to evaluate how their AI use is driving better outcomes for the business.
Leaders who can answer these key questions have a much higher chance of navigating the tensions between agility and stability, increasing the chances of successfully implementing AI in the long term and gaining a competitive edge. However, a leader's answers will also need to change with time, industry development and further technological advancement.
Careful, repeated consideration should be given to these questions as the nature of work continues to change – it will help ensure your organisation is correctly balancing agility and stability.
Amanda Flouch is Human Capital National Lead Partner, Deloitte Australia.
-
Disclaimer
This publication contains general information only and Deloitte is not, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This publication is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional advisor.
Deloitte shall not be responsible for any loss sustained by any person who relies on this publication.
About Deloitte
Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee ('DTTL'), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. Please see www.deloitte.com/au to learn more.
Copyright © 2025 Deloitte Development LLC. All rights reserved.