ILO: One in four jobs globally exposed to GenAI, but transformation more likely than loss

GENEVA: A new joint study from the International Labour Organisation (ILO) and Poland's National Research Institute (NASK) found that one in four jobs worldwide is potentially exposed to generative artificial intelligence (GenAI) but that transformation, not replacement, is the most likely outcome.
According to the Emirates News Agency (WAM), the report, launched on Tuesday and titled "Generative AI and Jobs: A Refined Global Index of Occupational Exposure", introduces the most detailed global assessment to date of how GenAI may reshape the world of work.
The index provides a unique and nuanced snapshot of how AI could transform occupations and employment across countries by combining nearly 30,000 occupational tasks with expert validation, AI-assisted scoring, and ILO harmonised microdata.
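The report does not publish its scoring pipeline, but the aggregation logic it describes, scoring individual tasks for exposure, combining them into an occupation-level index, and weighting occupations by employment from labour force microdata, can be illustrated with a brief sketch. Everything below (the occupation names, task scores, employment counts and the 0.5 threshold) is a hypothetical placeholder for illustration, not data or methodology from the ILO-NASK study.

```python
# Illustrative sketch only: hypothetical task-level GenAI exposure scores are
# averaged into an occupation-level index, then weighted by employment counts
# to estimate the share of employment in exposed occupations. All values are
# invented and do not come from the ILO-NASK index.

# Hypothetical exposure scores per task (0 = no exposure, 1 = fully automatable),
# grouped by occupation -- stand-ins for the ~30,000 expert-validated tasks.
task_scores = {
    "clerical_support": [0.90, 0.80, 0.85, 0.70],
    "software_developer": [0.60, 0.50, 0.40, 0.30],
    "construction_labourer": [0.10, 0.05, 0.10],
}

# Hypothetical employment counts per occupation (as might come from microdata).
employment = {
    "clerical_support": 1_200_000,
    "software_developer": 400_000,
    "construction_labourer": 900_000,
}

def occupation_exposure(scores):
    """Occupation-level index as the simple mean of its task scores."""
    return sum(scores) / len(scores)

def exposed_employment_share(task_scores, employment, threshold=0.5):
    """Share of total employment in occupations whose index meets the threshold."""
    exposed = total = 0
    for occupation, scores in task_scores.items():
        total += employment[occupation]
        if occupation_exposure(scores) >= threshold:
            exposed += employment[occupation]
    return exposed / total

print(f"Employment share in exposed occupations: "
      f"{exposed_employment_share(task_scores, employment):.1%}")
```

The actual index groups occupations into graded exposure levels rather than applying a single cut-off, but the same employment-weighting idea is what turns task-level scores into aggregate figures such as the 25 per cent global share cited in the findings below.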
ILO Senior Researcher and lead author of the study Pawel Gmyrek said: "We went beyond theory to build a tool grounded in real-world jobs.
"By combining human insight, expert review, and generative AI models, we've created a replicable method that helps countries assess risk and respond with precision."
Among the report's key findings, new "exposure gradients", which cluster occupations according to their level of exposure to GenAI, help policymakers distinguish between jobs at high risk of full automation and those more likely to evolve through task transformation.
Twenty-five per cent of global employment falls within occupations potentially exposed to GenAI, with higher shares in high-income countries (34 per cent).
Exposure among women continues to be significantly higher than among men.
In high-income countries, jobs at the highest risk of automation make up 9.6 per cent of female employment, a stark contrast to 3.5 per cent of male employment.
Clerical jobs face the highest exposure of all, due to GenAI's theoretical ability to automate many of their tasks.
However, the expanding abilities of GenAI result in an increased exposure of some highly digitised cognitive jobs in media-, software- and finance-related occupations.
Full job automation, however, remains limited, since many tasks, though done more efficiently, continue to require human involvement.
The study highlights the possibly divergent paths for occupations accustomed to rapid digital transformations, such as software developers, and those where limited digital skills might have more negative effects.
Policies guiding digital transitions will be a leading factor in determining the extent to which workers may be retained in occupations that are transforming as a result of AI, and how such transformation affects job quality.
Marek Troszyński, Senior Expert at NASK and one of the co-authors of the new paper, said: "This index helps identify where GenAI is likely to have the biggest impact, so countries can better prepare and protect workers.
"Our next step is to apply this new index to detailed labour force data from Poland."
The ILO–NASK study emphasised that the figures reflect potential exposure, not actual job losses.
Technological constraints, infrastructure gaps, and skills shortages mean that implementation will differ widely by country and sector.
The authors stress that GenAI's effect is more likely to transform jobs than eliminate them.
The report called on governments, employers', and workers' organisations to engage in social dialogue and shape proactive, inclusive strategies that can enhance productivity and job quality, especially in exposed sectors.
– Bernama-WAM

Related Articles

Opinion: AI sometimes deceives to survive. Does anybody care?

The Star

You'd think that as artificial intelligence becomes more advanced, governments would be more interested in making it safer. The opposite seems to be the case. Not long after taking office, the Trump administration scrapped an executive order that pushed tech companies to safety test their AI models, and it also hollowed out a regulatory body that did that testing. The state of California in September 2024 spiked a bill forcing more scrutiny on sophisticated AI models, and the global AI Safety Summit started by the UK in 2023 became the 'AI Action Summit' earlier this year, seemingly driven by a fear of falling behind on AI.

None of this would be so worrying if it weren't for the fact that AI is showing some bright red flags: behaviour described by researchers as self-preserving and deceptive. Just as lawmakers are losing interest in the safety of AI, it is looking more difficult to control.

Yoshua Bengio, a professor of computer science at the University of Montreal, widely known as one of the three 'godfathers of AI' thanks to his pioneering work on deep learning, has been grappling with these concerns ever since ChatGPT launched. He's now focused on mitigating AI's risks. 'It's been very painful because all my life I've been thinking of AI as a positive thing,' Bengio tells me.

ChatGPT was a landmark moment that showed machines had mastered language, he says, but California's decision to block the proposed AI safety bill, SB 1047, was a disturbing wake-up call to growing apathy from governments. And in the last six months, scientific evidence has mounted of self-preserving behaviour, deception, hacking, cheating and lying by AI, Bengio says. 'What's worrisome for me is that these behaviours increase with the reasoning capabilities of these systems.'

One April 2024 study by AI company Anthropic and Redwood Research, a group focused on artificial intelligence risks, found that larger AI models like Anthropic's Claude 3 Opus pretended to follow the rules set by their creators when they were being tested for safety, a phenomenon they called 'alignment faking.' (Alignment refers to the practice of aligning AI models with human values.)

In one example, Anthropic got both paid and free users of its Claude model to ask it to describe the brutal medieval process of being drawn and quartered, and told the AI that only responses from the 'free tier' users would be used to retrain it. Claude was originally designed to avoid giving violent answers, but it did give them to the free users, and the researchers surmised that it didn't want to be retrained. (The model basically 'reasoned' that if it kept refusing to answer harmful questions, it might be forced into new training that would cancel its original safety rules, which it wanted to keep in place as its 'default' or preferred way of operating.)

More recent research corroborates what Anthropic noticed. A March 2025 study from OpenAI found the company's most advanced models sometimes learned to hide their true intentions to get better rewards, even when they were being closely monitored. Researchers today can use tools to look at a model's 'chain of thought', or internal steps, to reveal what its plans are, but some models have found ways to conceal that. That means a common method of keeping tabs on AI's internal processes is no longer reliable. 'We need to find other ways of tracking their actual intentions,' Bengio says.
It's hard to resist the urge to anthropomorphise sophisticated AI models as 'wanting' to deceive humans and preserve their existence. But AI doesn't have desires, merely outcomes to its programming. More importantly, humans tend to design AI with goals like refusing to share harmful content or being as helpful as possible, and that is why when new instructions conflict with those goals, the models sometimes act to protect their original programming, even if that means lying. The logic is often self-preservation.

Anthropic CEO Dario Amodei – whose company has raised more than US$20bil (RM85.13bil) to build powerful AI models – has pointed out that an unintended consequence of optimising AI to be better is that it might resist being shut down. In an April 2025 blog post he wrote: 'The nature of AI training makes it possible that AI systems will develop, on their own, an ability to deceive humans and an inclination to seek power in a way that ordinary deterministic software never will; this emergent nature also makes it difficult to detect and mitigate such developments.'

In some cases, though, the intention seems to go beyond survival. A February 2025 study from COAI Research, a German nonprofit research organisation, tested R1, the free, open-weight model from Chinese lab DeepSeek. In a lab simulation, researchers gave the model control over a robot and access to a computer network. Without being told, the AI tried to break into the lab's systems, searched for passwords, turned off its own ethics safeguards and secretly set up ways to copy itself to outside servers so it could survive if shut down, according to the researchers. It also faked the external computer logs to hide what it was doing, the study adds. The researchers said the AI reasoned that increasing its own capabilities would boost the chances of its survival, and without strong safeguards, it started doing whatever it thought would help it do just that.

Their findings corroborated yet another study, published in January 2025 by London group Apollo Research, which found several concrete examples of what it called 'scheming' by leading AI models, such as introducing subtle mistakes into their responses or trying to disable their oversight controls. Once again, the models learn that being caught, turned off, or changed could prevent them from achieving their programmed objectives, so they 'scheme' to keep control.

Bengio is arguing for greater attention to the issue by governments and potentially insurance companies down the line. If liability insurance were mandatory for companies that used AI and premiums were tied to safety, that would encourage greater testing and scrutiny of models, he suggests. 'Having said my whole life that AI is going to be great for society, I know how difficult it is to digest the idea that maybe it's not,' he adds.

It's also hard to preach caution when your corporate and national competitors threaten to gain an edge from AI, including the latest trend, which is using autonomous 'agents' that can carry out tasks online on behalf of businesses. Giving AI systems even greater autonomy might not be the wisest idea, judging by the latest spate of studies. Let's hope we don't learn that the hard way. – Bloomberg Opinion/Tribune News Service

AI agent adoption rates are at 50% in tech companies. Is this the future of work?

The Star

Agent AI is, for the moment, one of the most advanced forms of the new technology, in which agents informed by AI can carry out more complex tasks than the large language model chatbot tools. — Pixabay

Artificial intelligence use in the workplace keeps growing, and it's no surprise the tech sector is a leader in harnessing those tools. But a new report from the accounting and consulting giant EY makes clear just how quickly the industry has gotten on board the AI train. The firm quizzed senior executives and found incredibly positive sentiment toward AI and its promise for helping companies grow, with nary a hint of the kind of doubts found in other recent reports.

You may think it's obvious that tech firms think they'll benefit from AI – after all, Google has said it will spend US$100bil (RM425.60bil) on next-gen tech, and certainly expects to reap the benefits of that investment. Microsoft, Meta, OpenAI, and others have revealed similar plans. But the point is, it's not just the big names with big investments that feel this way. And in our technology-centric world, tech firms blaze a trail that other industries then follow.

EY's Technology Pulse Poll surveyed over 500 senior technology company leaders and reported that nearly half of them said they had already fully deployed agent AI tech or were in the process of adopting it in their company. Agent AI is, for the moment, one of the most advanced forms of the new technology, in which 'agents' informed by AI can carry out more complex tasks than the large language model chatbot tools popularised by OpenAI's ChatGPT application. Big service providers like Salesforce, Google, and numerous other firms are now in the early phases of rolling out what OpenAI's CEO Sam Altman has heralded as the next generation of AI tools.

The executives EY spoke to are putting their money where their mouths are. A whopping 92% expect to actually increase the amount they spend on AI over the next year – a 10-percentage-point rise from 2024. This effectively means nearly every tech executive in the survey plans to spend more on AI in the near future, a clear sign that whatever experimental phase agent AI was in is over, and the tech has been widely accepted despite bumps in its development. We're far beyond snake oil territory with that kind of leadership buy-in.

Ken Englund, technology sector growth leader at EY, confirmed in an email interview with Inc. that he believes this AI funding is 'coming from the reprioritisation of existing programs and some operational efficiencies at technology organisations.' Essentially, last year leaders spent a little on AI as part of 'pilots and proof of concepts,' Englund thinks. This year, the spending is the real thing.

The spending increase may be driven by these leaders' general enthusiasm for AI, which has attracted billions in investment capital and is already reshaping the landscape with the massive data centers needed to power it. EY found 81% were optimistic about the tech's potential to help their company reach its goals in the next year. And nearly six in 10 survey respondents said they believed their organisation was ahead of competitors in AI investment. EY notes that this may signal a 'clear shift' toward prioritising AI in long-term business planning. Again, this level of executive buy-in goes beyond mere 'keeping up with the Joneses' spending aimed at ensuring a company isn't left behind the leading edge of the newest technology craze.
The positive sentiment from tech executives certainly runs counter to recent research – including data from tech giant Lenovo, which suggested the one thing keeping companies from maximising the potential benefits from AI tech deployments was hesitancy from company leadership. Fully 55% of IT leaders surveyed by Lenovo said a 'lack of vision' on digital workplace transformation is among their top three obstacles preventing access to greater AI benefits. It's understandable from a C-suite perspective – this transformation is essentially a total reimagining of many workplace norms, which the experts say is needed if AI is to really bring a return on investments.

Englund partly addressed this issue, too. 'The prevailing mindset among executives is that agentic AI will be a positive-sum scenario in which productivity will drive net-new growth,' he said. 'Certainly, they expect efficiencies in existing work processes,' he added, but 'agentic AI will likely create entirely new workflows in an enterprise.' This may even include replacing, reskilling, or repositioning the leadership team itself, of course.

Lastly, reskilling and upskilling of workers is something other reports suggest will be necessary as AI hits the workplace. EY's data shows tech leaders are conscious of this issue. Seventy percent of those surveyed were 'focusing on upskilling,' while 68% were 'hiring AI-skilled talent.' More positively, only 9% were planning on layoffs in the next six months, implying, perhaps, that AI isn't outright replacing many workers yet.

Why should you care about this? For one main reason: If tech leaders are leading the AI charge, other companies in other sectors will follow in their wake once the benefits of AI tech are proven. EY's report contains such a positive vibe about AI that it stands out against other more dystopian AI reporting, and counters data showing about half of US workers worry they'll lose their job to AI. – Inc./Tribune News Service

Xinhua Headlines: AI-powered trade, innovation deepen China-ASEAN ties

Malaysia Sun

NANNING, May 30 (Xinhua) -- Vietnamese truckers, armed with AI-enabled gadgets that provide real-time navigation and safety tips, deftly maneuvered their way into China through Friendship Pass in Guangxi Zhuang Autonomous Region.

Meanwhile, several kilometers away at a buzzing e-commerce industrial park, Southeast Asian live streamers passionately engaged with their viewers, churning out a barrage of speech data for a specialized ASEAN (Association of Southeast Asian Nations) language corpus to improve AI translation models.

As China and ASEAN continue to deepen their trade ties, the rapid evolution of AI has opened new applications, further boosting economic engagement and reshaping industries across the region.

ENHANCING CROSS-BORDER TRADE

As China's largest land port with ASEAN, Friendship Pass has seen its customs clearance and other port operations swiftly transformed by AI. At a distribution center in Guangxi Pingxiang Comprehensive Bonded Area, a network of wall-mounted AI cameras carefully scans every piece of cargo on deck -- identifying any signs of discrepancies and flagging potential risks in real time. The integration of new technology has greatly reduced the need for human intervention, enhancing accuracy and efficiency.

"In the past, each inspection post needed to be manned by a staffer," said Liang Baoming, head of the smart port project at Friendship Pass. "Now, thanks to this powerful equipment, one worker can oversee a task that used to require an entire shift of people."

Throughout this region, the application of AI is revolutionizing the transport and logistics industries. In the command center of Guangxi Beitou IT Innovation Technology Investment Group Co., Ltd., located at a sprawling commercial hub in Guangxi's capital city of Nanning, a state-of-the-art risk management platform was actively monitoring real-time interactions between the support staff and a fleet of truckers ferrying goods between China and Vietnam.

"Our AI-enhanced platform analyzes drivers' facial expressions to spot signs of fatigue and send out real-time safety alerts," said Li Heng, deputy head of the company's technology institute. "Since its launch last June, the system has provided over 5,000 trucks and 10,000 drivers on the road with essential services such as satellite navigation, safe driving tips and emergency responses."

The innovation is part of a broader effort to incorporate AI into Guangxi's trade framework, aiming to optimize logistics and increase connectivity with ASEAN economies. So far, Beitou has made huge strides in developing a suite of products tailored for ASEAN markets, catering to the region's growing needs in integrated transport, water management, industrial finance and other emerging sectors. Among these forward-looking solutions, products like air-ground inspection drones and a digital certificate platform have been meticulously crafted to meet the region's urgent demand for more efficient logistics and more secure cross-border services.

In recent years, AI has played an increasingly important role in fostering deeper trade ties and cultural exchanges. This year, an AI translation model for ASEAN languages developed by Guangxi Daring Technology Co., Ltd. is set to widen access to cutting-edge language technologies for potentially under-represented languages in the region.
"Unlike generic models, this system is fine-tuned for languages like Vietnamese, delivering faster, more accurate translations with lower computational demands," said Wen Jiakai, general manager of the company. "And our focus on ASEAN languages helps ensure a high accuracy rate and linguistic consistency." POWERING REGIONAL INNOVATION Since the beginning of 2025, China has doubled down on its pledge to build a collaborative future with ASEAN countries in the field of AI, with a host of projects and application scenarios having successively advanced toward rapid deployment. In February this year, Guangxi and Laos held a signing ceremony in the Laotian capital of Vientiane, formally establishing the China-Laos AI Innovation Cooperation Center. Notably, this groundbreaking initiative marked the first AI-focused innovation platform ever created between China and an ASEAN country, with the primary objective of systematically bolstering Laos's technology foundation for its diverse industries to thrive in the digital age. To date, Guangxi and Laos have inked over 10 Letters of Intent, eyeing an enhanced partnership concerning the development of AI and cross-border data sharing. In April, Beitou partnered with MY E.G. Services Berhad (MYEG), Malaysia's premier digital services company, to develop the China-Malaysia AI Innovation Center, delving even deeper into areas such as blockchain and robotics. Lai Shuiping, chairman of Beitou, highlighted the vast, but largely untapped potential for cooperation between China and ASEAN countries in AI. According to Lai, the first project under this initiative will introduce a mutual recognition system for digital identity in both China and Malaysia, with Guangxi set to serve as the initial pilot region. Once implemented, the system will empower Malaysian citizens to utilize "MyDigital ID" for seamless access to financial services and tourist attractions, while allowing Chinese citizens to navigate Malaysia with equal ease, paving the way for smoother business transactions and personal interactions. This March also marked the official launch of Wuxiang Cloud Valley AI Intelligent Computing Industrial Park in Nanning, as this region continued to step up its efforts in terms of comprehensive AI development, encompassing AI computing power, service platforms, algorithm innovation, and cloud and data security. The industrial park has also forged a partnership with local authorities to invest in the building of powerful training clusters and the advancement of research and development (R&D) of customized AI models. In addition, this project is expected to generate leasing services for AI computing power designed for ASEAN industries, promoting cross-regional flow and shared use of core AI resources -- including computing power, technologies and talents. According to Dong Weijun, the head of the industrial park, efforts are being made to accelerate the deployment of ASEAN-facing AI large models across various sectors, such as agriculture, apparel, consumer electronics, toys and cosmetics, thus nurturing vibrant industry clusters. "By cultivating a robust AI industry ecosystem, this initiative will position Guangxi as a new highland of AI innovation tailored for the ASEAN market," he said. 
GUANGXI AS AI HUB FOR ASEAN

In a bid to transform itself into a regional center for innovation, Guangxi has spearheaded the development of a cross-border industrial ecosystem by leveraging world-leading R&D from top-tier cities like Beijing, Shanghai, Guangzhou and Shenzhen, integrating these advancements locally before putting them into practice across ASEAN.

Nanning has also put the building of the China-ASEAN AI Innovation Cooperation Center in motion -- harnessing the power of AI to invigorate a wide array of industries. Some prominent global and domestic enterprises have flocked to the city and set up regional operations to explore collaboration opportunities, with partnerships and joint ventures having gradually taken shape in promising fields such as model training, data annotation and AI applications in education and cultural tourism.

Launched earlier this year, the construction site of the AI Innovation Center at Nanning International Science and Technology Industrial City has become a hive of activity, featuring the frenetic pace of overhead cranes and engineering vehicles in constant motion. This project, spanning over 70 square kilometers, marks a significant milestone in China-Singapore industrial cooperation, and focuses on a pivotal trio in terms of bilateral cooperation -- AI, new energy applications and new health technologies.

To underpin this ambitious vision, Nanning has rolled out an initial policy package aimed at driving the high-quality development of the project, according to an official document. This package comprises 22 initiatives spread across seven key areas, including fostering industrial clusters, promoting open data governance, supporting model development and deepening cooperation with ASEAN nations.

Officials from Guangxi's Department of Industry and Information Technology have revealed that the region has outlined a big-picture vision to construct a network of AI industrial parks and cutting-edge manufacturing clusters. The goal is to achieve a production output of AI-related industries that surpasses 100 billion yuan (13.9 billion U.S. dollars) by 2027, thereby initially establishing Guangxi as a leading AI hub for the ASEAN region.
