What are AI agents and which jobs are they coming for first?
An AI agent is an application of a sub-category of artificial intelligence technology known as 'agentic AI,' which refers to AI systems that are autonomous. Like other AI systems, agentic AI collects and processes data to perform a function. Current generative AI programs — the likes of ChatGPT and Midjourney — create new content ranging from text to images and strings of software code using predictive modelling. Agentic AI's key difference lies in its agency (hence its name): it can manage complex tasks and make decisions on its own, based on pre-set user goals. These are tasks that involve the ability to reason, solve problems and be adaptable in response to changing situations.
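To make that distinction concrete, here is a minimal, hypothetical Python sketch, not drawn from any vendor's product, that contrasts a one-shot generative call with an agent-style loop that keeps reasoning and acting toward a pre-set goal. Every function name in it is an illustrative stand-in.

```python
# Illustrative sketch only: all functions are hypothetical stand-ins,
# not a real model API or agent framework.

def generative_model(prompt: str) -> str:
    """Stand-in for a generative model: one prompt in, one output out."""
    return f"draft response to: {prompt}"

def plan_next_step(goal: str, history: list[str]) -> str | None:
    """Stand-in for the agent's reasoning step: pick the next action,
    or return None once the goal is judged complete."""
    steps = ["gather data", "analyze data", "draft summary"]
    return steps[len(history)] if len(history) < len(steps) else None

def run_agent(goal: str) -> list[str]:
    """Agent loop: repeatedly reason, act and observe until the goal is met."""
    history: list[str] = []
    while (step := plan_next_step(goal, history)) is not None:
        result = generative_model(f"{step} for goal: {goal}")  # act via the model/tools
        history.append(result)                                  # observe and adapt
    return history

# One-shot generative use vs. autonomous, multi-step agent behaviour:
print(generative_model("summarize quarterly sales"))
print(run_agent("summarize quarterly sales"))
```

The point of the sketch is only the control flow: the generative call returns once, while the agent loop decides its own next steps until it judges the goal complete.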
Gary Filan, a partner and head of AI at KPMG Canada, said the definition of an AI agent is rapidly evolving, but a common way to think of one is as a 'virtual assistant or coworker.'
AI agents can be deployed across a wide spectrum of industries and job functions. Some early successes have come in the fields of customer service, human resources, market analysis, and fraud detection and prevention, to name a few.
'Financial services firms are the leading group in that list,' Filan said. These businesses have used AI agents to streamline insurance claims processing, generate risk assessments and help answer customer queries, among other uses, according to KPMG Canada.
For example, a run-of-the-mill AI assistant could help a bank send fraud alerts to its customers. But an AI agent could take it one step further: it could monitor transactions in real time, flag suspicious activity and work with the bank's fraud-detection system to prevent malicious activity.
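As a rough illustration of that monitoring loop, here is a short, self-contained Python sketch. The transaction fields, the looks_suspicious heuristic and the block_and_escalate hand-off are all hypothetical stand-ins, not any bank's actual fraud-detection system.

```python
# Hedged sketch of the fraud-monitoring idea described above: an agent watches
# a transaction stream, flags suspicious activity and acts on it, rather than
# only sending an alert. Everything here is a toy stand-in.
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    amount: float
    country: str

def looks_suspicious(tx: Transaction, home_country: str = "CA") -> bool:
    # Toy heuristic standing in for a real risk model.
    return tx.amount > 5_000 or tx.country != home_country

def block_and_escalate(tx: Transaction) -> str:
    # Stand-in for handing the case to the bank's fraud-detection system.
    return f"blocked {tx.account}: ${tx.amount:,.0f} from {tx.country}"

def monitor(stream: list[Transaction]) -> list[str]:
    """Agent-style loop: observe each transaction, decide and act autonomously."""
    actions = []
    for tx in stream:
        if looks_suspicious(tx):
            actions.append(block_and_escalate(tx))  # act, not just alert
    return actions

print(monitor([
    Transaction("acct-1", 120.0, "CA"),
    Transaction("acct-2", 9_800.0, "RU"),
]))
```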
Manufacturing firms have leveraged AI agents to monitor equipment performance, predict failures, and dispatch maintenance teams before downtime can occur, while retailers have used them to predict demand trends, adjust stock levels, and fine-tune pricing, said Shannon Katschilo, the Canada country manager for Snowflake Inc., an AI and data cloud platform.
Rote job functions, like those in the areas of customer support, analytics, engineering and field operations, are likely to be among the most affected, Katschilo said.
But both Katschilo and Filan argue that AI agents won't necessarily replace jobs or workers. 'It's the nature of the work (that) will change,' Filan said. 'Workers may switch from a rote processing type of role to one that involves more judgment, monitoring, and management — and ability to work with these AI systems.'
Most Canadian organizations have now heard of agentic AI: almost three-quarters said they were 'very familiar' with the concept, according to an April 2025 KPMG Canada survey of 252 business leaders.
Still, uptake remains limited in Canada, even though the vast majority of participants (88 per cent) said that adopting agentic AI will help their organizations become more competitive.
Only 27 per cent of respondents in the same survey said their organizations have adopted or deployed agentic AI with active use cases, while 57 per cent plan to invest in or adopt it within the next six months.
A second-quarter 2025 Statistics Canada report on AI use by Canadian businesses found that only 12 per cent reported using AI to produce goods or deliver services, up from six per cent in the same period a year earlier.
'Canadian firms are on the laggard side with regards to adoption of agentic frameworks. The primary reason is Canadian corporations' aversion to risk,' Filan said. But Canada is now experiencing a shift with the Carney government's focus on so-called 'light and tight' AI and technology regulation, he said.
At their core, AI agents are built on large language models (LLMs) that sometimes 'hallucinate' — meaning that they can generate outputs that range from nonsensical to simply false.
The autonomy of AI agents, combined with the growing sophistication of the technology, magnifies the risks. Agentic AI systems that operate independently of human oversight could carry out unintended and potentially harmful actions, from leaking confidential data to impersonating people or manipulating files.
The risk also depends on how, and where, it is applied. Agentic AI applications for 'healthcare or human resources — to decide who gets a raise or who gets laid off, for example — are much more critical than agentic AI for a food delivery app,' said Mélissa M'Raidi-Kechichian, research and advocacy fellow at the Center for AI and Digital Policy, a Washington-based non-profit focused on AI best practices. 'What remains across contexts though, is the accountability component: without human oversight, who will be held accountable when an AI system inevitably fails, does harm, or does not perform the way it was intended to?'
Some in the industry are already working toward establishing guardrails to advance the development of safe and ethical AI. Canadian-French computer scientist Yoshua Bengio, considered one of the 'godfathers of AI,' recently launched a Montreal-based non-profit called LawZero focused on AI systems that will filter out certain traits like dishonesty. He aims to create a tool to de-risk AI agents and keep them in line. 'I'm deeply concerned by the behaviours that unrestrained agentic AI systems are already beginning to exhibit — especially tendencies toward self-preservation and deception,' Bengio wrote in a June 2025 blog post.
Agentic AI technology is nascent, but developing rapidly, Filan said. 'I'm not even thinking about what it would look like 10 years from now. Most of the conversations occurring now are between a two-to-five-year period,' he said.
A growing number of startups are now developing AI agents customized for different professional and personal needs. Partners at Y Combinator LLC, Silicon Valley's fabled startup accelerator, recently said they have been inundated with AI agent proposals in fields ranging from marketing to recruitment and debt collection.
Silicon Valley leaders have warned that job displacement is coming rapidly. Anthropic PBC chief executive Dario Amodei told Axios in May 2025 that he thought AI could eliminate half of entry-level white-collar jobs and push unemployment to 10 to 20 per cent in the next one to five years.
Others say agentic AI technology has a ways to go.
A May 2025 Carnegie Mellon paper showed that Google LLC's Gemini 2.5 Pro, the top-performing AI agent tested, failed to complete real-world office tasks 70 per cent of the time. Rival agents from tech giants such as OpenAI and Meta Platforms Inc. had failure rates of more than 90 per cent.
'Right now, we're seeing early glimpses: AI agents can already analyze data, predict trends, and automate workflows to some extent. But building AI agents that can autonomously handle complex decision-making will take more than just better algorithms. We'll need big leaps in contextual reasoning and testing for edge cases,' according to a March 2025 International Business Machines Corp. report titled AI Agents in 2025: Expectations vs. Reality.
• Email: ylau@postmedia.com
