AI Agents - The struggle to balance automation, oversight, & security


Techday NZ · 7 hours ago

Visa's recent announcement to harness agentic artificial intelligence (AI) for automatically transacting payments on behalf of customers has attracted widespread interest and scrutiny within the technology and security communities. The move, reported by Associated Press, signals a step change in how everyday purchases could be managed in the near future, promising to reduce both friction and manual intervention in digital commerce.
James Sherlow, Systems Engineering Director EMEA at Cequence Security, observes that Visa is "betting on AI agents to remove the friction and mundanity of regular purchases by using the technology to hunt for, select and pay for goods and services automatically." He notes that, amid the current climate of multi-level authentication processes, such innovation may prove both groundbreaking and beneficial in deterring fraudsters. However, Sherlow highlights significant hurdles regarding consumer acceptance: "The question remains whether the user will be comfortable giving AI that level of autonomy."
Sherlow elaborates on the technical aspects, explaining that Visa intends its AI agents initially to recommend purchases based on learned patterns and preferences before moving towards more autonomous decision-making. Security remains paramount, with verification to be managed by Visa in a manner analogous to Apple Pay, but now underpinned by AI agents and with Visa handling disputes. He cautions that using AI agents with sensitive personally identifiable information (PII) and payment card industry (PCI) data "could have far reaching ramifications", and stresses that clear visibility, accountability, and robust guardrails must be built in from the outset, with API security playing an increasingly important role as API endpoints become critical to both ecommerce and AI utilisation.
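To make the idea of guardrails around agent-initiated payments concrete, the minimal sketch below shows a policy check sitting in front of a hypothetical purchase action. Every name in it (PaymentRequest, SPEND_LIMIT, APPROVED_MERCHANTS, guardrail_check) is invented for illustration and is not part of any Visa or Cequence API; the point is simply that an agent only transacts inside limits the cardholder has made explicit.

```python
from dataclasses import dataclass

# Hypothetical policy values set by the cardholder; not any vendor's API.
SPEND_LIMIT = 200.00
APPROVED_MERCHANTS = {"grocer.example", "pharmacy.example"}


@dataclass
class PaymentRequest:
    merchant: str
    amount: float
    currency: str


def guardrail_check(req: PaymentRequest) -> tuple[bool, str]:
    """Allow an agent-initiated payment only if it stays inside the cardholder's explicit limits."""
    if req.merchant not in APPROVED_MERCHANTS:
        return False, f"merchant {req.merchant!r} is not on the cardholder's allow-list"
    if req.amount > SPEND_LIMIT:
        return False, f"amount {req.amount:.2f} exceeds the per-transaction limit of {SPEND_LIMIT:.2f}"
    return True, "within policy"


if __name__ == "__main__":
    print(guardrail_check(PaymentRequest("grocer.example", 54.20, "NZD")))   # allowed
    print(guardrail_check(PaymentRequest("unknown.example", 54.20, "NZD")))  # denied
```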
Echoing these concerns, information security practitioners point to the risks inherent in delegating decision-making to semi-autonomous systems. Joshua Walsh, Information Security Practitioner at rradar, believes agentic AI offers dramatic gains in productivity and efficiency by automating complex tasks. Still, "this same autonomy also brings serious security and governance risks that must be addressed before deployment to the live environment," he states. Because AI agents operate across multiple platforms and often without direct human oversight, vulnerabilities such as prompt injection or misconfiguration carry disproportionately high risks, potentially leading to compromised data or even regulatory breaches.
Walsh underscores accountability as a core issue: "When an agent makes a bad call or acts in a way that could be seen as malicious, who takes responsibility?" He advocates for human-in-the-loop safeguards for high-risk actions, strict role-based access controls, rigorous audit logging, and continuous monitoring—especially where sensitive data is involved. Walsh argues that deploying such capabilities safely requires a foundation of transparency and meticulous, sustained testing before production rollout.
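The controls Walsh lists can be pictured as layers around every agent action. The sketch below is illustrative only, with invented names and a stubbed review queue, assuming a simple setup where role-based permissions are checked first, high-risk actions are held for a human decision, and every outcome is written to an audit log.

```python
import json
import time

# Invented roles and actions for illustration; not taken from any vendor's API.
ROLE_PERMISSIONS = {
    "support_agent": {"read_order"},
    "payments_agent": {"read_order", "issue_refund"},
}
HIGH_RISK_ACTIONS = {"issue_refund"}   # actions that always require a human decision
AUDIT_LOG: list[str] = []              # in production: append-only, tamper-evident storage


def human_approves(action: str, payload: dict) -> bool:
    """Placeholder for a real review queue; here every high-risk action is held for review."""
    return False


def execute(agent_role: str, action: str, payload: dict) -> str:
    """Apply role-based access control and a human-in-the-loop gate, then log the outcome."""
    if action not in ROLE_PERMISSIONS.get(agent_role, set()):
        outcome = "denied: role lacks permission"
    elif action in HIGH_RISK_ACTIONS and not human_approves(action, payload):
        outcome = "held: awaiting human approval"
    else:
        outcome = "executed"           # the real side effect would happen here
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(), "role": agent_role,
        "action": action, "payload": payload, "outcome": outcome,
    }))
    return outcome


if __name__ == "__main__":
    print(execute("support_agent", "issue_refund", {"order": "A-1", "amount": 25.0}))
    print(execute("payments_agent", "issue_refund", {"order": "A-1", "amount": 25.0}))
    print(AUDIT_LOG[-1])
```

The design choice worth noting is that the audit record is written regardless of outcome, so denied and held actions leave the same trail as executed ones.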
Within the broader debate on agentic AI, there is also scepticism about overestimating its capabilities. Roberto Hortal, Chief Product and Technology Officer at Wall Street English, warns that "the promise of AI agents is tempting," but urges caution: "Agents aren't a silver bullet. They're only effective when built with clear goals and deployed with human oversight." Hortal points out that unsupervised use often results in "AI slop," an abundance of low-value output that increases rather than decreases human workload. He draws a parallel to onboarding untested staff, stating, "You wouldn't let a brand-new intern rewrite your strategy or email your customers unsupervised. AI agents should be treated the same." Hortal emphasises the value of keeping AI tightly scoped and always supportive, not substitutive, of human decision-making.
Gartner's latest research indicates that so-called "guardian agents" will account for up to 15% of the agentic AI market by 2030, reflecting the heightened importance of trust and security as AI agents proliferate. Guardian agents, according to Gartner, are designed for "trustworthy and secure interactions," acting both as assistants for content review and autonomous overseers capable of redirecting or blocking AI actions to ensure alignment with predefined objectives. In a recent webinar, 24% of CIOs and IT leaders reported already deploying multiple AI agents, while the majority are either experimenting or planning imminent adoption.
As agentic AI gains traction across internal administrative and customer-facing tasks, risks including data poisoning, credential hijacking, and agent deviation have come to the fore. Avivah Litan, VP Distinguished Analyst at Gartner, comments, "Agentic AI will lead to unwanted outcomes if it is not controlled with the right guardrails." With the rapid evolution toward complex, multi-agent systems, traditional human oversight is becoming impractical, further accelerating the need for automated, intelligent checks and balances.
Gartner recommends organisations categorise guardian agents into three primary types: reviewers (verifying AI-generated content), monitors (tracking agentic actions for follow-up), and protectors (automatically intervening to adjust or block actions as needed). Integration of these roles is expected to become a central pillar of future AI systems, with Gartner predicting that 70% of AI applications will utilise multi-agent approaches by 2028.
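A toy sketch of how those three roles might sit around an agent's output follows. Gartner defines the categories, but every function name below is invented for this illustration: a reviewer validates generated content, a monitor records the action for follow-up, and a protector blocks anything outside policy.

```python
# Toy illustration of reviewer / monitor / protector roles; all names are hypothetical.
def reviewer(content: str) -> bool:
    """Reviewer: verify AI-generated content before it is used (here, a trivial keyword check)."""
    return "unverified claim" not in content.lower()


def monitor(action: str, log: list[str]) -> None:
    """Monitor: record agentic actions so humans can follow up later."""
    log.append(action)


def protector(action: str, blocked: set[str]) -> str:
    """Protector: intervene automatically, blocking actions that violate policy."""
    return "blocked" if action in blocked else "allowed"


if __name__ == "__main__":
    action_log: list[str] = []
    draft = "Order summary for the customer."
    if reviewer(draft):
        monitor("send_summary", action_log)
        print(protector("send_summary", blocked={"transfer_funds"}), action_log)
```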
The debate on agentic AI thus hinges on balancing automation, oversight, and security at unprecedented scale. Visa and other firms setting the pace in this new domain will need to combine technological innovation with careful risk management to achieve both user adoption and operational resilience.



