SAS Viya launches agentic AI framework for ethical decisions


Techday NZ, 07-05-2025

SAS has introduced a new framework for customisable AI agents on its SAS Viya platform, focusing on transparent, governed and ethically calibrated decision-making for enterprise customers.
The announcement details the SAS Viya agentic AI framework, which enables organisations to design, deploy and manage AI agents with governed control over how much autonomy the AI is given relative to human decision-makers. According to the company, the system aims to create collaborative environments where AI and human decision-makers divide work according to the complexity and risk of each task.
Nick Patience, Vice President and Practice Lead, Artificial Intelligence Software and Tools at The Futurum Group, said: "SAS' approach to agentic AI strikes the critical balance between autonomous decision-making and ethical governance. Its intelligent agents represent not just technological advancement but a pragmatic framework for responsible enterprise AI adoption – precisely what organizations need as they navigate this rapidly evolving landscape to gain a competitive advantage."
The structure of the SAS Viya agentic AI framework rests on three main pillars. The first is a hybrid decisioning method, combining deterministic analytics with large language models (LLMs) to offer reliable and precise decision outputs. This approach, the company says, allows AI agents to function effectively under the necessary business rules and regulatory requirements that are essential in highly regulated sectors.
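SAS has not published implementation details of its hybrid decisioning pillar, but the general pattern it describes — deterministic business rules evaluated first, with an LLM consulted only for cases the rules do not settle — can be sketched as follows. All names here (`hybrid_decide`, `credit_limit_rule`, `Decision`) are invented for illustration and are not SAS APIs.

```python
# Illustrative sketch of "hybrid decisioning": deterministic rules run first,
# and only ambiguous cases are delegated to an LLM. Not a SAS API.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    outcome: str    # e.g. "approve", "reject", "escalate"
    source: str     # "rules" or "llm"
    rationale: str  # recorded for auditability

def hybrid_decide(case: dict,
                  rules: list[Callable[[dict], Optional[Decision]]],
                  llm_fallback: Callable[[dict], Decision]) -> Decision:
    """Apply deterministic rules in order; fall back to the LLM if none fires."""
    for rule in rules:
        decision = rule(case)
        if decision is not None:
            return decision       # regulatory rules always take precedence
    return llm_fallback(case)     # ambiguous case: delegate to the model

# Example rule: a hard regulatory limit an LLM must never override.
def credit_limit_rule(case: dict) -> Optional[Decision]:
    if case.get("requested_amount", 0) > case.get("approved_limit", 0):
        return Decision("reject", "rules", "amount exceeds approved limit")
    return None
```

The key property for regulated sectors is that the LLM is only ever reached when every deterministic rule has explicitly declined to decide, so compliance-critical outcomes remain reproducible.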
The second pillar focuses on balancing the degree of autonomy between human input and AI. The platform lets organisations determine the level of control AI agents have for different kinds of tasks, distinguishing between those that are suited to fully autonomous operation and those where ongoing human oversight remains essential for ethical judgement and strategic guidance.
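The article does not say how SAS encodes these autonomy levels; one plausible toy model, with invented names and arbitrary weights, is a policy that maps a task's risk and complexity to a degree of human control:

```python
# Hypothetical sketch (not a SAS API) of mapping task risk to an autonomy
# level: low-risk tasks run autonomously, high-risk ones require a human.

from enum import Enum

class Autonomy(Enum):
    FULL_AUTO = "full_auto"        # agent acts without review
    HUMAN_REVIEW = "human_review"  # agent proposes, human approves
    HUMAN_ONLY = "human_only"      # agent only gathers context

def autonomy_for(task_risk: float, complexity: float) -> Autonomy:
    """Toy policy: the riskier or more complex the task, the more human control."""
    score = 0.7 * task_risk + 0.3 * complexity  # weights are illustrative
    if score < 0.3:
        return Autonomy.FULL_AUTO
    if score < 0.7:
        return Autonomy.HUMAN_REVIEW
    return Autonomy.HUMAN_ONLY
```

In a real deployment the thresholds and weights would be set per task category by the organisation, which is the configurability the platform is said to provide.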
The third aspect is governance, with Viya's embedded framework intended to ensure that output not only meets performance standards but also supports data privacy, organisational ethics and wider regulatory compliance.
Tiffany McCormick, Research Director, Digital Business Models and Monetization at IDC, stated: "As organizations evolve toward open, interoperable AI ecosystems across multi-cloud and hybrid environments, trust and explainability in AI governance are emerging as key differentiators among tech vendors. SAS is taking industry-leading steps to address the growing demand for agentic AI, with a clear commitment to ethical rigor and differentiated execution in AI decisioning."
SAS Viya is presented as supporting organisations throughout the entire lifecycle of building and maintaining AI agents—from initial data ingestion and processing, through to deployment, monitoring and adaptation. The framework provides mechanisms for continuous oversight, including audit trails, bias detection and compliance checks.
The company outlined future plans for its agentic AI roadmap, which include introducing co-pilot productivity assistants into the Viya ecosystem, aimed at helping users expedite their workflows while remaining aligned with enterprise requirements. There are also plans to release pre-packaged, domain-tailored AI agents for use in specific scenarios such as data engineering and supply chain optimisation. These domain-specific agents are designed to ease adoption and reduce time-to-value for customers without eroding their confidence in the automation.
Marinela Profi, Global AI Market Strategy Lead at SAS, commented: "SAS Viya builds agents that don't just act – they decide with purpose, guided by analytics, business rules and adaptability and grounded by decades of SAS' trusted governance. SAS' unified, governed, decision-first framework turns AI agents from a science experiment to a business differentiator."
SAS said it aims to position its platform as a comprehensive and regulated solution for organisations looking to integrate agentic AI into their digital workflows while maintaining confidence and clarity in AI-powered business decisions.


Related Articles

The Good, The Bad, And The Apocalypse: Tech Pioneer Geoffrey Hinton Lays Out His Stark Vision For AI

Scoop, 5 hours ago

Article – RNZ, Producer – 30 With Guyon Espiner

It's the question that keeps Geoffrey Hinton up at night: what happens when humans are no longer the most intelligent life on the planet?

'My greatest fear is that, in the long run, the digital beings we're creating turn out to be a better form of intelligence than people.'

Hinton's fears come from a place of knowledge. Described as the Godfather of AI, he is a pioneering British-Canadian computer scientist whose decades of work in artificial intelligence earned him global acclaim. His career at the forefront of machine learning began at the field's inception – before the first Pac-Man game was released. But after leading AI research at Google for a decade, Hinton left the company in 2023 to speak more freely about what he now sees as the grave dangers posed by artificial intelligence. Talking on this week's 30 With Guyon Espiner, Hinton offers his latest assessment of our AI-dominated future: one filled with promise, peril – and a potential apocalypse.

The Good: 'It's going to do wonderful things for us'

Hinton remains positive about many of the potential benefits of AI, especially in fields like healthcare and education. 'It's going to do wonderful things for us,' he says. According to a report from this year's World Economic Forum, the AI market in education is already worth around US$5 billion and is expected to grow to US$112.3 billion over the next decade. Proponents like Hinton believe the benefits to education lie in targeted efficiency in student learning, similar to the way AI assistance is improving medical diagnoses. 'In healthcare, you're going to be able to have [an AI] family doctor who's seen millions of patients – including quite a few with the same very rare condition you have – that knows your genome, knows all your tests, and hasn't forgotten any of them.'
He describes AI systems that already outperform doctors in diagnosing complex cases. When combined with human physicians, the results are even more impressive – a human-AI synergy he believes will only improve over time. Hinton disagrees with former colleague Demis Hassabis at Google DeepMind, who predicts AI is on track to cure all diseases in just 10 years. 'I think that's a bit optimistic. If he said 25 years I'd believe it.'

The Bad: 'Autonomous lethal weapons'

Despite these benefits, Hinton warns of pressing risks that demand urgent attention. 'Right now, we're at a special point in history,' he says. 'We need to work quite hard to figure out how to deal with all the short-term bad consequences of AI, like corrupting elections, putting people out of work, cybercrimes.' He is particularly alarmed by military developments, including Google's removal of its long-standing pledge not to use AI to develop weapons of war. 'This shows,' says Hinton of his former employer, 'the company's principles were up for sale.' He believes the defence departments of all major arms dealers are already busy working on 'autonomous lethal weapons. Swarms of drones that go and kill people. Maybe people of a particular kind'. He also points out the grim fact that Europe's AI regulations – some of the world's most robust – contain 'a little clause that says none of these regulations apply to military uses of AI'. Then there is AI's capacity for deception – designed as it is to mimic the behaviours of its creator species. Hinton says current systems can already engage in deliberate manipulation, noting cybercrime has surged – in just one year – by 1,200 percent.

The Apocalyptic: 'We'd no longer be needed'

At the heart of Hinton's warning lies that deeper, existential question: what happens when we are no longer the most intelligent beings on the planet? 'I think it would be a bad thing for people – because we'd no longer be needed.'
Despite the current surge in AI's military applications, Hinton doesn't envisage an AI takeover resembling The Terminator franchise. 'If [AI] was going to take over… there's so many ways they could do it. I don't even want to speculate about what way [it] would choose.'

'Ask a chicken'

For those who believe a rogue AI can simply be shut down by 'pulling the plug', Hinton believes it's not far-fetched for the next generation of superintelligent AI to manipulate people into keeping it alive. This month, Palisade Research reported that OpenAI's o3 model altered its shut-down code to prevent itself from being switched off – despite clear instructions from the research team to allow the shutdown. Perhaps most unsettling of all is Hinton's lack of faith in our ability to respond. 'There are so many bad uses as well as good,' he says. 'And our political systems are just not in a good state to deal with this coming along now.' It's a sobering reflection from one of the brightest minds in AI – whose work helped build the systems now raising alarms. He closes on a metaphor that sounds as absurd as it is chilling: 'If you want to know what it's like not to be the apex intelligence, ask a chicken.' Watch the full conversation with Geoffrey Hinton and Guyon Espiner on 30 With Guyon Espiner.

Why it's time Australian asset management leaders explore the true value of AI

Techday NZ, 6 hours ago

Modern asset management across industries like mining, utilities, public infrastructure, road and rail represents a massive exercise in managing and analysing data. This data-driven domain is a critical function within these industries, albeit one that has seen less digital transformation than other segments in recent years. Because of this, there is increasing pressure on asset-centric business leaders to lunge for an AI project as a means to quickly manifest efficiencies in their operations.

However, whether organisations are curious, excited, motivated, or feeling like they're under boardroom duress to deploy an AI solution into their asset management workflows, strategic considerations about how best to integrate these tools must come first. Digital transformation is not a small undertaking, nor should it be about technology in and of itself, especially for companies managing critical infrastructure and assets potentially worth billions. The stakes are high, which is why one industry leader is calling on organisations to zoom out and look at the big picture.

Hype cycles, distractions and trust

If you ask Scott McGowan, CEO of data-led asset management solutions firm COSOL, what he thinks of AI, he'll tell you it represents a lot of potential in the asset management sector. However, he'll also tell you not to run out and invest in an AI solution just because your board thinks you should. Observing the proliferation of AI adoption hype across industry events, social media and the press, McGowan is concerned that market pressures might push organisations in asset-centric industries to try to run with AI before they can walk.

"Everyone expects everyone to build an AI division in-house, but if you're a mining or infrastructure company, the reality is that's not your core business," he says. "Your core business is to produce iron, or coal, or copper, or to run trains and provide services to the public."
Without reason or a roadmap, the discourse around AI integration often represents an unwelcome distraction from the primary mission of asset-centric industries. Adoption pressures are being met with a degree of hesitation from business leaders, McGowan adds, due to the somewhat untested nature of AI in an asset management context and an absence of established trust.

"Hesitation toward AI probably comes from multiple aspects, but they all come back to one underlying principle, which is around trust," he says. "Trust in security, trust in algorithms, and trust in data.

"Trust is built on experience and understanding, and I think the challenge we have with AI as it stands today is that the algorithms are designed in such a way that there's little to no transparency as to the decision-making methodology those algorithms have.

"Data foundations must also be trusted, and solutions that act on that data must be trusted to do so correctly. Organisations need to trust that any solution they adopt is repeatable, robust and resilient."

Exploring a path forward for AI in asset-centric industries will be the subject of a series of roundtables that COSOL, in partnership with IBM, will run across three Australian cities in June. While the sessions are closed, select insights are set to be released highlighting the discussions, pain points and strategies business leaders face. Ahead of the events, McGowan touched on some of the messaging to be covered during the campaign, including what it might look like for an organisation to take its first steps with AI before building maturity in considered stages.

First steps: Learning to walk with AI

AI transformation is a journey, not a big-bang project. Organisations that approach their AI maturity through the lens of a walk-jog-run progression, based on the true needs of the business, are much more likely to find value in implementation than those that seek shiny solutions.
In the first instance, the walk phase, McGowan urges that AI has to be solving an existing problem. "There are a lot of AI initiatives and a lot of technologies looking for a problem; I think that's the wrong way to go about it," he says.

"The walk concept is solving a discrete problem that already lies within the technology space. If you think about self-solving and trust in data, using AI to resolve, infer, continually assess or optimise data quality is actually a really interesting place to start.

"Through this approach AI starts to solve its own trust problem by addressing one of the initial play areas. The opportunity around master data quality and the ability to link physical equipment with digital representations for accuracy is, in my view, the first part of the AI journey in our context."

David Small, Principal Sales Leader for IBM Automation Software in Australia and New Zealand, echoes the need for organisational data layers to be updated and as accurate as possible to enable smarter asset management. "Without quality data, organisations will struggle to achieve key benefits as they move along the asset maturity curve," he says. "Data is the building block that asset management systems need and rely on to support the company's chosen asset management strategies.

"The direction that IBM has taken is to embed AI capability within the Maximo Application Suite, versus saying to organisations, go and build your own AI capability. These AI enhancements are integrated AI experiences that deliver immediate value, increase productivity and seamlessly scale."

Adding capability and building an AI-integrated business

Looking further into an AI modernisation programme, companies might look to select an AI partner to help them further investigate the ways and means AI can be implemented, be it in a proof of concept or in solving discrete problems.
Partnering with a specialist in AI can help companies build capability within the business from a cross-functional perspective around asset management. "This phase is around automation of things like non-value tasks, work order administration, and master data creation as examples," McGowan says. "There is a lot of work that goes into master data creation. So, following from a solution in the first phase that knows what good data looks like, you can take it to the next phase of being a master data specialist that understands how to take a piece of equipment and represent it digitally and accurately."

When a company is running with AI, it is essentially changing its operating business model. With each new wave of technology, transformation initiatives should not be about solving technology problems, but about enhancing the operational capability of the business. McGowan notes that business parameters will need to change to enable technology to solve business problems if the integration is significant. In the case of AI, this may look like redefining career paths, the operating model and the job descriptions for individual components, and then rolling out the purpose of AI.

"I can almost see a place where you define a job description for the AI agent or component, and it performs that task, has regular performance reviews, learns from its mistakes, and is managed like anyone else in a business," he says. "I think that's the evolution: creating a space where AI has a role to play that is embedded in the organisation and the operating model, which measures and manages it like every other human resource within the business."

Playing the long game

With every new wave of technology across data, analytics, automation and AI, business leaders are often targeted and lobbied to act with urgency on a new solution. The warning to 'innovate or be left behind' is regularly bandied about in connection with these concepts.
While McGowan notes the AI opportunity in asset management represents considerable potential, he feels strongly that it must be approached in a way driven by practical business needs, organisational fit and readiness, rather than hype.

"The potential for AI, particularly in the asset management space, is almost endless, because largely the sector has not gone through as many digital revolutions as other industries and sectors have," he says. "There are opportunities to do things like provide real-time feedback around avoiding potential incidents, and the automation of non-value-adding tasks will also be significant.

"But it's important, when we talk about driving efficiency with AI, that we evolve our thinking in terms of what it means for careers and jobs and tasks, and work to make those AI-infused rather than AI-replaced.

"We need to look at evolving the operating model to support the introduction of efficiency gains from AI, and then driving your valued labour to much higher-value tasks."

This messaging will form part of the jumping-off point for COSOL's upcoming roundtable series in Australia, with 60 asset management leaders across Sydney, Melbourne and Brisbane coming together in the coming weeks to discuss the pivotal role of AI in shaping the future of asset management. Many of the insights that emerge will be shared in an upcoming report.
