
Exclusive: How Oracle's AI agents are driving a new era of enterprise automation
Oracle's AI-powered agents are reshaping business automation, with finance teams seeing both dramatic efficiency gains and cost savings.
When Rondy Ng joined Oracle as a developer back in 1990, the idea of artificial intelligence running enterprise processes was the stuff of science fiction.
Today, as EVP of Applications Development at Oracle, Ng witnesses Oracle's AI evolution firsthand - overseeing a sweeping transformation in the way global businesses operate.
"We have more than 14,000 customers on our Fusion platform," Ng explained to TechDay during a recent interview. "More than 50 percent of Fortune 500 companies run on Fusion Cloud ERP."
Ng's role now spans far beyond product development.
"My job is not just about building technology, but about helping the world's largest companies - CIOs, CFOs, and CEOs - transform their businesses. And that's exciting, especially now with the rise of AI."
AI agents, he explained, mark the next major shift in enterprise automation - especially in finance departments that have historically relied on complex, manual processes. Oracle has already embedded over 50 AI agents throughout its Fusion Cloud Applications Suite.
"These agents automate end-to-end workflows, personalise insights, and act on behalf of specific roles - all within the context of the business," he said.
Ng gave a striking example of how the technology works in finance. "In accounts payable, you might receive thousands of invoices in different formats - PDFs, images, even handwritten notes. Our Document IO Agent reads and interprets all of these, then enriches the data and creates invoices, journal entries or payment instructions ready for review."
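The pipeline Ng describes - ingest documents in mixed formats, extract fields, enrich them, and draft records for human review - can be sketched in miniature. This is purely an illustrative toy, not Oracle's implementation; every name here is hypothetical, and the "parsing" step stands in for the ML-driven extraction a real agent would perform:

```python
from dataclasses import dataclass

# Hypothetical sketch of a document-to-draft-invoice pipeline.
# A real agent would use OCR and ML models for extraction; here the
# "document" body is already simple key=value text.

@dataclass
class DraftInvoice:
    vendor: str
    amount: float
    source_format: str
    status: str = "pending_review"  # agents draft; humans approve

def parse(doc: dict) -> dict:
    # Stand-in for format-specific extraction (PDF text, image OCR,
    # handwriting recognition).
    fields = dict(pair.split("=") for pair in doc["body"].split(";"))
    return {"vendor": fields["vendor"], "amount": float(fields["amount"])}

def enrich(fields: dict) -> dict:
    # Enrichment step: canonical vendor casing, rounded amount.
    return {"vendor": fields["vendor"].strip().title(),
            "amount": round(fields["amount"], 2)}

def to_draft_invoice(doc: dict) -> DraftInvoice:
    f = enrich(parse(doc))
    return DraftInvoice(f["vendor"], f["amount"], doc["format"])

draft = to_draft_invoice({"format": "pdf",
                          "body": "vendor=acme corp;amount=1299.504"})
print(draft)
```

The key point the sketch preserves is the last step: the agent produces a draft "ready for review" rather than posting directly, keeping a human in the loop.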
Crucially, he added, these AI agents go well beyond traditional rule-based automation.
"They're powered by large language models, so they can carry out multi-step processes, understand user roles, respond to natural-language prompts and adapt to new scenarios."
Ng believes the arrival of AI agents will redefine how people interact with enterprise software.
"In the past, finance users had to learn complex systems just to do basic data entry. Now, they can use natural language - just like texting on a phone - to tell the system what to do. It's a complete change in user experience."
He gave another example: "Let's say you want to change a cost centre across a group of invoices. You no longer need to go into multiple screens. You just say it - 'Change the cost centre on these invoices to this one' - and the AI agent does it for you."
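The shape of that interaction - a natural-language command translated into a structured bulk update - can be sketched as follows. This is a toy, not Oracle's implementation: a Fusion agent would use a large language model for intent parsing, where a simple regex stands in here, and all function names are hypothetical:

```python
import re

# Toy sketch: turn a natural-language command like Ng's example into a
# structured action, then apply it to a set of records. A production
# agent would parse intent with an LLM; a regex stands in here.

def parse_command(text: str) -> dict:
    m = re.search(r"change the cost centre on these invoices to (\S+)",
                  text, re.IGNORECASE)
    if not m:
        raise ValueError("unrecognised command")
    return {"action": "bulk_update", "field": "cost_centre",
            "value": m.group(1)}

def apply_update(invoices: list, cmd: dict) -> list:
    # One structured action replaces clicking through multiple screens.
    return [{**inv, cmd["field"]: cmd["value"]} for inv in invoices]

invoices = [{"id": 1, "cost_centre": "CC-100"},
            {"id": 2, "cost_centre": "CC-200"}]
cmd = parse_command("Change the cost centre on these invoices to CC-500")
updated = apply_update(invoices, cmd)
print(updated)
```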
This kind of innovation, he said, is only possible on a true SaaS platform that's constantly updated. "We push updates quarterly. Everyone is on the latest version, and customers can choose when to turn new features on. It's like your phone's iOS updates - seamless, with innovations ready when you are."
The flexibility of Oracle's platform has attracted major clients like Hearst, one of the world's largest media companies. Hearst uses Oracle's AI capabilities to automate invoice processing, drive intelligent payments and unlock dynamic discounting with suppliers.
"With Intelligent Payments, Hearst could offer early payment proposals automatically, saving hundreds of thousands in costs," Ng said. "And their invoice matching accuracy jumped from 70 percent to 95 percent over time - dramatically reducing manual effort."
Ng was quick to point out that businesses still clinging to legacy systems are likely to be left behind. "Many of our customers are still in the early stages of moving off legacy platforms," he said.
"But to get the full benefits of AI, you have to simplify and standardise your processes first."
He warned against delaying the shift. "If your data is fragmented across ten different systems, AI can't help you much. You need a unified SaaS platform as a prerequisite. Don't wait until your competitors have already moved and are reaping the benefits."
He believes many businesses misunderstand the urgency. "I know some CFOs are thinking, 'My current system is good enough.' But AI is moving fast. The longer you wait, the harder it will be to catch up."
Oracle's AI agents are already proving their worth, but Ng sees this as only the beginning.
"The long-term vision is for AI agents to fundamentally change the way work gets done," he said. "We're heading towards a future where agents communicate with each other - across departments, across functions - to automate even more complex processes."
His early instincts about the value of AI have only deepened with time.
"We've been building automation into our systems for years," he said. "But AI makes it simpler, smarter and far more powerful."
Despite the hype around AI, Ng stressed that this isn't a passing trend - it's a tectonic shift. "We're still at the beginning," he said. "But this is the direction the enterprise world is going. And there's no turning back."
Ng ended with a message for decision-makers weighing the move to AI: "My recommendation? Don't wait. It's now."
Related Articles


Techday NZ - 9 hours ago
Oracle unveils AMD-powered zettascale AI cluster for OCI cloud
Oracle has announced it will be one of the first hyperscale cloud providers to offer artificial intelligence (AI) supercomputing powered by AMD's Instinct MI355X GPUs on Oracle Cloud Infrastructure (OCI). The forthcoming zettascale AI cluster is designed to scale up to 131,072 MI355X GPUs and is architected to support high-performance, production-grade AI training, inference, and new agentic workloads. The cluster is expected to offer more than double the price-performance of the previous hardware generation.

Expanded AI capabilities
The announcement highlights several key hardware and performance enhancements. The MI355X-powered cluster provides 2.8 times higher throughput for AI workloads. Each GPU features 288 GB of high-bandwidth memory (HBM3) and eight terabytes per second (TB/s) of memory bandwidth, allowing larger models to run entirely in memory and boosting both inference and training speeds. The GPUs also support the FP4 compute standard, a four-bit floating-point format that enables more efficient, high-speed inference for large language and generative AI models.

The cluster's infrastructure includes dense, liquid-cooled racks, each housing 64 GPUs and consuming up to 125 kilowatts, to maximise performance density for demanding AI workloads. This marks the first deployment of AMD's Pollara AI NICs to enhance RDMA networking, offering next-generation high-performance, low-latency connectivity.

Mahesh Thiagarajan, Executive Vice President, Oracle Cloud Infrastructure, said: "To support customers that are running the most demanding AI workloads in the cloud, we are dedicated to providing the broadest AI infrastructure offerings. AMD Instinct GPUs, paired with OCI's performance, advanced networking, flexibility, security, and scale, will help our customers meet their inference and training needs for AI workloads and new agentic applications."
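A back-of-envelope calculation puts the stated memory figures in perspective. Assuming 4 bits (0.5 bytes) per parameter at FP4 and ignoring activations, KV cache, and framework overhead - so this is a rough upper bound only - the 288 GB of HBM3 per GPU implies:

```python
# Rough upper bound: how many parameters fit in one MI355X's 288 GB HBM3?
# Assumes 0.5 bytes/parameter at FP4 and ignores activations, KV cache,
# and framework overhead.

hbm_bytes = 288 * 10**9          # 288 GB per GPU, as stated
bytes_per_param_fp4 = 0.5        # 4-bit floating point
bytes_per_param_fp16 = 2.0       # 16-bit baseline for comparison

max_params_fp4 = hbm_bytes / bytes_per_param_fp4
max_params_fp16 = hbm_bytes / bytes_per_param_fp16

print(f"FP4:  ~{max_params_fp4 / 1e9:.0f}B parameters per GPU")
print(f"FP16: ~{max_params_fp16 / 1e9:.0f}B parameters per GPU")
```

Roughly four times as many parameters fit per GPU at FP4 as at FP16, which is why the format matters for running larger models entirely in memory.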
The zettascale OCI Supercluster with AMD Instinct MI355X GPUs delivers a high-throughput, ultra-low latency RDMA cluster network architecture for up to 131,072 MI355X GPUs. AMD claims the MI355X provides almost three times the compute power and a 50 percent increase in high-bandwidth memory over its predecessor.

Performance and flexibility
Forrest Norrod, Executive Vice President and General Manager, Data Center Solutions Business Group, AMD, commented on the partnership: "AMD and Oracle have a shared history of providing customers with open solutions to accommodate high performance, efficiency, and greater system design flexibility. The latest generation of AMD Instinct GPUs and Pollara NICs on OCI will help support new use cases in inference, fine-tuning, and training, offering more choice to customers as AI adoption grows."

The Oracle platform aims to support customers running the largest language models and diverse AI workloads. OCI users on the MI355X-powered shapes can expect up to 2.8 times greater throughput, resulting in faster results, lower latency, and the capability to run larger models. The MI355X's memory and bandwidth enhancements are designed to enable both fast training and efficient inference for demanding AI applications, while support for the FP4 format allows cost-effective deployment of modern AI models, enhancing speed and reducing hardware requirements.

The dense, liquid-cooled infrastructure supports 64 GPUs per rack, each operating at up to 1,400 watts, and is engineered to optimise training times and throughput while reducing latency. A powerful head node, equipped with an AMD Turin high-frequency CPU and up to 3 TB of system memory, helps users maximise GPU performance through efficient job orchestration and data processing.
Open-source and network advances
AMD emphasises broad compatibility and customer flexibility through its open-source ROCm stack, which lets customers use flexible architectures and reuse existing code without vendor lock-in. ROCm encompasses popular programming models, tools, compilers, libraries, and runtimes for AI and high-performance computing development on AMD hardware.

Network infrastructure for the new supercluster will feature AMD's Pollara AI NICs, which provide advanced RDMA over Converged Ethernet (RoCE) features, programmable congestion control, and support for open standards from the Ultra Ethernet Consortium, facilitating low-latency, high-performance connectivity among large numbers of GPUs.

The Oracle-AMD collaboration is expected to give organisations greater capacity to run complex AI models, speed up inference times, and scale production-grade AI workloads economically and efficiently.


Techday NZ - a day ago
Oracle & NVIDIA expand OCI partnership with 160 AI tools
Oracle and NVIDIA have expanded their partnership to give customers access to more than 160 AI tools and agents, alongside the computing resources needed for AI development and deployment. The collaboration brings NVIDIA AI Enterprise, a cloud-native software platform, natively to the Oracle Cloud Infrastructure (OCI) Console. Oracle customers can now use the platform across OCI's distributed cloud, including public regions, Government Clouds, and sovereign cloud solutions.

Platform access and capabilities
By integrating NVIDIA AI Enterprise directly through the OCI Console rather than a marketplace, Oracle lets customers use their existing Universal Credits, streamlining transactions and support. The approach is designed to speed up deployment and help customers meet security, regulatory, and compliance requirements in enterprise AI processes.

Customers can now access over 160 AI tools focused on training and inference, including NVIDIA NIM microservices. These services aim to simplify the deployment of generative AI models and support a broad set of application-building and data-management needs across varied deployment scenarios.

"Oracle has become the platform of choice for AI training and inferencing, and our work with NVIDIA boosts our ability to support customers running some of the world's most demanding AI workloads," said Karan Batta, Senior Vice President, Oracle Cloud Infrastructure. "Combining NVIDIA's full-stack AI computing platform with OCI's performance, security, and deployment flexibility enables us to deliver AI capabilities at scale to help advance AI efforts globally."

The partnership also makes NVIDIA GB200 NVL72 systems available on the OCI Supercluster, supporting up to 131,072 NVIDIA Blackwell GPUs. The new architecture provides liquid-cooled infrastructure targeting large-scale AI training and inference requirements.
Governments and enterprises can take advantage of so-called AI factories, using platforms such as NVIDIA's GB200 NVL72 for agentic AI tasks that rely on advanced reasoning models and efficiency enhancements.

Developer access to advanced resources
Oracle has become one of the first major cloud providers to integrate with NVIDIA DGX Cloud Lepton, which links developers to a global marketplace of GPU compute. The integration gives developers access to OCI's high-performance GPU clusters for needs including AI training, inference, digital-twin implementations, and parallel HPC applications.

Ian Buck, Vice President of Hyperscale and HPC at NVIDIA, said: "Developers need the latest AI infrastructure and software to rapidly build and launch innovative solutions. With OCI and NVIDIA, they get the performance and tools to bring ideas to life, wherever their work happens."

Developers can also select compute resources in precise regions, helping them meet both strategic and sovereign AI aims and satisfy long-term and on-demand requirements.

Customer projects using joint capabilities
Enterprises in Europe and internationally are already making use of the expanded partnership. Almawave, based in Italy, uses OCI AI infrastructure and NVIDIA Hopper GPUs to run generative AI model training and inference for its Velvet family of models, which supports Italian alongside other European languages and is being deployed within Almawave's AIWave platform.

"Our commitment is to accelerate innovation by building a high-performing, transparent, and fully integrated Italian foundational AI in a European context - and we are only just getting started," said Valeria Sandei, Chief Executive Officer, Almawave. "Oracle and NVIDIA are valued partners for us in this effort, given our common vision around AI and the powerful infrastructure capabilities they bring to the development and operation of Velvet."
Danish health technology company Cerebriu is using OCI with NVIDIA Hopper GPUs to build an AI-driven tool for clinical brain MRI analysis. Cerebriu's deep learning models, trained on thousands of multi-modal MRI images, aim to reduce the time required to interpret scans, potentially benefiting the clinical diagnosis of time-sensitive neurological conditions.

"AI plays an increasingly critical role in how we design and differentiate our products," said Marko Bauer, Machine Learning Researcher, Cerebriu. "OCI and NVIDIA offer AI capabilities that are critical to helping us advance our product strategy, giving us the computing resources we need to discover and develop new AI use cases quickly, cost-effectively, and at scale. Finding the optimal way of training our models has been a key focus for us. While we've experimented with other cloud platforms for AI training, OCI and NVIDIA have provided us the best cloud infrastructure availability and price performance."

With the expanded Oracle-NVIDIA partnership, customers can choose from a wide variety of AI tools and infrastructure options within OCI, supporting both research and production environments for AI solution development.


Techday NZ - a day ago
Databricks launches Lakebase Postgres database for AI era
Databricks has launched Lakebase, a fully managed Postgres database designed specifically for artificial intelligence (AI) applications, now available in Public Preview. Lakebase integrates an operational database layer into Databricks' Data Intelligence Platform, with the goal of enabling developers and enterprises to build data applications and AI agents more efficiently on a single multi-cloud environment.

Purpose-built for AI workloads
Operational databases, commonly known as Online Transaction Processing (OLTP) systems, are fundamental to application development across industries, and the market for them is estimated at over USD $100 billion. However, many OLTP systems are based on architectures developed decades ago, which makes them hard to manage, inflexible, and expensive. The shift towards AI-driven applications has introduced new technical requirements, including real-time data handling and scalable architecture that supports AI workloads at speed and scale.

Lakebase, which leverages Neon technology, brings operational data into the lakehouse architecture, combining low-cost data storage with computing resources that automatically scale to meet workload requirements. This design allows operational and analytical systems to converge, reducing latency for AI processes and giving enterprises current data for real-time decision-making.

"We've spent the past few years helping enterprises build AI apps and agents that can reason on their proprietary data with the Databricks Data Intelligence Platform," said Ali Ghodsi, Co-founder and CEO of Databricks. "Now, with Lakebase, we're creating a new category in the database market: a modern Postgres database, deeply integrated with the lakehouse and today's development stacks. As AI agents reshape how businesses operate, Fortune 500 companies are ready to replace outdated systems.
With Lakebase, we're giving them a database built for the demands of the AI era."

Key features
Lakebase separates compute and storage, supporting independent scaling for diverse workloads. Its cloud-native architecture offers low latency (under 10 milliseconds) and high concurrency (over 10,000 queries per second), and is designed for high-availability transactional operations. The service is built on Postgres, an open-source database engine widely used by developers and supported by a rich ecosystem.

For AI workloads, Lakebase launches in under a second and operates on a consumption-based payment model, so users only pay for the resources they use. Branching capabilities let developers create copy-on-write database clones, supporting safe testing and experimentation by both humans and AI agents. Lakebase automatically syncs data with lakehouse tables and provides an online feature store for machine-learning model serving. It also integrates with other Databricks services, including Databricks Apps and Unity Catalog. The database is managed entirely by Databricks, with encrypted data at rest, high availability, point-in-time recovery, and enterprise-grade compliance and security.

Market adoption and customer perspectives
According to the company, hundreds of enterprises participated in the Private Preview stage of Lakebase. Potential applications span sectors, from personalised product recommendations in retail to clinical trial workflow management in healthcare.

Jelle Van Etten, Head of Global Data Platform at Heineken, commented: "At Heineken, our goal is to become the best-connected brewer. To do that, we needed a way to unify all of our datasets to accelerate the path from data to value. Databricks has long been our foundation for analytics, creating insights such as product recommendations and supply chain enhancements.
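The copy-on-write branching idea - a clone that shares its parent's data until it writes, so creating one is effectively instant - can be illustrated with a toy in-memory model. This is not Lakebase's actual API or storage engine, just a sketch of the mechanism:

```python
# Toy illustration of copy-on-write branching (not Lakebase's actual API):
# a branch shares the parent's data until it writes, so creating one
# copies nothing up front.

class Branch:
    def __init__(self, parent=None):
        self.parent = parent
        self.local = {}          # only keys this branch has written

    def get(self, key):
        if key in self.local:
            return self.local[key]
        return self.parent.get(key) if self.parent else None

    def put(self, key, value):
        self.local[key] = value  # write lands locally; parent untouched

main = Branch()
main.put("invoice:1", {"amount": 100})

dev = Branch(parent=main)              # instant "clone": no data copied
dev.put("invoice:1", {"amount": 999})  # experiment safely on the branch

print(main.get("invoice:1"))  # parent unchanged
print(dev.get("invoice:1"))
```

This is why branching suits safe experimentation by humans and AI agents alike: writes on the branch never leak back into the parent.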
Our analytical data platform is now evolving to be an operational AI data platform and needs to deliver those insights to applications at low latency."

Anjan Kundavaram, Chief Product Officer at Fivetran, said: "Lakebase removes the operational burden of managing transactional databases. Our customers can focus on building applications instead of worrying about provisioning, tuning and scaling."

David Menninger, Executive Director at ISG Software Research, said: "Our research shows that the data and insights from analytical processes are the most critical data to enterprises' success. In order to act on that information, they must be able to incorporate it into operational processes via their business applications. These two worlds are no longer separate. By offering a Postgres-compatible, lakehouse-integrated system designed specifically for AI-native and analytical workloads, Databricks is giving customers a unified, developer-friendly stack that reduces complexity and accelerates innovation. This combination will help enterprises maximise the value they derive across their entire data estate - from storage to AI-enabled application deployment."

Integration and partner network
Lakebase is launching with support from a network of partners, including technology vendors and system integrators such as Accenture, Deloitte, Cloudflare, Informatica, Qlik, and Redis. These partnerships are designed to ease data integration, enhance business intelligence, and support governance as customers adopt Lakebase in their operational infrastructure.

Lakebase is now available in Public Preview, with further enhancements planned in the coming months. Customers can access the preview directly through their Databricks workspace.