
Broadridge patents GenAI applications

Finextra, 14 May 2025

Broadridge Financial Solutions Inc. (NYSE:BR) has been awarded a new U.S. patent on its large language model (LLM) orchestration of machine learning agents.
These patented methods and systems underpin BondGPT, Broadridge's award-winning GenAI application, first demonstrated in the market on the e-trading platform of LTX, a Broadridge subsidiary. BondGPT was released in June 2023, followed by the enterprise version, BondGPT+, in October 2023. These applications provide timely, secure and accurate responses to natural language questions, using OpenAI GPT models and the orchestration of multiple AI agents to automatically retrieve and process data from multiple datasets and analytical models simultaneously.
'We are consistently developing innovative data science and execution capabilities to improve our clients' pre-trade and trade execution workflows,' said Jim Kwiatkowski, CEO of LTX. 'As we reflect on the positive feedback we've received about the value and uniqueness of BondGPT, it's validating to receive this patent for our innovations. We will continue to work closely with clients integrating AI into their workflows to increase productivity and optimize trading.'
BondGPT and BondGPT+ harness powerful AI and machine learning to offer enhanced, personalized trading capabilities to corporate bond traders, portfolio managers, and analysts on the buy- and sell-side. By deploying Broadridge's patented methods for LLM orchestration of machine learning agents, the BondGPT+ enterprise application integrates clients' proprietary data and analytical models, third-party datasets, as well as sophisticated personalization features, and provides unparalleled access to critical pre-trade data and models, improving efficiency and saving valuable time for users.
Other significant features patented in U.S. Patent No. 12,061,970 include:
- Explainability as to how the output of the patented methods of LLM orchestration of machine learning agents was generated, via a "Show your work" feature that offers step-by-step transparency;
- A multi-agent adversarial feature for enhanced accuracy;
- An AI-powered compliance verification feature, based on custom compliance rules configured to an enterprise's unique compliance and risk management processes; and
- The use of user profile attributes, such as user role, to inform data retrieval and security.
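The pattern the patent describes, multiple agents queried in parallel over different datasets, with the merged answer passed through a compliance check and a per-agent "show your work" trace, can be sketched roughly as follows. This is an illustrative outline only, not Broadridge's patented implementation; every name here (the agent functions, `COMPLIANCE_RULES`, `orchestrate`) is a hypothetical stand-in.

```python
# Illustrative sketch of multi-agent orchestration with a compliance check,
# loosely modeled on the features described above. All agents and rules are
# hypothetical stand-ins, not Broadridge's actual implementation.
from concurrent.futures import ThreadPoolExecutor

def liquidity_agent(question: str) -> str:
    return f"liquidity data for: {question}"

def similarity_agent(question: str) -> str:
    return f"similar bonds for: {question}"

ANSWER_AGENTS = [liquidity_agent, similarity_agent]

# Custom compliance rules: each returns None if OK, else a violation message.
COMPLIANCE_RULES = [
    lambda text: "contains investment advice" if "you should buy" in text else None,
]

def orchestrate(question: str) -> dict:
    """Dispatch the question to all agents in parallel, merge the results,
    then verify the merged answer against the compliance rules."""
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(lambda agent: agent(question), ANSWER_AGENTS))
    answer = "; ".join(partials)
    violations = [v for rule in COMPLIANCE_RULES if (v := rule(answer))]
    return {
        "answer": answer if not violations else None,
        "violations": violations,
        "trace": partials,  # "show your work": per-agent contributions
    }

result = orchestrate("AAPL 2030 bonds")
print(result["answer"])
```

The design choice worth noting is that compliance runs after aggregation: a single rule set can veto the combined answer regardless of which agent produced the offending fragment, mirroring the article's description of rules "configured to an enterprise's unique compliance and risk management processes".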
The announcement builds on the momentum of other patents awarded for LTX's fixed income trading technology, including innovations in bond similarity, dealer selection scoring, liquidity aggregation, and the RFQ+ trading protocol.


Related Articles

NVIDIA DGX Spark: World's First 128GB LLM Mini System with GB10 Grace Blackwell Superchip

Geeky Gadgets

3 hours ago


NVIDIA DGX Spark: World's First 128GB LLM Mini System with GB10 Grace Blackwell Superchip

What if the future of artificial intelligence wasn't just smarter but also smaller? Imagine a system so compact it could fit into the tightest spaces, yet powerful enough to process vast datasets and deliver real-time insights. Enter the NVIDIA DGX Spark, the world's first 128GB large language model (LLM) mini system and a new leap in AI technology. By condensing the capabilities of massive AI systems into a sleek, efficient design, this innovation challenges the notion that bigger is always better. Powered by the NVIDIA GB10 Grace Blackwell Superchip, the DGX Spark delivers 1 petaFLOP of AI performance. With the NVIDIA AI software stack preinstalled and 128GB of memory, developers can prototype, fine-tune, and run inference on the latest generation of reasoning AI models from DeepSeek, Meta, Google, and others, with up to 200 billion parameters, locally.

In this video, Alex Ziskind explores how the 128GB LLM system is reshaping the AI landscape. From its remarkable balance of power and portability to its ability to tackle complex tasks across industries like healthcare, finance, and logistics, this compact powerhouse is setting a new standard for what AI systems can achieve. You'll discover how its energy-efficient design not only boosts performance but also aligns with sustainability goals, making it a forward-thinking solution for modern challenges. As we unpack its features and applications, one question lingers: could this be the start of a new era where AI is not just smarter but also more accessible?

Key Features of the 128GB LLM Mini

The NVIDIA DGX Spark 128GB LLM system stands out as a new achievement in AI technology. By integrating the capabilities of large-scale models into a compact framework, it addresses the increasing demand for high-performance AI systems that can function effectively in space-limited environments.
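The headline claim, 200-billion-parameter models in 128GB of memory, only works with aggressive weight quantization; the article does not say which precision is used, so the 4-bit figure below is an assumption. A back-of-envelope check:

```python
# Back-of-envelope memory check: can a 200B-parameter model fit in 128GB?
# Assumption (not stated in the article): weights quantized to ~4 bits each.

def model_memory_gb(num_params: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return num_params * bits_per_param / 8 / 1e9

params = 200e9  # 200 billion parameters, per the article

fp16 = model_memory_gb(params, 16)  # 400 GB: far too large for this box
q4 = model_memory_gb(params, 4)     # 100 GB: fits in 128 GB, leaving headroom
                                    # for activations and the KV cache

print(f"FP16: {fp16:.0f} GB, 4-bit: {q4:.0f} GB")
```

In other words, the same weights that would need roughly 400GB at FP16 shrink to about 100GB at 4-bit precision, which is what makes the "up to 200 billion parameters locally" claim plausible on a 128GB device.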
Its design prioritizes portability without sacrificing computational power, making it an ideal choice for businesses, researchers, and innovators seeking to deploy advanced AI solutions without the need for extensive infrastructure. What sets the NVIDIA DGX Spark apart is its ability to deliver robust performance while maintaining a compact form factor. This balance of power and portability ensures that it can meet the needs of diverse applications, from real-time data analysis to predictive modeling, all while operating efficiently in constrained spaces.

Exceptional Performance for Complex Applications

The NVIDIA DGX Spark is engineered to excel at complex, data-intensive tasks with remarkable speed and accuracy. Its high-capacity architecture enables it to process vast datasets efficiently, supporting real-time analysis and decision-making. This capability is particularly valuable for industries that rely on actionable insights to address intricate challenges. Key sectors benefiting from this technology include:

- Healthcare: using AI to analyze patient data, support diagnostics, and identify emerging health trends.
- Finance: enhancing risk assessment, fraud detection, and financial forecasting with precision and speed.
- Logistics: optimizing supply chain operations through predictive analytics and automation.

By delivering reliable insights and automating decision-making processes, the 128GB LLM system enables industries to tackle complex challenges with greater efficiency and confidence.

Efficiency and Sustainability in AI

Efficiency is a defining characteristic of the NVIDIA DGX Spark.
Its design focuses on optimizing resource utilization to deliver advanced performance while minimizing energy consumption and processing time. This balance ensures consistent, high-speed outputs, making it a practical solution for applications requiring both reliability and sustainability. Examples of its use include:

- Natural language processing to enhance customer support systems with faster and more accurate responses.
- Predictive analytics for identifying market trends and improving business forecasting.
- Personalized recommendation systems in retail to improve customer engagement and satisfaction.

The streamlined operation of the LLM system not only reduces operational overhead but also supports environmentally conscious AI development by lowering energy demands, making it a forward-thinking tool for modern industries.

Adaptability Across Diverse Sectors

The versatility of the NVIDIA DGX Spark ensures its applicability across a wide range of industries. Its scalable architecture allows businesses to customize the model to meet their specific needs, providing tailored solutions for unique challenges. Some of the key applications include:

- Retail: enhancing customer experiences through AI-driven personalized shopping recommendations.
- Healthcare: supporting medical research, diagnostics, and patient care with data-driven insights.
- Finance: streamlining operations, improving decision-making, and increasing overall efficiency.

This adaptability positions the 128GB LLM unit as a critical tool for organizations aiming to maintain a competitive edge in an increasingly AI-driven landscape. Its ability to integrate seamlessly into various workflows ensures that it can meet the evolving demands of modern industries.
Driving the Future of Artificial Intelligence

The introduction of the NVIDIA DGX Spark, powered by the NVIDIA GB10 Grace Blackwell Superchip, represents a pivotal step forward in the development of artificial intelligence. Its compact design, combined with advanced computational capabilities and energy efficiency, establishes it as a cornerstone of next-generation AI technology. As industries continue to adopt AI-driven solutions, the LLM unit provides a scalable, high-capacity option that meets the demands of contemporary applications while paving the way for future advancements. By bridging the gap between performance and portability, the 128GB LLM system offers a practical and innovative solution for businesses, researchers, and innovators. Its ability to deliver powerful results in a compact form factor ensures that it will play a central role in shaping the future of AI, enabling new possibilities and driving progress across diverse sectors.

Media Credit: Alex Ziskind

NatWest appoints first chief AI research officer

Finextra

a day ago


NatWest appoints first chief AI research officer

NatWest has appointed Dr Maja Pantic as its first chief AI research officer.

Pantic currently serves as Professor of Affective & Behavioural Computing at Imperial College London, having previously been founding research director of the Samsung AI Research Centre in Cambridge and AI scientific research director at Meta London. At NatWest, Pantic will be tasked with developing use cases for multimodal AI and generative AI, including combatting deepfake threats. This will entail the progressive roll-out of AI for bank-wide simplification via the development of tools for improving productivity.

Scott Marcar, CIO of NatWest Group, says: 'It's not the first time I've said that AI is helping us to be a simpler NatWest and to transform our customers' experiences as we become even more of a trusted partner in the moments that matter most. Maja's appointment is another important and exciting milestone; her unique skills and experience will help us adapt and meet customers' changing needs faster, and more effectively, whilst complementing our team's existing capabilities.'

The appointment builds on recent momentum in NatWest's AI strategy, including its collaboration with OpenAI, the roll-out of the bank's internal GenAI platform to all staff, and improvements in how it operates and serves customers through virtual assistant tools like Cora+ and AskArchie+. The bank claims the GenAI functionality offered by Cora+ has delivered a 150% improvement in customer satisfaction while reducing the number of times a colleague needs to intervene, with similar benefits from AskArchie+, where up to 75% of HR queries no longer need human intervention.

When billion-dollar AIs break down over puzzles a child can do, it's time to rethink the hype

The Guardian

2 days ago


When billion-dollar AIs break down over puzzles a child can do, it's time to rethink the hype

A research paper by Apple has taken the tech world by storm, all but eviscerating the popular notion that large language models (LLMs, and their newest variant, LRMs, large reasoning models) are able to reason reliably. Some are shocked by it, some are not. The well-known venture capitalist Josh Wolfe went so far as to post on X that 'Apple [had] just GaryMarcus'd LLM reasoning ability' – coining a new verb (and a compliment to me), referring to 'the act of critically exposing or debunking the overhyped capabilities of artificial intelligence … by highlighting their limitations in reasoning, understanding, or general intelligence'.

Apple did this by showing that leading models such as ChatGPT, Claude and DeepSeek may 'look smart – but when complexity rises, they collapse'. In short, these models are very good at a kind of pattern recognition, but often fail when they encounter novelty that forces them beyond the limits of their training, despite being, as the paper notes, 'explicitly designed for reasoning tasks'. As discussed later, there is a loose end that the paper doesn't tie up, but on the whole its force is undeniable. So much so that LLM advocates are already partly conceding the blow while hinting at, or at least hoping for, happier futures ahead.

In many ways the paper echoes and amplifies an argument that I have been making since 1998: neural networks of various kinds can generalise within the distribution of data they are exposed to, but their generalisations tend to break down beyond that distribution. A simple example of this is that I once trained an older model to solve a very basic mathematical equation using only even-numbered training data. The model was able to generalise a little: it could solve for even numbers it hadn't seen before, but it was unable to do so for problems where the answer was an odd number. More than a quarter of a century later, when a task is close to the training data, these systems work pretty well.
But as they stray further from that data, they often break down, as they did in the Apple paper's more stringent tests. Such limits arguably remain the single most important weakness of LLMs. The hope, as always, has been that 'scaling' the models by making them bigger would solve these problems. The new Apple paper resoundingly rebuts these hopes. The researchers challenged some of the latest, greatest, most expensive models with classic puzzles, such as the Tower of Hanoi, and found that deep problems lingered. Combined with numerous hugely expensive failures in efforts to build GPT-5 level systems, this is very bad news.

The Tower of Hanoi is a classic game with three pegs and multiple discs, in which you need to move all the discs on the left peg to the right peg, never stacking a larger disc on top of a smaller one. With practice, though, a bright (and patient) seven-year-old can do it. What Apple found was that leading generative models could barely do seven discs, getting less than 80% accuracy, and pretty much couldn't get scenarios with eight discs correct at all. It is truly embarrassing that LLMs cannot reliably solve Hanoi. And, as the paper's co-lead author Iman Mirzadeh told me via DM, 'it's not just about "solving" the puzzle. We have an experiment where we give the solution algorithm to the model, and [the model still failed] … based on what we observe from their thoughts, their process is not logical and intelligent'.

The new paper also echoes and amplifies several arguments that the Arizona State University computer scientist Subbarao Kambhampati has been making about the newly popular LRMs. He has observed that people tend to anthropomorphise these systems, to assume they use something resembling 'steps a human might take when solving a challenging problem'. And he has previously shown that they in fact have the same kind of problem that Apple documents.
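To make concrete what the models are failing at: the standard recursive algorithm, known long before modern AI, solves any n-disc Tower of Hanoi in exactly 2^n − 1 moves. A minimal sketch in Python:

```python
def hanoi(n: int, src: str, aux: str, dst: str, moves: list) -> None:
    """Move n discs from src to dst via aux, never placing larger on smaller."""
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)  # park the top n-1 discs on the spare peg
    moves.append((src, dst))            # move the largest disc directly
    hanoi(n - 1, aux, src, dst, moves)  # stack the n-1 discs back on top of it

moves = []
hanoi(8, "left", "middle", "right", moves)
print(len(moves))  # 2**8 - 1 = 255 moves for the 8-disc case the models fail on
```

The eight-disc instance that stumped the models takes just 255 mechanical moves from this three-line recursion, which is exactly the author's point about well-specified conventional algorithms.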
If you can't use a billion-dollar AI system to solve a problem that Herb Simon (one of the actual godfathers of AI) solved with classical (but out of fashion) AI techniques in 1957, the chances that models such as Claude or o3 will reach artificial general intelligence (AGI) seem truly remote.

So what's the loose end I warned you about? Well, humans aren't perfect either. On a puzzle like Hanoi, ordinary humans actually have a bunch of (well-known) limits that somewhat parallel what the Apple team discovered. Many (not all) humans screw up on versions of the Tower of Hanoi with eight discs. But look, that's why we invented computers, and for that matter calculators: to reliably compute solutions to large, tedious problems. AGI shouldn't be about perfectly replicating a human; it should be about combining the best of both worlds: human adaptiveness with computational brute force and reliability. We don't want an AGI that fails to 'carry the one' in basic arithmetic just because humans sometimes do.

Whenever people ask me why I actually like AI (contrary to the widespread myth that I am against it), and think that future forms of AI (though not necessarily generative AI systems such as LLMs) may ultimately be of great benefit to humanity, I point to the advances in science and technology we might make if we could combine the causal reasoning abilities of our best scientists with the sheer compute power of modern digital computers.

What the Apple paper shows, most fundamentally, regardless of how you define AGI, is that the LLMs that have generated so much hype are no substitute for good, well-specified conventional algorithms. (They also can't play chess as well as conventional algorithms, can't fold proteins like special-purpose neurosymbolic hybrids, can't run databases as well as conventional databases, etc.) What this means for business is that you can't simply drop o3 or Claude into some complex problem and expect it to work reliably.
What it means for society is that we can never fully trust generative AI; its outputs are just too hit-or-miss. One of the most striking findings in the new paper was that an LLM may well work on an easy test set (such as Hanoi with four discs) and seduce you into thinking it has built a proper, generalisable solution when it has not.

To be sure, LLMs will continue to have their uses, especially for coding, brainstorming and writing, with humans in the loop. But anybody who thinks LLMs are a direct route to the sort of AGI that could fundamentally transform society for the good is kidding themselves.

This essay was adapted from Gary Marcus's newsletter, Marcus on AI. Gary Marcus is a professor emeritus at New York University, the founder of two AI companies, and the author of six books, including Taming Silicon Valley.
