
Latest news with #LLMs

Atlas Cloud Launches High-Efficiency AI Inference Platform, Outperforming DeepSeek

Miami Herald

6 hours ago

  • Business
  • Miami Herald

Developed with SGLang, Atlas Inference surpasses leading AI companies in throughput and cost, running DeepSeek V3 & R1 faster than DeepSeek itself.

NEW YORK CITY, NEW YORK / ACCESS Newswire / May 28, 2025 / Atlas Cloud, the all-in-one AI competency center for training and deploying AI models, today announced the launch of Atlas Inference, an AI inference platform that dramatically reduces GPU and server requirements, enabling faster, more cost-effective deployment of large language models (LLMs).

Atlas Inference, co-developed with SGLang, an AI inference engine, maximizes GPU efficiency by processing more tokens faster and with less hardware. In comparisons against DeepSeek's published performance results, Atlas Inference's 12-node H100 cluster outperformed DeepSeek's reference implementation of the DeepSeek-V3 model while using two-thirds as many servers. The platform reduces infrastructure requirements and operational costs while addressing hardware costs, which represent up to 80% of AI operational expenses.

"We built Atlas Inference to fundamentally break down the economics of AI deployment," said Jerry Tang, Atlas CEO. "Our platform's ability to process 54,500 input tokens and 22,500 output tokens per second per node means businesses can finally make high-volume LLM services profitable instead of merely break-even. I believe this will have a significant ripple effect throughout the industry. Simply put, we're surpassing industry standards set by hyperscalers by delivering superior throughput with fewer resources."

Atlas Inference's performance also exceeds that of major players like Amazon, NVIDIA and Microsoft, delivering up to 2.1 times greater throughput with 12 nodes than competitors' larger setups. It maintains sub-5-second first-token latency and 100-millisecond inter-token latency with more than 10,000 concurrent sessions, ensuring a superior experience at scale.

The platform's performance is driven by four key innovations:

  • Prefill/Decode Disaggregation: separates compute-intensive operations from memory-bound processes to optimize efficiency
  • DeepExpert (DeepEP) Parallelism with Load Balancers: ensures over 90% GPU utilization
  • Two-Batch Overlap Technology: increases throughput by enabling larger batches and using the compute and communication phases simultaneously
  • DisposableTensor Memory Models: prevents crashes during long sequences for reliable operation

"This platform represents a significant leap forward for AI inference," said Yineng Zhang, Core Developer at SGLang. "What we built here may become the new standard for GPU utilization and latency management. We believe this will unlock capabilities previously out of reach for the majority of the industry regarding throughput and efficiency."

Combined with a lower cost per token, linear scaling behavior, and reduced emissions compared to leading vendors, Atlas Inference provides cost-efficient and scalable AI deployment. Atlas Inference works with standard hardware and supports custom models, giving customers complete flexibility. Teams can upload fine-tuned models and keep them isolated on dedicated GPUs, making the platform ideal for organizations requiring brand-specific voice or domain expertise. The platform is available immediately for enterprise customers and early-stage startups.
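To put the quoted per-node figures in context, here is a rough back-of-the-envelope sketch in Python. The input and output token rates come from the announcement above; the eight-GPU node size and the $2.50-per-GPU-hour price are illustrative assumptions, not Atlas Cloud figures.

    # Back-of-the-envelope cost per million tokens from the per-node throughput
    # quoted in the announcement (54,500 input and 22,500 output tokens per second).
    # Node size and GPU-hour price below are assumptions, not Atlas Cloud figures.
    INPUT_TPS = 54_500                    # input tokens/second/node (from the article)
    OUTPUT_TPS = 22_500                   # output tokens/second/node (from the article)
    NODE_PRICE_PER_HOUR = 8 * 2.50        # assumed: 8 H100s per node at $2.50/GPU-hour

    tokens_per_node_hour = (INPUT_TPS + OUTPUT_TPS) * 3600
    cost_per_million_tokens = NODE_PRICE_PER_HOUR / (tokens_per_node_hour / 1_000_000)

    print(f"Tokens per node-hour: {tokens_per_node_hour:,}")
    print(f"Implied cost per million tokens: ${cost_per_million_tokens:.3f}")

Under these assumptions a single node processes roughly 277 million tokens per hour, which is the kind of arithmetic behind the claim that high-volume LLM services can move from break-even to profitable.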
About Atlas Cloud

Atlas Cloud is your all-in-one AI competency center, powering leading AI teams with safe, simple, and scalable infrastructure for training and deploying models. Atlas Cloud also offers an on-demand GPU platform that delivers fast, serverless compute. Backed by Dell, HPE, and Supermicro, Atlas delivers near-instant access to up to 5,000 GPUs across a global SuperCloud fabric with 99% uptime and baked-in compliance.

SOURCE: Atlas Cloud press release

To Create Value with AI, Improve the Quality of Your Unstructured Data

Harvard Business Review

7 hours ago

  • Business
  • Harvard Business Review

A company's content lies largely in 'unstructured data'—those emails, contracts, forms, SharePoint files, recordings of meetings, and so forth created via work processes. That proprietary content makes gen AI more distinctive, more knowledgeable about your products and services, less likely to hallucinate, and more likely to bring economic value. As a chief data officer we interviewed pointed out, 'You're unlikely to get much return on your investment by simply installing Copilot.'

Many companies have concluded that the most value from gen AI lies in combining the astounding language, reasoning, and general knowledge of large language models (LLMs) with their own proprietary content. That combination is necessary, for example, in enterprise-level gen AI applications in customer service, marketing, legal, and software development, and in product/service offerings for customer use. The most common approach by far to adding a company's own content is 'retrieval-augmented generation,' or RAG, which combines information retrieved from traditional tools like databases and document repositories with the language capabilities of LLMs. It is used because submitting vast quantities of content in a prompt is often technically infeasible or expensive. While technically complex, the RAG approach is quite feasible and yields accurate responses to user prompts, provided the unstructured data used in RAG is of high quality.

Therein lies the problem. Unstructured data is frequently of poor quality—obsolete, duplicative, inaccurate, and poorly structured, among other problems. Most companies have not done well with the quality of structured data, even though that data is used every day to complete business transactions and understand performance. Unstructured data is tougher. The last serious attempts to address unstructured data date to the 1990s and 2000s, when knowledge management was popular; most efforts proved unsuccessful. Surveys confirm that most leaders are aware that poor quality hampers their generative AI efforts, and that they did not have a strong focus on unstructured data until the advent of gen AI.

Of course, the best way to deal with data quality problems is to prevent them. Over the long term, companies serious about AI must develop programs to do just that. Those who create documents, for example, need to learn to evaluate them for quality and tag key elements. But this will take much concerted effort and is no help in the short term. To get value from gen AI, companies need to build RAG applications using high-quality unstructured data. Our objective in this article is to help them do so by summarizing the most important data problems and the best approaches for dealing with them, both human and technical.

What Is Data Quality for Unstructured Data?

High-quality data, whether structured or unstructured, only results from focused effort, led by active, engaged leadership, some well-placed professionals, clear management responsibilities for all who touch data, and a relentless commitment to continuous improvement. Absent these things, chances are high your data is not up to snuff. As coach and advisor Alex Borek of the Data Masterclass told us, 'When AI doesn't work, it often reveals flaws in the human system.' Indeed, the best estimate is that 80% of the time spent on an AI project will be devoted to data. For example, a Philippines-based Morgan Stanley team spent several years curating research reports in advance of their AI @ Morgan Stanley assistant project. The curation started before gen AI became widespread, which allowed Morgan Stanley to get their application into production more quickly.
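To make the retrieval step concrete, here is a minimal, illustrative Python sketch of the 'R' in RAG. It uses a simple bag-of-words cosine similarity from the standard library in place of the embedding models and vector databases a production system would use, and the sample documents and prompt format are hypothetical.

    # Minimal illustration of the retrieval half of retrieval-augmented generation (RAG).
    # Bag-of-words cosine similarity stands in for a real embedding search.
    import math
    import re
    from collections import Counter

    def tokenize(text: str) -> Counter:
        return Counter(re.findall(r"[a-z0-9]+", text.lower()))

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in set(a) & set(b))
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def retrieve(query: str, documents: list, k: int = 2) -> list:
        q = tokenize(query)
        return sorted(documents, key=lambda d: cosine(q, tokenize(d)), reverse=True)[:k]

    # Hypothetical proprietary content; in practice these are the curated documents
    # whose quality the rest of this article is about.
    docs = [
        "Contract with supplier Acme: payment terms net 30, renewal date January 2025.",
        "Customer support note: how to reset a password for the billing portal.",
        "Marketing brief for the spring product launch campaign.",
    ]
    question = "What are the payment terms in the Acme contract?"
    context = retrieve(question, docs)
    # A real application would now send the question plus this retrieved context to an LLM.
    print("Answer using only this context:\n" + "\n".join(context) + "\nQuestion: " + question)

If the retrieved documents are obsolete, duplicated, or wrong, the generated answer will be too, which is why the data quality steps described below matter.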
To work effectively, RAG requires documents directly relevant to the problem at hand, a minimum of duplicated content, and information in those documents that is complete, accurate, and up to date. Further, as Seth Earley of Earley Information Science noted, 'You must supply context, as much as possible, if a LLM is to properly interpret these documents.' Unstructured data does not come pre-loaded with the needed context, and gen AI is largely incapable of determining what is the best information to solve a particular business question or issue. It is also not good at 'entity resolution,' i.e., answering questions like: Is this 'John Smith' in document A, about customers, the same person as 'J. A. Smith' in document B, about vendors, and/or the same person as 'Mr. J Smith' in document C, about a donation to our foundation?

Most structured data is defined in a data model or dictionary. This provides some context and helps reduce the John Smith/J. A. Smith problem described above. For structured data it is easier to find the data desired, learn who is responsible for it, and understand what the data means. As John Duncan, the head of data governance for the large car retailer CarMax, told us, unstructured data requires the same clarity about data ownership, producers, consumers, and stewards. It also benefits from standards for data quality thresholds, data lineage, access controls, and retention durations. This metadata is typically included in a data dictionary. With unstructured data, however, there is seldom a dictionary. Often there is no centralized management of such content; documents are stored haphazardly, using different naming conventions, on different computers or cloud providers across the company. There is often no common definition of a content type; an ad agency data leader confessed that there is no common definition of a 'pitch' across the agency. Finally, unstructured documents were often developed for a different purpose than feeding gen AI. A contract with a supplier, for example, was not designed to provide insight about the level of risk in a supplier relationship. We believe it was the late management thinker Charles Handy who observed, 'Information gathered for one purpose is seldom useful for another.'
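The entity-resolution difficulty described above is easy to see in code. The sketch below is a rough illustration, not a production matcher: it normalizes names and compares them with fuzzy string matching from Python's standard library, and the sample mentions, the normalization rules, and the 0.6 threshold are all assumptions.

    # Rough sketch of entity resolution: do two name mentions from different
    # documents plausibly refer to the same person? Real systems also compare
    # addresses, employers, dates, etc.; this only normalizes and fuzzily matches names.
    from difflib import SequenceMatcher

    def normalize(name: str) -> str:
        # Lowercase, drop periods and common honorifics.
        name = name.lower().replace(".", "")
        parts = [p for p in name.split() if p not in {"mr", "mrs", "ms", "dr"}]
        return " ".join(parts)

    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

    # Hypothetical mentions drawn from the three documents in the example above.
    mentions = [
        ("document A (customers)", "John Smith"),
        ("document B (vendors)", "J. A. Smith"),
        ("document C (donations)", "Mr. J Smith"),
    ]
    THRESHOLD = 0.6  # assumed cutoff; tuning it is part of the curation work
    for i in range(len(mentions)):
        for j in range(i + 1, len(mentions)):
            (src1, n1), (src2, n2) = mentions[i], mentions[j]
            score = similarity(n1, n2)
            verdict = "possible match" if score >= THRESHOLD else "unlikely match"
            print(f"{n1!r} ({src1}) vs {n2!r} ({src2}): {score:.2f} -> {verdict}")

String similarity alone cannot say whether these mentions really are the same person; that judgment still depends on surrounding metadata and, often, a human reviewer.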
An Unstructured Data Quality Process

Fortunately, there are several approaches and tools that can help improve unstructured data. We recommend that all AI projects follow a disciplined process, building quality in wherever they can. Such a process must embrace the following steps:

1. Address unstructured data quality issues problem by problem, not all at once.
2. Identify and assess the data to be used.
3. Assemble the team to address the problem.
4. Prepare the data, employing both humans (step 4a) and AI (step 4b), when possible.
5. Develop your application and validate that it works.
6. Support the application and try to inculcate quality in content creation processes.

1. Address unstructured data quality issues problem by problem, not all at once. There is too much unstructured data to improve all at once. Project leaders should ensure that all involved agree on the problem or opportunity to be addressed. Priorities should be based first on the value to the business of solving the problem, and second on the feasibility and cost of developing a solution—including data quality improvement. Areas of the business with data that is already of reasonably good quality should receive higher priority. That's the approach Nelson Frederick Bamundagea, IT director at the truck refrigeration servicing company W&B Services, has taken. His knowledge retrieval application for service technicians uses the schematics of some 20 refrigerator models provided by two manufacturers. These have been used over and over, and the vocabulary employed is relatively small, providing for a high level of trust. More generally, Alex Borek advises companies to 'first look to highly curated data products whenever possible.'

2. Identify and assess the data to be used. Since the data is critical to the success of an LLM-based knowledge project, it's important to assess the data at an early stage. There is a human tendency to include any possibly relevant document in a RAG application, but companies should adopt a healthy skepticism and a 'less is more' philosophy: absent a good reason to trust a document or content source, don't include it. It's not likely that experts can evaluate every document, but they can dig deeply into a small sample. Are the sample documents loaded with errors, internal inconsistencies, or confusing language—or are they relatively clean? Use your judgment: keep clean data and proceed with caution; toss bad data. If the data are in horrible shape or you can't find enough good data, reconsider the project.

3. Assemble the team to address the problem. Given the need for some human curation of unstructured data, it's unlikely that a small team of experts can accomplish the necessary work. In addition, those who work with the data day to day typically have a better idea of what constitutes high quality and how to achieve it. In many cases, then, it may be helpful to make data quality improvement a broadly participative project. For example, at Scotiabank, the contact center organization needed to curate documents for a customer chatbot. Center staff took responsibility for the quality of its customer support knowledge base and ensured that each document fed into the RAG-based chatbot was clear, unique, and up to date.

4a. Prepare the data. If you've concluded—and you should—that there must be a human contribution to improving unstructured data quality, this is the time to engage it. That contribution could include having a stakeholder group agree on the key terms—e.g., 'contract,' 'proposal,' 'technical note,' and 'customer' might be examples—and how they are defined. Document this work in a business glossary. This can be hard: consistent with 'Davenport's Law'—first stated more than 30 years ago—the more an organization knows or cares about a particular information element, the less likely it is to have a common term and meaning for it. This issue can be overcome through 'data arguing' (not data architecture) until the group arrives at a consensus. And, of course, if there is a human curation role, this is the time to begin it. That entails deciding which documents or content sources are the best for a particular issue, 'tagging' them with metadata, and scoring content on such attributes as recency, clarity, and relevance to the topic. Morgan Stanley has a team of 20 or so analysts based in the Philippines that scores each document along 20 different criteria.
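As a rough illustration of the kind of document scoring described in step 4a, the sketch below rates a document on three of the attributes mentioned above: recency, a crude clarity proxy, and relevance to required topic tags. The attributes, weights, and thresholds are assumptions for illustration; a real rubric, like Morgan Stanley's roughly 20 criteria, comes from the stakeholder group and subject-matter experts.

    # Illustrative scorecard for human-led curation (step 4a). Criteria and weights
    # are assumptions; real rubrics are defined by the business.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Document:
        title: str
        last_updated: date
        word_count: int
        topic_tags: set

    def score(doc: Document, needed_tags: set, today: date) -> float:
        age_years = (today - doc.last_updated).days / 365
        recency = max(0.0, 1.0 - age_years / 3)                    # full credit when fresh, fading over ~3 years
        clarity = 1.0 if 200 <= doc.word_count <= 5000 else 0.5    # crude proxy for usable length
        relevance = len(doc.topic_tags & needed_tags) / max(1, len(needed_tags))
        return round(0.3 * recency + 0.2 * clarity + 0.5 * relevance, 2)

    doc = Document("Supplier risk policy", date(2024, 11, 1), 1800, {"contracts", "risk"})
    print(score(doc, needed_tags={"contracts", "risk", "suppliers"}, today=date(2025, 5, 28)))

Scores like these make it possible to rank candidate documents and decide which ones are trustworthy enough to feed into the RAG application.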
4b. Prepare the data with AI. Gen AI itself is quite good at some of the tasks needed to prepare unstructured data for other gen AI applications. It can, for example, summarize content, classify documents by category of content, and tag key data elements. CarMax, for instance, uses generative AI to translate different car manufacturers' specific language for describing automotive components and capabilities into a standard set of descriptions that is meant to enable a consumer to compare cars across manufacturers. Gen AI can also create good first drafts of 'knowledge graphs,' or displays of what information is related to other information in a network. Knowledge graphs improve the ability of RAG to find the best content quickly. Gen AI is also good at de-duplication, the process of finding exact or very similar copies of documents and eliminating all but one (a small sketch of this appears after step 6). Since RAG approaches pick documents based on specified criteria, these criteria (recency, authorship, etc.) can be changed ('re-ranked') to give higher weight to certain ones in content search. We have found, however, that AI is not particularly good at identifying the best document in a set of similar ones, even when given a grading rubric. For that and for reviewing tasks, humans are still necessary. As a starting point, we recommend using humans to figure out what needs to be done, and machines to increase scale and decrease unit cost in execution.

5. Develop your application and validate that it works. The process of developing a RAG model from curated data involves several rather technical steps, best performed by qualified technical staff. Even after having done everything possible to prepare the data, it is essential that organizations rigorously test their RAG applications before putting them into production. This is particularly important for applications that are highly regulated or involve human well-being. One way to validate the model involves identifying '50 Golden Questions,' in which a team identifies questions that the RAG application must get right, determines whether it does so, and acts accordingly. The validation should be repeated over time, given that foundational LLMs change often. When a European insurer tried to validate its system for knowledge on how to address claims, it found that customers' contracts, call center personnel, the company's knowledge base, and the claims department often disagreed. This led the company to clarify that the Claims Department 'owned' the answer, i.e., served as the 'gold standard.' Changes to the chatbot, customer contracts, and call center training followed.

6. Support the application and try to inculcate ongoing quality. As a practical matter, no RAG application will enjoy universal acclaim the minute it is deployed. The application can still hallucinate, there will be bugs to work out, and there will be some level of customer dissatisfaction. We find that some users discount a well-performing RAG application if it makes any errors whatsoever. Finally, changes will be needed as the application is used in new ways. So plan for ongoing quality management and improvement. The plan should include:

  • Some amount of 'qualified human in the loop,' especially in more critical situations
  • A means to trap errors, conduct root cause analysis, and prevent them going forward
  • Efforts to understand who the customers of the RAG application are, how they use it, and how they define 'good'
  • Feedback to managers responsible for the business processes that create unstructured data, to improve future inputs; content creators can be trained, for example, to create higher-quality documents, tag them as they create them, and add them to a central repository.
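The de-duplication mentioned in step 4b can be approximated with very little code. The sketch below is a minimal illustration rather than a production pipeline: it compares documents with character shingles and Jaccard similarity, whereas large collections would typically use MinHash or embeddings, and the 0.7 threshold and sample documents are assumptions.

    # Near-duplicate detection sketch (step 4b): keep one copy of documents that
    # are almost identical. Shingle size and threshold are illustrative assumptions.
    def shingles(text: str, n: int = 5) -> set:
        text = " ".join(text.lower().split())
        return {text[i:i + n] for i in range(max(1, len(text) - n + 1))}

    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if (a | b) else 0.0

    def deduplicate(docs: list, threshold: float = 0.7) -> list:
        kept = []
        for doc in docs:
            if all(jaccard(shingles(doc), shingles(k)) < threshold for k in kept):
                kept.append(doc)
        return kept

    # Hypothetical content: the second document is a lightly edited copy of the first.
    docs = [
        "Travel policy: employees must book flights through the approved portal.",
        "Travel policy: all employees must book flights through the approved portal.",
        "Expense policy: receipts are required for purchases over 50 dollars.",
    ]
    print(deduplicate(docs))  # keeps one travel policy and the expense policy

Removing near-duplicates before retrieval keeps a RAG application from returning several slightly different versions of the same answer.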
It appears that RAG, which combines proprietary content with LLMs, is going to be with us for the foreseeable future. It is one of the best ways to gain value from gen AI if one can feed the models high-quality unstructured data. We know there is a lot here, but it is certainly within reach of those who buckle down and do the work.

Scammers can exploit your data from just 1 ChatGPT search

Fox News

7 hours ago

  • Health
  • Fox News

ChatGPT and other large language models (LLMs) have become amazing helpers for everyday tasks. Whether it's summarizing complex ideas, designing a birthday card or even planning your apartment's layout, you can get impressive results with just a simple prompt. But as helpful as these AI tools are, their convenience comes with hidden risks, especially when it comes to your personal privacy.

If you haven't tried an LLM like ChatGPT before, here's the gist: They're advanced language processors that chat with you through text. No special commands or coding needed, just type what you want to know or do, and they respond. For example, asking "Why is the conclave kept secret?" will get you a detailed explanation in seconds. This simplicity is what makes LLMs so useful, but it also opens the door to risks. Instead of harmless questions, someone could ask for a detailed profile on a person, and the model might generate a surprisingly thorough report. While these tools have safeguards and often refuse certain requests, clever phrasing can sometimes bypass those limits. Unfortunately, it doesn't take much effort for someone to use ChatGPT to gather personal information about you. But don't worry, there are ways to protect yourself from this kind of digital snooping.

These AI tools don't just pull information out of thin air. They need to access real online sources to work. In other words, your data is already out there on the internet; AI tools just make it easier to find. And if you look at the sources, most of the information you wouldn't want shared online, like your address, relatives and so on, is made public by people-search sites. Other sources include social media, like LinkedIn and Facebook, as well as public databases. But none of them are as invasive as people-search sites. Let's see what you can do to limit how much of your information is exposed online. To effectively safeguard your personal information from being exposed or misused, it's important to follow these steps and adopt key precautions. Although not all people-search sites are required to offer it, most of them do provide an option to request an opt-out. But that comes with a few challenges.

Where to start: Identifying people-search sites that expose your personal information

There are hundreds of people-search sites registered in the U.S. Going through each and every one is, realistically speaking, impossible. You'll need to narrow your search somehow.

Using AI tools: How to find and list data broker sites with your personal data

Use AI tools and ask them to run a deep search on yourself. It's not a perfect or complete solution; LLMs tend to shorten their responses to save resources. But it will give you a good starting point, and if you keep asking for more results, you should be able to put together a decent list of people-search sites that might have your profile.

Submitting opt-out requests: How to remove your information from people-search sites

Now, you'll have to go through each of these people-search sites and submit opt-out requests. These usually aren't complicated, but they're definitely time-consuming. The opt-out forms are typically located at the bottom of each site, in the footer. The naming can vary from "Do Not Sell My Info" to "Opt-Out" or something similar. Each people-search site is a little different. Opting out of every people-search site that exposes your personal information is a mammoth task. I've discussed it in more detail here. Alternatively, you can automate this process.
Data removal services are real time-savers and energy savers when it comes to protecting your personal information online. The way these services work is simple. They send hundreds of data removal requests on your behalf to people-search sites you might not even know exist but that are still exposing your data. And with some services, the process goes even further than that.

People-search sites aren't the only places exposing your personal information without your knowledge. In fact, they're just a small part of the larger data broker industry. There are marketing, health, financial, risk and many other types of data brokers trading your information. Your data is a commodity they use to make a profit, often without you even realizing it. Data removal services have taken on the challenge of fighting this threat to your privacy. They continuously scour the web, looking for your profiles. This way, you can just sign up and let them handle the work in the background. And here's the best part: They take about 10 minutes to set up, roughly the same time it takes to opt out of a single people-search site. And that's it. The removal process is entirely automated and requires little to no effort on your part. With this small initial effort, you may save yourself from privacy-related risks, including scams and even identity theft.

But what if your data is exposed on a people-search site not covered by any data removal service? Every removal service out there has limitations on the number of data brokers it supports. It's not about a lack of effort; it's mostly because brokers are generally unwilling to cooperate, to put it mildly. But there's a way to address this issue without going back to manual opt-outs. The top names in the data removal industry now offer custom removals. In simple terms, this means you can ask them to remove your personal information from websites not currently covered by their standard plans. The catch is that you'll need to do the research yourself and point out which sites are exposing your data. It's not as convenient as having everything done automatically, but it's a relatively minor inconvenience for the sake of your online privacy.

Being mindful of the information you provide to AI tools is the first and most crucial step in protecting your privacy. Don't share sensitive details such as your full name, home address, financial information, passwords or any other personal data that could be used to identify or harm you or others.

Protecting your AI accounts from unauthorized access helps keep your interactions and data safe. Always use strong, unique passwords and consider using a password manager to generate and store those complex passwords. Enable multifactor authentication whenever possible to add an extra layer of security. Regularly review your account permissions and remove access for any devices or applications you no longer use. Get more details about my best expert-reviewed password managers of 2025 here.

Adjusting your social media privacy settings can greatly reduce the amount of personal information available to data brokers. Make your profiles private, limit who can see your posts and be selective about accepting friend or follower requests. Periodically audit your privacy settings and remove any unnecessary third-party app connections to further minimize your exposure.

Protecting your devices with strong antivirus software adds an essential layer of security against digital threats. Antivirus programs defend against malware, phishing and identity theft.
Be sure to choose reputable software and regularly update it to stay protected against the latest threats. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices.

Using a dedicated email address for opt-outs and online sign-ups helps reduce spam and protects your primary email. This practice also makes it easier to track which sites and services have your contact information. If your alias email becomes compromised, you can quickly change it without disrupting your main accounts. See my review of the best secure and private email services here. Get a free scan to find out if your personal information is already out on the web.

Large language models like ChatGPT are transforming how we work, create and solve problems, but they also introduce new privacy and security risks that can't be ignored. As these tools become more powerful and accessible, it's up to each of us to take proactive steps to safeguard our personal information and understand where our data might be exposed. By staying alert and making use of available privacy tools, we can enjoy the benefits of AI while minimizing the risks.

Should OpenAI be held legally accountable when its tools are used to collect or expose private data without consent? Let us know your experience or questions by writing to us; your story could help someone else stay safe. For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter and follow Kurt on his social channels.

Copyright 2025. All rights reserved.

Why Alphabet (GOOGL) Stock Is Up Today

Yahoo

a day ago

  • Business
  • Yahoo

Shares of online advertising giant Alphabet (NASDAQ:GOOGL) jumped 2.7% in the afternoon session after the major indices rebounded (Nasdaq +2.0%, S&P 500 +2.0%) as President Trump postponed the planned 50% tariff on European Union imports, shifting the start date to July 9, 2025. Companies with substantial business ties to Europe likely felt some relief, as the delay reduced near-term cost pressures and preserved cross-border demand. After the initial pop, the shares cooled to $172.69, up 2.5% from the previous close. Is now the time to buy Alphabet? Access our full analysis report here; it's free.

Alphabet's shares are extremely volatile and have had 39 moves greater than 2.5% over the last year. In that context, today's move indicates the market considers this news meaningful but not something that would fundamentally change its perception of the business.

The previous big move we wrote about was 6 days ago, when the stock gained 5.2% after the company rolled out new AI tools at its annual I/O show. The event was well received by Wall Street, as Google laid out a detailed plan to maintain its dominance in the web search market. The announcement also appeared well timed, given recent concerns that some internet users were leaning more on LLMs like ChatGPT as their default search product. What really stood out during the show was "AI Mode," a chatbot baked into Google's search engine. This feature makes it faster for users to reach Google's AI search and was rolled out immediately to US users.

But it wasn't just about search. Google also threw out a bunch of shiny new AI toys. These include Google Beam, a new video chat platform built from the ground up with AI. Then there's a collaboration with Warby Parker to launch AI-powered smart glasses running on Google's Android XR. And for content creators, Google teased Veo 3, its next-gen AI video generator. In terms of monetization, Google announced a new 'AI Ultra' subscription plan that could fetch up to $249.99 per month, which shows it is heavily invested in enterprises and high-end users who tend to have staying power and could make the products catch on faster.

Citi analyst Ronald Josey, who has a Buy rating and a $200 price target, summed up the Street's sentiment: "With AI Mode going live in the U.S., the Gemini App reaching 400M MAUs [monthly active users] (vs. ~350M in March), and AI Overviews delivering query growth, we believe GOOGL's product cadence and user adoption of newer tools is ramping which should provide some stability for Search growth." Overall, the announcements indicate Alphabet is not just keeping up in the AI race but is well positioned in this fast-evolving market.

Alphabet is down 8.8% since the beginning of the year, and at $172.69 per share, it is trading 16.3% below its 52-week high of $206.38 from February 2025. Investors who bought $1,000 worth of Alphabet's shares 5 years ago would now be looking at an investment worth $2,432.

Today's young investors likely haven't read the timeless lessons in Gorilla Game: Picking Winners in High Technology because it was written more than 20 years ago, when Microsoft and Apple were first establishing their supremacy. But if we apply the same principles, then enterprise software stocks leveraging their own generative AI capabilities may well be the Gorillas of the future.
So, in that spirit, we are excited to present our Special Free Report on a profitable, fast-growing enterprise software stock that is already riding the automation wave and looking to catch the generative AI wave next.
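For readers who want to check the arithmetic behind the figures quoted above, here is a quick sketch using only the numbers in the article; the annualized-return conversion is an added calculation, not a figure stated in the piece.

    # Quick check of the quoted figures: distance from the 52-week high and the
    # implied annualized return on $1,000 invested five years ago.
    price = 172.69
    high_52_week = 206.38
    print(f"Below 52-week high: {(high_52_week - price) / high_52_week:.1%}")  # ~16.3%

    start_value, end_value, years = 1_000, 2_432, 5
    annualized = (end_value / start_value) ** (1 / years) - 1
    print(f"Implied annualized return: {annualized:.1%}")  # ~19.5% per year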

Deutsche Bank collaborates with FinaXai to adopt AI for tokenised funds servicing

Finextra

a day ago

  • Business
  • Finextra

Deutsche Bank is collaborating with finaXai, a Singapore-based AI company co-founded by researchers from Nanyang Technological University, Singapore (NTU Singapore) and the National University of Singapore (NUS). The collaboration will assess a significant extension of Project DAMA 2, the public-permissioned multi-chain asset servicing pilot focused on transforming how asset managers can more efficiently manage and service tokenised funds and their investors.

finaXai is also part of the Fincubator programme at the Asian Institute of Digital Finance (AIDF), a university-level institute in NUS jointly established by the Monetary Authority of Singapore, the National Research Foundation and NUS to support the development of fintech startups focused on deep technology and digital finance innovation.

The team will explore integrating Machine Learning and Large Language Models (LLMs) into asset servicing workflows, leveraging explainable AI to help asset managers plan and execute fund lifecycle activities with greater speed, transparency, and precision.

Anand Rengarajan, Deutsche Bank's Head of Securities Services APAC & MEA and Global Head of Sales, said: 'We are delighted to work with a diverse mix of experts from the academic and innovation worlds to contribute to the future of tokenised assets. Through this collaboration, we unify leading research-backed solutions with industry applications to reduce complexity and boost AI's explainability and integration with DLT. With this work, we seek to better understand and anticipate our asset manager clients' needs as they explore the future of tokenisation.'

Dr. Erik Cambria, Co-Founder of finaXai and Professor, College of Computing and Data Science, NTU, said: "finaXai is excited to partner with Deutsche Bank in applying cutting-edge explainable AI techniques to the domains of finance and asset tokenisation. This collaboration bridges academic research with real-world applications and explores how explainable AI can converge with tokenisation to streamline processes and enhance the accessibility, adoption, and management of digital assets. Ultimately, this initiative lays the groundwork for synergies between trustworthy AI and asset managers, enabling faster, more precise planning and execution of digital asset and fund lifecycle activities."

Dr. Gianmarco Mengaldo, Co-Founder of finaXai and Assistant Professor at the Department of Mechanical Engineering, College of Design and Engineering, NUS, said: 'finaXai operates at the exciting intersection of AI applications in finance and comprises researchers from various institutions who share a common interest in exploring the frontiers of AI in this field. This collaboration demonstrates the significant impact that can be achieved when scientific interests align. The team based in NUS focuses on bridging human understanding with AI methods through explainable AI, and integrating existing knowledge into AI-driven solutions when beneficial. It will be exciting to see how our work, complemented with the work by NTU counterparts, can aid in dealing with complex real-world tasks in the financial sector.'

Representatives involved in the project include:

Deutsche Bank:
  • Boon Hiong Chan, Asia Pacific Head of Securities & Technology Advocacy and Industry Applied Innovation Lead
  • Jie Yi (Jaelynn) Lee, Digital Product Owner

finaXai:
  • Dr. Erik Cambria, Co-Founder of finaXai and Professor, College of Computing and Data Science, NTU
  • Dr. Gianmarco Mengaldo, Co-Founder of finaXai and Assistant Professor at the Department of Mechanical Engineering, College of Design and Engineering, NUS
  • Dr. Mao Rui, Technology Lead and Research Scientist, College of Computing and Data Science, NTU
  • Keane Ong, Technology Scientist and PhD candidate in Digital Financial Technology, specialising in Machine Learning and Natural Language Processing at NUS
  • Federico Cristina, CFA, Business Lead, finaXai
