
What is deep learning?
This content has been selected, created and edited by the Finextra editorial team based upon its relevance and interest to our community.
Artificial intelligence (AI), a concept once confined to the realms of science fiction, has entered the mainstream in the last several years – with the release of OpenAI's first ChatGPT model, and of the competing engines that followed, transforming innumerable industries.
Often uttered in conjunction with 'AI' is the term 'deep learning', though its nature and use cases may be less clear to the layman. While AI is a broad field encompassing a machine's capacity to perform tasks historically necessitating human intelligence, deep learning is a subset of machine learning (ML) which leverages artificial neural networks with multiple layers to learn from complex datasets.
According to the third survey on AI and ML in UK financial services by the Bank of England and the Financial Conduct Authority (FCA), published in November 2024, 75% of firms are already using AI, with a further 10% planning to use it over the next three years. Foundation models account for 17% of all AI use cases, supporting anecdotal evidence of the rapid adoption of this complex type of machine learning.
But what's the draw? How exactly does deep learning work? What is its relevance to financial services? This short read explores the phenomenon of deep learning: its structures, models, applications, and broad use cases.
The inner workings
Deep learning is a specific kind of ML, which leverages artificial neural networks – rather than hand-crafted rules or simpler statistical algorithms – to glean patterns from complex or unstructured datasets.
Neural networks are inspired by the structure of the human brain: they comprise a collection of nodes, each acting as its own processing unit. By passing statistically significant data from one layer of nodes to the next, neural networks can train themselves to recognise patterns in information and make predictions, with little human intervention.
Neural networks are typically split into three layers:
Input layer
The nodes in this layer receive and process the input data – be it structured, unstructured, multimedia, or plain text.
Hidden layer
This layer is sometimes composed of hundreds of sub-layers. It receives data from the input layer and processes it at different levels, analysing the problem from numerous perspectives and adapting its behaviour as new learnings are gleaned.
Output layer
The nature of the output layer depends on the system and its goal. A 'yes' or 'no' (binary) output model, for example, would only require two nodes. More complex systems, such as Generative AI (GenAI), would require a highly complex set of nodes, in order to deliver nuanced outputs and unstructured information – videos, images, or analyses.
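To make this three-layer structure concrete, below is a minimal sketch in Python, assuming the PyTorch library is available; the layer sizes and dummy data are illustrative assumptions, not drawn from any real system. It stacks an input layer, two hidden layers, and a two-node 'yes'/'no' output layer, then runs a single forward pass.

```python
import torch
import torch.nn as nn

# A minimal feed-forward network: input layer -> hidden layers -> output layer.
# All sizes here are illustrative assumptions.
model = nn.Sequential(
    nn.Linear(20, 64),   # input layer: 20 input features feed 64 hidden nodes
    nn.ReLU(),
    nn.Linear(64, 32),   # a second hidden layer
    nn.ReLU(),
    nn.Linear(32, 2),    # output layer: two nodes for a binary 'yes'/'no' decision
)

dummy_batch = torch.randn(8, 20)    # 8 example records with 20 features each
logits = model(dummy_batch)         # pass the data through every layer in turn
predictions = logits.argmax(dim=1)  # pick the more likely of the two outputs
print(predictions)
```

In a real system the network's weights would be tuned on labelled data via backpropagation; the untrained model above simply illustrates how information flows from the input layer, through the hidden layers, to the output.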
The unique architecture of deep learning technology makes it the delivery engine behind many everyday AI systems, such as chatbots and code generators, digital assistants, fraud detectors, and facial recognition.
The models
Deep learning systems can be arranged in many different models. The three most common are:
Convolutional neural networks (CNNs)
CNNs excel at identifying objects in images – even when they are obscured or distorted. As such, image recognition and processing are the domain of CNNs.
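As a brief illustration, here is a minimal CNN sketch in Python with PyTorch (an assumption; the filter counts, image size, and ten output categories are invented for the example). It shows the characteristic pattern of convolution and pooling layers that lets CNNs pick out visual features before a final classification layer.

```python
import torch
import torch.nn as nn

# A minimal convolutional network for classifying small RGB images.
# Layer sizes and the number of categories are illustrative assumptions.
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn 16 local filters over the image
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsample, keeping the strongest responses
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                   # score each of 10 hypothetical categories
)

fake_images = torch.randn(4, 3, 32, 32)  # a batch of four dummy 32x32 RGB images
scores = cnn(fake_images)
print(scores.shape)                      # torch.Size([4, 10])
```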
Deep reinforcement learning
This kind of model is most often deployed in robotics or gaming, enabling an agent to learn how to behave in an environment by interacting with it and receiving rewards or punishments.
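To give a flavour of that reward-driven loop, here is a small, hypothetical sketch in Python with PyTorch: a toy 'corridor' environment in which an agent learns, through rewards and penalties, to walk towards a goal. The environment, network size, and hyperparameters are all invented for illustration and are far simpler than anything used in real robotics or gaming.

```python
import random
import torch
import torch.nn as nn

# Toy 'corridor' environment: the agent starts at position 0 and must reach
# position 4. Actions: 0 = step left, 1 = step right. Everything here is an
# invented illustration, not a real system.
N_STATES, GOAL, MAX_STEPS = 5, 4, 20

def step(pos, action):
    pos = max(0, min(N_STATES - 1, pos + (1 if action == 1 else -1)))
    reward = 1.0 if pos == GOAL else -0.05   # reward reaching the goal, penalise wandering
    return pos, reward, pos == GOAL

def one_hot(pos):
    return torch.eye(N_STATES)[pos]

# A small Q-network: maps a state to an estimated value for each action.
q_net = nn.Sequential(nn.Linear(N_STATES, 16), nn.ReLU(), nn.Linear(16, 2))
optimiser = torch.optim.Adam(q_net.parameters(), lr=0.01)
gamma, epsilon = 0.9, 0.2

for episode in range(300):
    pos, done, steps = 0, False, 0
    while not done and steps < MAX_STEPS:
        # Epsilon-greedy: mostly exploit the network, sometimes explore at random.
        if random.random() < epsilon:
            action = random.randint(0, 1)
        else:
            action = q_net(one_hot(pos)).argmax().item()

        new_pos, reward, done = step(pos, action)

        # One-step Q-learning target: reward plus discounted best future value.
        with torch.no_grad():
            target = reward + (0.0 if done else gamma * q_net(one_hot(new_pos)).max())
        prediction = q_net(one_hot(pos))[action]
        loss = (prediction - target) ** 2

        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
        pos, steps = new_pos, steps + 1

# After training, the policy should prefer stepping right (action 1) on the way to the goal.
print([q_net(one_hot(p)).argmax().item() for p in range(N_STATES)])
```

Real deep reinforcement learning systems add components such as experience replay and target networks, but the core idea is the same: act, receive feedback, and update a neural network's estimates of which actions are worthwhile.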
Recurrent neural networks (RNNs)
RNNs are very good at understanding sentences and phrases in a contextual manner. They are often used for speech recognition, translations, or to generate text. This capability is otherwise known as natural language processing (NLP).
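As a minimal, assumption-laden sketch of the idea in Python with PyTorch, the snippet below reads a short sentence word by word, carries a running 'memory' through a recurrent layer, and produces a score for two made-up sentiment classes. The tiny vocabulary is invented; a real NLP system would be trained on large text corpora.

```python
import torch
import torch.nn as nn

# A minimal recurrent network that reads a sentence word by word, keeping a
# running 'memory' of what it has seen, then classifies the whole sentence.
vocab = {"<pad>": 0, "the": 1, "service": 2, "was": 3, "great": 4, "terrible": 5}

embedding = nn.Embedding(len(vocab), 8)          # turn word ids into dense vectors
rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
classifier = nn.Linear(16, 2)                    # two outputs: negative / positive

def classify(sentence):
    ids = torch.tensor([[vocab[w] for w in sentence.split()]])  # batch of one sentence
    vectors = embedding(ids)                     # shape: (1, sequence length, 8)
    _, final_hidden = rnn(vectors)               # final hidden state summarises the sequence
    return classifier(final_hidden[-1])          # scores for each class

print(classify("the service was great"))   # untrained, so scores are meaningless for now
```

Because the model is untrained, the scores mean nothing until the weights are fitted to labelled examples; the point is simply that the recurrent layer lets earlier words influence how later words are interpreted.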
Other deep learning models include autoencoders and variational autoencoders; generative adversarial networks (GANs); diffusion models; and transformer models.
The applications
Thanks to the sheer variety of models available to deep learning technology, it boasts countless applications – particularly within financial services. Here is a (non-exhaustive) list of uses, to give an idea of its scope:
Computer vision – mines information and insights from images and videos; for quicker and more secure customer onboarding
Speech recognition – interprets and analyses spoken language; for better Know-Your-Customer (KYC) processes
NLP – extracts meaning from text and documents; for streamlined back-office processes
Recommendation engines – tracks end-user activity and develops product or service recommendations; for hyper-personalisation and enhanced customer care
GenAI – creates new content and communications; for enhancing the capabilities of developers or providing customers with AI agents
Digital labour – performs the heavy lifting in operations; for robotic workforce support and augmentation
The use cases
As we have seen, deep learning can be applied to numerous areas within financial services. To bring these examples to life, here is a real-world use case in which deep learning helped one organisation analyse financial data for equity trades.
In 2021, the independent algorithmic trading technology provider Pragma released execution algorithms with deep-learning capabilities. Pragma initiated the project to see whether deep neural networks could be applied to an execution algorithm's micro-trading engine – which governs decisions such as the routing, sizing, pricing, and timing of orders – and deal with complex, multi-dimensional trading challenges more effectively.
Following a beta launch in 2020, Pragma managed several controlled trials with its clients. It observed a significant improvement in execution quality, with an average shortfall improvement of 33% to 50% across billions of traded shares.
The future of deep learning
If we accept that technological innovation plotted against time follows an 'S' curve, then we can safely predict that we have many more years – perhaps decades – until the full potential of deep learning and AI is laid bare.
The key to leveraging this transformative technology in the meantime is overcoming the implementation challenges – and ensuring our vast volumes of input data are clean, accurate, representative, accountable, and of demonstrable integrity. When we look back on the developments in AI that are underway today, we will see the origins of these principles in our data protection regulations. Yet, to ensure the output of deep learning remains effective, equitable, and sound, they will need to be applied scrupulously.
