Morgan Stanley and Bank of America are focusing AI power on tools to make employees more efficient
The financial industry's approach to artificial intelligence reveals considerable pragmatism.
Popular notions of generative AI, guided by the explosive growth of OpenAI's ChatGPT, often center on consumer-facing chatbots. But financial institutions are leaning more heavily on internal AI tools that streamline day-to-day tasks.
This requires training programs and user-experience design that help a bank's entire organization — from relationship bankers who manage high-value accounts to associates — understand the latest AI technology.
From AI classification to AI generation
Banks have long used traditional AI and machine learning techniques for various functions, such as customer service bots and decision algorithms that provide a faster-than-human response to market swings.
But modern generative AI is different from prior AI/ML methods, and it has its own strengths and weaknesses. Hari Gopalkrishnan, Bank of America's chief information officer and head of retail, preferred, small business, and wealth technology, said generative AI is a new tool that offers new capabilities, rather than a replacement for prior AI efforts.
"We have a four-layer framework that we think about with regards to AI," Gopalkrishnan told Business Insider.
The first layer is rules-based automation that takes actions based on specific conditions, like collecting and preserving data about a declined credit card transaction when one occurs. The second is analytical models, such as those used for fraud detection. The third layer is language classification, which Bank of America used to build Erica, a virtual financial assistant, in 2016.
"Our journey of Erica started off with understanding language for the purposes of classification," Gopalkrishnan said. But the company isn't generating anything with Erica, he added: "We're classifying customer questions into buckets of intents and using those intents to take customers to the right part of the app or website to help them serve themselves."
The fourth layer, of course, is generative AI.
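The third layer's classify-and-route pattern can be sketched in a few lines. This is an illustrative toy, not Bank of America's actual system: the intent names, keyword rules, and app routes below are hypothetical, and a production assistant like Erica would use a trained language model rather than keyword matching. The point is that nothing is generated; a question is simply mapped to a bucket of intent and then to a destination.

```python
# Toy sketch of intent classification and routing (layer three).
# Intents, keywords, and routes are invented for illustration.
INTENT_ROUTES = {
    "check_balance": "/accounts/balance",
    "dispute_charge": "/support/disputes",
    "transfer_funds": "/payments/transfer",
}

KEYWORDS = {
    "check_balance": {"balance", "how much", "available"},
    "dispute_charge": {"dispute", "fraud", "unauthorized"},
    "transfer_funds": {"transfer", "send money", "move"},
}

def classify_intent(question: str) -> str:
    """Pick the intent whose keywords best match the question."""
    q = question.lower()
    scores = {
        intent: sum(1 for kw in kws if kw in q)
        for intent, kws in KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

def route(question: str) -> str:
    """Return an app destination; no text is generated, only routed."""
    return INTENT_ROUTES.get(classify_intent(question), "/help")
```

A real system replaces the keyword scorer with a trained classifier, but the contract stays the same: question in, destination out.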
Koren Picariello, a Morgan Stanley managing director and its head of wealth management generative AI, said Morgan Stanley took a similar path. Throughout the 2010s, the company used machine learning for several purposes, like seeking investment opportunities that meet the needs and preferences of specific clients. Many of these techniques are still used.
"Historically, I was working in analytics, data, and innovation within the wealth space. In that space, Morgan Stanley did focus on the more traditional AI/ML tools," Picariello told BI. "Then in 2022, we started a dialogue with OpenAI before they became a household name. And that began our generative-AI journey."
How banks are deploying AI
Given the history, it'd be reasonable to think banks would turn generative-AI tools into new chatbots that more or less serve as better versions of Bank of America's Erica, or as autonomous financial advisors. But the most immediate changes instead came to internal processes and tools.
Morgan Stanley's first major generative-AI tool, Morgan Stanley Assistant, was launched in September 2023 for employees such as financial advisors and support staff who help clients manage their money. Powered by OpenAI's GPT-4, it was designed to give responses grounded in the company's library of over 100,000 research reports and documents.
The second tool, Morgan Stanley Debrief, was launched in June. It helps financial advisors create, review, and summarize notes from meetings with clients.
"It's kind of like having the most informed person at Morgan Stanley sitting next to you," Picariello said. "Because any question you have, whether it was operational in nature or research in nature, what we've asked the model to do is source an answer to the user based on our internal content."
Bank of America is pursuing similar applications, including a call center tool that saves customer associates' time by transcribing customer conversations in real time, classifying the customer's needs, and generating a summary for the agent.
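The call-center pattern described above is a staged pipeline: transcript in, classified need and draft summary out, with a human agent as the consumer. The sketch below stubs every stage — the speech-to-text step is omitted, the classifier is keyword-based, and the summarizer is a truncation placeholder for a generative model; the category names are hypothetical, not Bank of America's.

```python
# Hedged sketch of a transcribe -> classify -> summarize call-center
# pipeline. All stages are stubs; names and categories are illustrative.
from dataclasses import dataclass

@dataclass
class CallSummary:
    need: str      # classified customer need
    summary: str   # draft summary for the human agent

NEED_KEYWORDS = {
    "card_issue": {"card", "declined"},
    "loan_inquiry": {"loan", "mortgage"},
}

def classify_need(transcript: str) -> str:
    """Tag the customer's need from the transcript text."""
    t = transcript.lower()
    for need, kws in NEED_KEYWORDS.items():
        if any(kw in t for kw in kws):
            return need
    return "general"

def summarize(transcript: str, max_words: int = 12) -> str:
    """Stub summarizer: a real system would call a generative model."""
    words = transcript.split()
    return " ".join(words[:max_words]) + ("..." if len(words) > max_words else "")

def process_call(transcript: str) -> CallSummary:
    """Run the pipeline; output goes to the agent, never straight to the customer."""
    return CallSummary(need=classify_need(transcript), summary=summarize(transcript))
```

The design point is that each stage is separately replaceable — swap the stub summarizer for a model call without touching classification.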
Keeping humans in the loop
The decision to deploy generative AI internally first, rather than externally, was in part due to generative AI's most notable weakness: hallucinations.
In generative AI, a hallucination is an inaccurate or nonsensical response to a prompt, like when Google Search's AI infamously recommended that home chefs use glue to keep cheese from sliding off a pizza.
Banks are wary of consumer-facing AI chatbots that could make similar errors about bank products and policies.
Deploying generative AI internally lessens the concern. It's not used to autonomously serve a bank's customers and clients but to assist bank employees, who have the option to accept or reject its advice or assistance.
Bank of America provides AI tools that can help relationship bankers prep for a meeting with a client, but it doesn't aim to automate the bank-client relationship, Gopalkrishnan told BI.
Picariello said Morgan Stanley takes a similar approach to using generative AI while maintaining accuracy. The company's AI-generated meeting summaries could be automatically shared with clients, but they're not. Instead, financial advisors review them before they're sent.
Training the finance workforce for AI
Bank of America and Morgan Stanley are also training bank employees on how to use generative-AI tools, though their strategies diverge.
Gopalkrishnan said Bank of America takes a top-down approach to educating senior leadership about the potential and risks of generative AI.
About two years ago, he told BI, he helped top-level staff at the bank become "well aware" of what's possible with AI. He said having the company's senior leadership briefed on generative AI's perks, as well as its limitations, was important to making informed decisions across the company.
Meanwhile, Morgan Stanley is concentrating on making the company's AI tools easy to understand.
"We've spent a lot of time thinking through the UX associated with these tools, to make them intuitive to use, and taking users through the process and cycle of working with generative AI," Picariello said. "Much of the training is built into the workflow and the user experience." For example, Morgan Stanley's tools can advise employees on how to reframe or change a prompt to yield a better response.
For now, banks are focusing AI initiatives on identifying and automating increasingly complex and nuanced tasks within their organizations rather than developing one-off applications targeted at the customer experience.
"We try to approach problems not as a technology problem but as a business problem. And the business problem is that Bank of America employees all perform lots of tasks in the company," said Gopalkrishnan. "The opportunity is to think more holistically, to understand the tasks and find the biggest opportunities so that five and 10 years from now, we're a far more efficient organization."