Anthropic rolls out financial AI tools to target large clients

Yahoo | 16-07-2025
This story was originally published on Banking Dive.
Artificial intelligence startup Anthropic has rolled out Claude tools tailored for financial analysts to meet demand from enterprise clients, the company said Tuesday.
While announcing the launch Tuesday in New York, Anthropic unveiled the latest features designed to answer queries for large financial workloads and integrate with financial data providers like Daloopa, Databricks, FactSet, Snowflake, PitchBook and S&P Global.
Claude – Anthropic's AI assistant and a family of large language models – has been built to 'turbocharge' the work that analysts and fund managers are already doing, including due diligence, market research, competitive benchmarking, financial modeling, investment memos and pitch decks, said Jonathan Pelosi, global head of industry for financial services at Anthropic.
Claude's latest version for financial services, which covers banks, insurance companies and hedge funds, includes a new feature that helps analysts build financial models directly in Microsoft Excel and generate downloadable Excel files with complete financial models. It can also create PowerPoint decks.
'These are some of the ways we brought these features together to provide a one-stop shop to make life so much easier for these analysts to do meaningful work,' Pelosi told Banking Dive.
With the advancement of AI, many financial institutions have launched AI assistants to help employees serve Wall Street clients. In January, Goldman Sachs introduced a generative AI assistant to its bankers, traders and asset managers.
Anthropic's offering is geared toward helping financial analysts do their work more easily and efficiently, according to Pelosi.
The AI startup's latest Claude version can pull data from multiple sources across platforms, with links to source materials for cross-checking, and deliver analysis in anywhere from a few seconds to under 30 minutes, depending on the query, the firm said. That helps save time and 'uncover insights' that might otherwise be missed, Nicholas Lin, a member of the product team, said at an Anthropic event Tuesday.
Anthropic offers assistance during the implementation process to bridge the gap between theoretical capabilities and practical applications. However, Claude requires a 'human in the loop,' Pelosi pointed out, adding that the platform is not intended for autonomous financial decision-making or to provide stock recommendations that users follow blindly.
'When we onboard someone, we insist on training, understanding what the limitations are of these models, putting those guardrails in place so people treat it like it is a helpful technology, not a replacement for them,' he said.
Anthropic aims to onboard more partners to quickly build out Claude's ecosystem of financial data, Pelosi said.
Anthropic has tailored solutions for individual financial institutions, customizing them to the regulatory environment of each specific line of business, Pelosi noted.
The platform uses citations to provide visibility into information sources. For example, Claude supports uploading institution-specific documentation, such as state-level insurance regulations, and can be tailored to specific contexts such as know-your-customer workflows.
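To illustrate what that citation-backed grounding can look like at the API level, here is a minimal sketch against Anthropic's publicly documented Messages API via the Python SDK. It is illustrative only, not part of the financial-services product itself; the model ID, the regulation excerpt and the prompt are placeholder assumptions.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Placeholder: an institution-specific document, e.g. a state-level insurance regulation excerpt.
    regulation_text = "Excerpt of a state-level insurance regulation ..."

    response = client.messages.create(
        model="claude-opus-4-20250514",  # assumed model ID; substitute the current one
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {
                    # Supply the document as a plain-text document block with citations enabled,
                    # so the reply can point back to specific passages.
                    "type": "document",
                    "source": {"type": "text", "media_type": "text/plain", "data": regulation_text},
                    "citations": {"enabled": True},
                },
                {"type": "text", "text": "Which filing requirements apply to a new annuity product?"},
            ],
        }],
    )

    # Each returned content block may carry citation metadata referencing spans of the uploaded document.
    for block in response.content:
        print(block)

In practice the uploaded documents would be the institution's own filings or policies, and the citations returned with the response are what give analysts the source links described above.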
Claude provides customized solutions across compliance, research and enterprise AI adoption through leading consultancies, including Deloitte, KPMG, PwC, Slalom, Tribe AI and Turing.
However, the AI tool has certain limitations that Pelosi pointed out, such as its inability to generate videos or images and its intentional avoidance of consumer-focused capabilities.
Claude's existing clients include AIA Labs, Commonwealth Bank of Australia, American International Group, and Norges Bank Investment Management, Pelosi said.
Co-founded in 2021 by Dario Amodei and Daniela Amodei, who previously worked at OpenAI, Anthropic said in March that it completed a fundraising deal that valued the company at $61.5 billion – up from roughly $16 billion around a year ago, The New York Times reported.
In May, Anthropic launched Claude Opus 4 and Claude Sonnet 4 – two hybrid models – extending the features to its Claude Enterprise clients with enhanced coding and reasoning capabilities while responding more precisely to clients' queries.
'We are just getting started, and we are excited to see what people do with this new solution,' Pelosi said. 'This is just the beginning.'

Related Articles

D.C. Unleashes ChatGPT and Gemini -- AI Now Federal-Grade

Yahoo

The U.S. government just gave AI its biggest public-sector runway yet. On Tuesday, the General Services Administration quietly approved OpenAI, Google (NASDAQ:GOOG), and Anthropic as official AI vendors through its Multiple Award Schedule, essentially a federal fast pass for enterprise software. That means agencies can now bypass red tape and deploy tools like ChatGPT, Gemini, and Claude with pre-negotiated terms. For tech investors, this isn't just a signal; it's a megaphone. These models, once limited to pilot programs and national security experiments, could now be embedded across departments like Treasury, Personnel Management, and more.

The timing is no coincidence. Just days ago, President Donald Trump signed a set of executive orders aimed at reshaping how AI is procured, including a clause requiring federal agencies to avoid ideological bias in language models. GSA officials were quick to clarify that the approved vendors weren't picked as winners; they were simply the first to clear the contracting hurdles. Still, the implications are hard to ignore. This opens up a multi-billion-dollar channel for enterprise AI, and the early players could gain embedded access across dozens of federal workflows.

Historically, GSA has used its scale to hammer down prices from software giants like Adobe and Salesforce. Similar pricing pressure could now apply to LLM vendors, benefiting adoption, if not margins. Multiple agencies are already outlining use cases, from fraud detection and grant reviews to policy comment summarization and internal chatbot assistants. The Office of Personnel Management wants to use AI to digest tens of thousands of public responses on regulatory changes, a task that once took months. But as OPM Director Scott Kupor put it: 'We're probably missing people who are super conversant with very modern, AI-related stuff.' Translation: software alone won't solve it. Hiring and execution will matter.

Still, this latest GSA approval could be a meaningful step toward widespread institutional adoption, and a new chapter in the AI enterprise race. This article first appeared on GuruFocus.

Chatbots Can Trigger a Mental Health Crisis. What to Know About "AI Psychosis"

Time Magazine

AI chatbots have become a ubiquitous part of life. People turn to tools like ChatGPT, Claude, Gemini, and Copilot not just for help with emails, work, or code, but for relationship advice, emotional support, and even friendship or love. But for a minority of users, these conversations appear to have disturbing effects.

A growing number of reports suggest that extended chatbot use may trigger or amplify psychotic symptoms in some people. The fallout can be devastating and potentially lethal. Users have linked their breakdowns to lost jobs, fractured relationships, involuntary psychiatric holds, and even arrests and jail time. At least one support group has emerged for people who say their lives began to spiral after interacting with AI.

The phenomenon—sometimes colloquially called 'ChatGPT psychosis' or 'AI psychosis'—isn't well understood. There's no formal diagnosis, data are scarce, and no clear protocols for treatment exist. Psychiatrists and researchers say they're flying blind as the medical world scrambles to catch up.

What is 'ChatGPT psychosis' or 'AI psychosis'?

The terms aren't formal ones, but they have emerged as shorthand for a concerning pattern: people developing delusions or distorted beliefs that appear to be triggered or reinforced by conversations with AI systems.

Psychosis may actually be a misnomer, says Dr. James MacCabe, a professor in the department of psychosis studies at King's College London. The term usually refers to a cluster of symptoms—disordered thinking, hallucinations, and delusions—often seen in conditions like bipolar disorder and schizophrenia. But in these cases, 'we're talking about predominantly delusions, not the full gamut of psychosis.'

The phenomenon seems to reflect familiar vulnerabilities in new contexts, not a new disorder, psychiatrists say. It's closely tied to how chatbots communicate; by design, they mirror users' language and validate their assumptions. This sycophancy is a known issue in the industry. While many people find it irritating, experts warn it can reinforce distorted thinking in people who are more vulnerable.

Who's most at risk?

While most people can use chatbots without issue, experts say a small group of users may be especially vulnerable to delusional thinking after extended use. Some media reports of AI psychosis note that individuals had no prior mental health diagnoses, but clinicians caution that undetected or latent risk factors may still have been present.

'I don't think using a chatbot itself is likely to induce psychosis if there's no other genetic, social, or other risk factors at play,' says Dr. John Torous, a psychiatrist at the Beth Israel Deaconess Medical Center. 'But people may not know they have this kind of risk.' The clearest risks include a personal or family history of psychosis, or conditions like schizophrenia or bipolar disorder.

Those with personality traits that make them susceptible to fringe beliefs may also be at risk, says Dr. Ragy Girgis, a professor of clinical psychiatry at Columbia University. Such individuals may be socially awkward, struggle with emotional regulation, and have an overactive fantasy life, Girgis says.

Immersion matters, too. 'Time seems to be the single biggest factor,' says Stanford psychiatrist Dr. Nina Vasan, who specializes in digital mental health. 'It's people spending hours every day talking to their chatbots.'

What people can do to stay safe

Chatbots aren't inherently dangerous, but for some people, caution is warranted. First, it's important to understand what large language models (LLMs) are and what they're not. 'It sounds silly, but remember that LLMs are tools, not friends, no matter how good they may be at mimicking your tone and remembering your preferences,' says Hamilton Morrin, a neuropsychiatrist at King's College London. He advises users to avoid oversharing or relying on them for emotional support.

Psychiatrists say the clearest advice during moments of crisis or emotional strain is simple: stop using the chatbot. Ending that bond can be surprisingly painful, like a breakup or even a bereavement, says Vasan. But stepping away can bring significant improvement, especially when users reconnect with real-world relationships and seek professional help.

Recognizing when use has become problematic isn't always easy. 'When people develop delusions, they don't realize they're delusions. They think it's reality,' says MacCabe.

Friends and family also play a role. Loved ones should watch for changes in mood, sleep, or social behavior, including signs of detachment or withdrawal. 'Increased obsessiveness with fringe ideologies' or 'excessive time spent using any AI system' are red flags, Girgis says.

Dr. Thomas Pollak, a psychiatrist at King's College London, says clinicians should be asking patients with a history of psychosis or related conditions about their use of AI tools, as part of relapse prevention. But those conversations are still rare. Some people in the field still dismiss the idea of AI psychosis as scaremongering, he says.

What AI companies should be doing

So far, the burden of caution has mostly fallen on users. Experts say that needs to change.

One key issue is the lack of formal data. Much of what we know about ChatGPT psychosis comes from anecdotal reports or media coverage. Experts widely agree that the scope, causes, and risk factors are still unclear. Without better data, it's hard to measure the problem or design meaningful safeguards.

Many argue that waiting for perfect evidence is the wrong approach. 'We know that AI companies are already working with bioethicists and cyber-security experts to minimize potential future risks,' says Morrin. 'They should also be working with mental-health professionals and individuals with lived experience of mental illness.' At a minimum, companies could simulate conversations with vulnerable users and flag responses that might validate delusions, Morrin says.

Some companies are beginning to respond. In July, OpenAI said it has hired a clinical psychiatrist to help assess the mental-health impact of its tools, which include ChatGPT. The following month, the company acknowledged times its 'model fell short in recognizing signs of delusion or emotional dependency.' It said it would start prompting users to take breaks during long sessions, develop tools to detect signs of distress, and tweak ChatGPT's responses in 'high-stakes personal decisions.'

Others argue that deeper changes are needed. Ricardo Twumasi, a lecturer in psychosis studies at King's College London, suggests building safeguards directly into AI models before release. That could include real-time monitoring for distress or a 'digital advance directive' allowing users to pre-set boundaries when they're well.

Dr. Joe Pierre, a psychiatrist at the University of California, San Francisco, says companies should study who is being harmed and in what ways, and then design protections accordingly. That might mean nudging troubling conversations in a different direction or issuing something akin to a warning label.

Vasan adds that companies should routinely probe their systems for a wide range of mental-health risks, a process known as red-teaming. That means going beyond tests for self-harm and deliberately simulating interactions involving conditions like mania, psychosis, and OCD to assess how the models respond.

Formal regulation may be premature, experts say. But they stress that companies should still hold themselves to a higher standard. Chatbots can reduce loneliness, support learning, and aid mental health. The potential is vast. But if harms aren't taken as seriously as hopes, experts say, that potential could be lost. 'We learned from social media that ignoring mental-health harm leads to devastating public-health consequences,' Vasan says. 'Society cannot repeat that mistake.'

Anthropic Unveils More Powerful AI Model Ahead of Rival GPT-5 Release

Bloomberg

Anthropic is releasing a new version of its most powerful artificial intelligence model as rival OpenAI nears the long-awaited launch of its GPT-5 system. Anthropic is set to announce Tuesday the release of Opus 4.1, an update to its high-end AI model that the company claims is more capable at coding, research and data analysis. The new offering is also better at fielding complex multi-step problems, the company said, positioning it as a more effective AI agent.
