
Goldman Sachs is piloting its first autonomous coder in major AI milestone for Wall Street
The bank is testing an autonomous software engineer from artificial intelligence startup Cognition that is expected to soon join the ranks of the firm's 12,000 human developers, Goldman tech chief Marco Argenti told CNBC.
The program, named Devin, became known in technology circles last year with Cognition's claim that it had created the world's first AI software engineer. Demo videos showed the program operating as a full-stack engineer, completing multi-step assignments with minimal intervention.
"We're going to start augmenting our workforce with Devin, which is going to be like our new employee who's going to start doing stuff on the behalf of our developers," Argenti said this week in an interview.
"Initially, we will have hundreds of Devins [and] that might go into the thousands, depending on the use cases," he said.
It's the latest indicator of the dizzying speed at which AI is being adopted in the corporate world. Just last year, Wall Street firms including JPMorgan Chase and Morgan Stanley were rolling out cognitive assistants based on OpenAI models to get employees acquainted with the technology.
Now, the arrival of agentic AI on Wall Street — referencing programs like Devin that don't just help humans with tasks like summarizing documents or writing emails, but instead execute complex multi-step jobs like building entire apps — signals a much larger shift, with greater potential rewards.
Tech giants including Microsoft and Alphabet have said AI is already producing about 30% of the code on some projects, and Salesforce CEO Marc Benioff said last month that AI handles as much as 50% of the work at his company.
At Goldman Sachs, one of the world's top investment banks, this more powerful form of AI has the potential to lift worker productivity to as much as three or four times what previous AI tools delivered, according to Argenti.
Devin will be supervised by human employees and will handle jobs that engineers often consider drudgery, like updating internal code to newer programming languages, he said.
Goldman is the first major bank to use Devin, according to Cognition, which was founded in late 2023 by a trio of engineers and whose staff is reportedly stocked with champion coders.
In March, the startup doubled its valuation to nearly $4 billion just a year after the release of Devin. The company counts Peter Thiel and Joe Lonsdale, the prominent venture capitalists and Palantir co-founders, among its investors.
Goldman doesn't own a stake in Cognition, according to a person with knowledge of the matter who declined to be identified speaking about the bank's investments.
The bank's move could spark a fresh round of anxiety on Wall Street and beyond about job cuts as a result of AI.
Executives at companies from Amazon to Ford have grown more candid about what AI will mean for hiring plans. Banks around the world will cut as many as 200,000 jobs in the next three to five years as they implement AI, Bloomberg's research arm said in January.
For his part, Argenti — who joined Goldman from Amazon in 2019 — charted out a vision for the near future that he called a "hybrid workforce" where humans and AI coexist.
"It's really about people and AIs working side-by-side," Argenti said. "Engineers are going to be expected to have the ability to really describe problems in a coherent way and turn it into prompts … and then be able to supervise the work of those agents."
While software development is the role that lends itself most readily to reinforcement learning, the type of training used to make AI models smarter, other roles at a bank aren't far from being automated, according to Argenti.
"Those models are basically just as good as any developer, it's really cool," Argenti said. "So I think that will serve as a proof point also to expand it to other places."

Related Articles


NBC News
Musk threatens 'immediate' legal action against Apple over alleged antitrust violations
Elon Musk on Monday threatened Apple with legal action over alleged antitrust violations related to rankings of the Grok AI chatbot app, which is owned by his artificial intelligence startup xAI.
'Apple is behaving in a manner that makes it impossible for any AI company besides OpenAI to reach #1 in the App Store, which is an unequivocal antitrust violation. xAI will take immediate legal action,' Musk wrote in a post on his social media platform X.
Apple declined to comment on Musk's threat.
'Why do you refuse to put either X or Grok in your 'Must Have' section when X is the #1 news app in the world and Grok is #5 among all apps? Are you playing politics?' Musk said in another post.
Apple last year tied up with OpenAI to integrate its ChatGPT chatbot into iPhone, iPad, Mac laptop and desktop products. Musk at that time had said that 'If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation.'
Prior to his legal threats against Apple, Musk had celebrated Grok surpassing Google as the fifth top free app on the App Store. When contacted by CNBC, xAI did not immediately respond to a request for further information on a potential lawsuit.
CNBC confirmed that ChatGPT was ranked No. 1 in the top free apps section of the American iOS store, and was the only AI chatbot in Apple's 'Must-Have Apps' section. The App Store also featured a link to download OpenAI's new flagship AI model, GPT-5, at the top of its 'Apps' section.
OpenAI on Thursday announced GPT-5, its latest and most advanced large-scale AI model, following xAI's release of its newest chatbot, Grok 4, last month.
Musk has an ongoing feud with ChatGPT maker OpenAI, which he co-founded in 2015. The billionaire stepped down from its board in 2018, four years after saying that AI was 'potentially more dangerous than nukes.' He is now suing the Microsoft-backed startup, and its CEO Sam Altman, alleging they abandoned OpenAI's founding mission to develop artificial intelligence 'for the benefit of humanity broadly.'
Robert Keele, who headed the legal department at xAI, announced last week that he had left the company to spend more time with his family. In his announcement, Keele also acknowledged 'daylight between our worldviews' with Musk.
In response to Musk's antitrust threats against Apple, OpenAI CEO Sam Altman said in an X post: 'This is a remarkable claim given what I have heard alleged that Elon does to manipulate X to benefit himself and his own companies and harm his competitors and people he doesn't like.'
This is not the first time Apple has been challenged on antitrust grounds. In a landmark case, the Department of Justice last year sued the company over charges of running an iPhone ecosystem monopoly. In June, a panel of judges also denied an emergency application from Apple to halt the changes to its App Store resulting from a ruling that the company could no longer charge a commission on payment links inside its apps, nor tell developers how the links should look.


Forbes
AWS Launches Open-Weight OpenAI Models, Eroding Microsoft Exclusivity
AWS made a splash last week when it announced day-of-launch availability of two new open-weight models from OpenAI for use on Amazon Bedrock and Amazon SageMaker. This means that AWS customers can now use the models via these managed services, especially for building new functionality with AI agents. The move also represents the latest development in the ongoing competition among the big cloud service providers to bring the best of enterprise AI to their customers, although this one feels a little different because of the traditionally tight relationship between Microsoft and OpenAI.
The same day these new models launched, I had a chance to talk through this news with Shaown Nandi, who leads the technical deal team worldwide for AWS, and I want to share my takeaways from that conversation and my perspective on what it all means. (Note: AWS is an advisory client of my firm, Moor Insights & Strategy.)
What 'Open-Weight' Means And Why It's Important
Open-weight models have parameters that are visible to the people using them, though the underlying training data isn't visible like it would be in a fully open-source model. Still, having access to the parameters means that AWS customers can fine-tune the models for their specific use cases. The two models just launched — gpt-oss-120b and gpt-oss-20b — are the first open-weight models that OpenAI has released since GPT-2, way back in 2019. In terms of intelligence level, OpenAI says that they are positioned between GPT-3 and GPT-4.
For AWS customers, these two new entrants join open-weight models from Meta, Mistral and other makers that are already supported on Bedrock. Customers will be able to run the new models, edit them and fine-tune them within the AWS toolset and infrastructure — without interacting with OpenAI directly.
You can bet that many of the use cases already being set in motion by AWS users involve agentic workflows. The two new models are text-based, not multi-modal, which makes them well-suited for agentic use cases like browsing the web or navigating software. Nandi also assured me that the new models will have full access to the same Bedrock infrastructure capability as any other model. (As usual with a model introduction like this, AWS is launching it region by region, in this case starting with the U.S. West.)
How AWS And OpenAI Benefit From This Linkage
First, this helps AWS continue its long tradition of (trying to) offer the widest range of choice to its customers. In fact, I see that outlook as being baked into the Amazon ethos, going all the way back to the company's main web storefront. Nandi summarized his view of the AWS AI mindset when he told me, 'Offering customers choice is something we've been ultra-focused on, probably since we launched Bedrock back in 2023.'
I talk with a lot of CIOs, CTOs and CEOs week in and week out — especially about their technology purchases and rationales — and I can confirm that these people want optionality, which AWS is definitely bringing with the new OpenAI models. When I shared this observation with Nandi, he pointed out that having the choice to work with any of the top AI models via AWS also provides 'air cover' for executives' AI decisions. Your board of directors wants to know that you're working with the right providers; when you can tell them that you're working with the biggest CSP using the biggest variety of open models, that's a fruitful path into a conversation about what you're enabling in terms of innovation and productivity.
So what's in it for OpenAI?
Nandi can't speak for another company, of course, and OpenAI is well-known for not answering questions. But I can share an industrywide perspective that's grounded in just how quickly the enterprise AI environment is shifting. All of the model creators — independents like OpenAI and hyperscalers like AWS alike — can see how fast things are changing. New models are dropping all the time, and disruptive events such as the debut of DeepSeek at the start of this year force everyone to reconsider the best ways to build and train models efficiently and cost-effectively.
OpenAI set off the AI gold rush when it launched ChatGPT late in 2022, but it's hardly the only game in town now. It has compelling technology, but there are enterprise customers that are tooled for AWS rather than Azure, and if those customers can't easily access OpenAI via AWS, they may turn to some other model provider. Conversely, being so readily available via Bedrock and SageMaker could benefit OpenAI in terms of building out its ecosystem, meeting customers where they already live — and simply moving fast.
How AWS Customers Stand To Benefit
The customers that are already getting under the hood of this thing are sure to be connecting existing applications to the new models to see how they perform. Naturally, they'll be looking for ways to improve performance and drive costs down. According to the press release accompanying the launch, AWS says that the new models 'are 10x more price-performant than the comparable Gemini model, 18x more than DeepSeek-R1 and 7x more than the comparable OpenAI o4 model.' I'd like to judge those numbers for myself against real-world field results; it's early days yet, but I'm sure I'll have more insights to validate or challenge these claims in the coming months.
Setting aside the potential cost advantages, there are significant operational benefits I have no question about. First, being able to access OpenAI models through AWS tools you're already using means that you don't need to have a commercial agreement with OpenAI. Nothing against OpenAI, mind you — just that it's way simpler to call up one of these models in SageMaker or Bedrock and try it out for a proof-of-concept when you don't also have to go through a vendor-onboarding process or a set of engineering steps to tap directly into OpenAI's technical ecosystem.
That also extends to your AI devs who are doing the actual work. They don't need to learn a new platform or test out how well it works with their existing tools, nor do they need to rebuild their applications. Rather, they can stay within their current tools to access the new models and get down to work. If I were the engineering leader running an AI shop already tooled out for AWS, I would welcome this.
Nandi confirmed that his customers have been calling for this. Every month or two, they see new models being launched in the market that they're curious to try because they think it might save them money, improve latency or bring some other benefit. They like using Bedrock to try out new models — and to run existing models from Meta, DeepSeek, Amazon itself and so on. And yet, Nandi told me, "'You're missing OpenAI' — that's what they would say." Now that gap has been addressed.
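To make the proof-of-concept point concrete, here is a minimal sketch of what calling one of the new open-weight models through Amazon Bedrock's Converse API might look like using boto3. The model identifier and Region shown are assumptions for illustration, not details confirmed in the announcement; the exact model ID and Region availability (initially U.S. West, per AWS) should be checked in the Bedrock model catalog.

```python
# Hypothetical proof-of-concept call to an open-weight OpenAI model on
# Amazon Bedrock using boto3's Converse API. The model ID and Region are
# assumptions; verify both in the Bedrock console before running.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")

response = client.converse(
    modelId="openai.gpt-oss-120b-1:0",  # assumed identifier for gpt-oss-120b
    messages=[
        {
            "role": "user",
            "content": [{"text": "Draft a one-paragraph summary of open-weight models."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

# The Converse API returns the assistant's reply under output.message.content.
print(response["output"]["message"]["content"][0]["text"])
```

Because the call goes through Bedrock, credentials, billing and data handling stay inside the customer's existing AWS account rather than a separate OpenAI relationship, which is the simplification the article describes.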
AWS, OpenAI And Microsoft
So what does this mean for Microsoft? The company is doing pretty well, to the tune of $101 billion in profit for the fiscal year that ended on June 30. As part of that success, AI has helped drive Azure revenue to $75 billion in the past year. (AWS is above $100 billion annually.) Azure also provides a range of OpenAI models — including hot-off-the-presses GPT-5 variants — that goes well beyond the two open-weight models AWS just launched.
That said, Microsoft's long-running relationship with OpenAI is complex, and at times it has been vexed. I don't want to read too much into the availability of a couple of slick new open-weight models via the biggest CSP AI platform in the world. Yet the lock-in that Microsoft enjoyed until last week for all things OpenAI was the first angle that popped into my head when I heard the AWS news.
At the moment — and surely for some years to come — there is plenty of AI business to go around. As Nandi pointed out during our conversation, 'Agentic is super-early.' More than that, for all the enterprises building focused agents for different use cases, 'They're not looking for one general-purpose model for agents.' Rather, they want to find different models that supply the right price/performance for each use case. While they can do that with a single AI service provider — AWS, Azure, Google Cloud, Oracle, IBM — they probably won't find it with a single AI model provider.
AWS will of course be adding features to support the new open-weight OpenAI models in the weeks and months to come. And there's no question these are nice additions to the AWS toolbox. Maybe six months or a year from now, Microsoft's loss of exclusivity with OpenAI in this instance won't seem like a big deal. But I do wonder whether there could be a scenario where we look back on this as the first chink in the Microsoft–OpenAI armor.


Bloomberg
Trump Mocks Goldman, Says Bank Made 'Bad Prediction' on Tariffs
President Donald Trump assailed David Solomon, the CEO of Goldman Sachs Group Inc., on Tuesday, saying the bank had made a 'bad prediction' about the impact of his sweeping tariff agenda on markets and consumer costs. 'They made a bad prediction a long time ago on both the Market repercussion and the Tariffs themselves, and they were wrong, just like they are wrong about so much else,' Trump said on his social media platform.