Latest news with #NateRush

Business Insider
6 days ago
- Business
- Business Insider
AI coding tools could make experienced software engineers less productive, a new study suggests
AI code editors have quickly become a mainstay of software development, employed by tech giants such as Amazon, Microsoft, and Google. In an interesting twist, a new study suggests that AI tools might actually be slowing experienced developers down.

Experienced developers using AI coding tools took 19% longer to complete issues than those not using generative AI assistance, according to a new study from Model Evaluation & Threat Research (METR). Even after completing the tasks, participants couldn't accurately gauge their own productivity, the study said: the average AI-assisted developer still believed their productivity had increased by 20%.

How the study was set up

METR's study recruited 16 developers with large, open-source repositories that they had worked on for years. The developers were randomly assigned to two groups: those allowed to use AI coding assistance and those who weren't. The AI-assisted coders could choose which vibe-coding tool they used. Most chose Cursor with Claude 3.5/3.7 Sonnet. Business Insider reached out to Cursor for comment.

Developers without AI spent over 10% more time actively coding, the study said. The AI-assisted coders spent over 20% more time reviewing AI outputs, prompting AI, waiting on AI, or being idle.

A 'really surprising' result — but it's important to remember how fast AI tools are progressing

METR researcher Nate Rush told BI he uses an AI code editor every day. While he didn't make a formal prediction about the study's results, Rush said he jotted down positive productivity figures he expected the study to reach. He remains surprised by the negative end result — and cautions against taking it out of context.

"Much of what we see is the specificity of our setting," Rush said, explaining that developers without the participants' 5-10 years of expertise would likely see different results. "But the fact that we found any slowdown at all was really surprising."

Steve Newman, serial entrepreneur and cofounder of Google Docs, described the findings in a Substack post as "too bad to be true," but after more careful analysis of the study and its methodology, he found it credible. "This study doesn't expose AI coding tools as a fraud, but it does remind us that they have important limitations (for now, at least)," Newman wrote.

The METR researchers said they found evidence for multiple contributors to the productivity slowdown. Over-optimism was one likely factor: before completing the tasks, developers predicted AI would decrease implementation time by 24%.

For skilled developers, it may still be quicker to do what you know well. The METR study found that AI-assisted participants slowed down on the issues they were more familiar with. They also reported that their level of experience made it more difficult for AI to help them.

AI also may not be reliable enough yet to produce clean and accurate code. AI-assisted developers in the study accepted less than 44% of the generated code and spent 9% of their time cleaning AI outputs.

Ruben Bloom, one of the study's developers, posted a reaction thread on X, noting that coding assistants have developed considerably since he participated in February. "I think if the result is valid at this point in time, that's one thing, I think if people are citing in another 3 months' time, they'll be making a mistake," Bloom wrote.

METR's Rush acknowledges that the 19% slowdown is a "point-in-time measurement" and that he'd like to study the figure over time.
Rush stands by the study's takeaway that AI productivity gains may be more individualized than expected. "A number of developers told me this really interesting anecdote, which is, 'Knowing this information, I feel this desire to use AI more judiciously,'" Rush said. "On an individual level, these developers know their actual productivity impact. They can make more informed decisions."
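
To make the gap between predicted, perceived, and measured productivity concrete, here is a minimal arithmetic sketch in Python. The percentages (a 24% predicted speedup, a roughly 20% perceived speedup, and a measured 19% slowdown) come from the coverage above; the 60-minute baseline task time is an assumed number used only for illustration, not a figure from the METR study.

```python
# Illustrative arithmetic only. The percentages are the figures reported in the
# article; the 60-minute baseline is an assumed, hypothetical task time.

baseline_minutes = 60.0  # assumed time to finish a task without AI

predicted = baseline_minutes * (1 - 0.24)  # what developers forecast beforehand
perceived = baseline_minutes * (1 - 0.20)  # how fast they felt afterward
measured = baseline_minutes * (1 + 0.19)   # what the study actually measured

print(f"predicted with AI: {predicted:.1f} min")   # 45.6 min
print(f"perceived with AI: {perceived:.1f} min")   # 48.0 min
print(f"measured with AI:  {measured:.1f} min")    # 71.4 min

# Share of the measured AI-assisted time that developers failed to account for
gap = (measured - perceived) / measured
print(f"perception gap: about {gap:.0%} of the measured time")  # ~33%
```

Under that assumption, a task developers felt took about 48 minutes actually took about 71 minutes, an underestimate of roughly a third of the measured time.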


Phone Arena
6 days ago
- Business
- Phone Arena
Do you know what is slowing down senior coders? As it turns out, it's AI
Picture this: the very thing that was created to help you is now your performance bottleneck. Quite the twist: a new study wreaks havoc on the idea that artificial intelligence supercharges the work of seasoned software developers. Instead of speeding them up, using AI tools actually slowed down experienced coders when they tackled projects they already knew well.

The study was carried out by AI research nonprofit METR and focused on veteran developers working with Cursor, a popular AI coding assistant, on open-source projects they were familiar with. Before diving into the study, the developers expected the AI would save them time, guessing it could cut task completion by nearly a quarter. Even after using the AI, many still felt it had made them roughly 20% faster. But the hard data told a different story: AI extended the time needed to finish tasks by 19%.

Joel Becker and Nate Rush, who led the research, admitted they were caught off guard by the outcome. Rush, before the study began, had predicted the AI would double productivity – and that's what many of us would think. The findings, however, cast doubt on the widespread assumption that AI tools reliably boost the productivity of highly skilled, high-cost software engineers, an assumption that many companies have chosen to invest in heavily.

The study comes as some tech leaders, including Dario Amodei, CEO of AI company Anthropic, have suggested that AI could eliminate as much as half of all entry-level white-collar jobs within the next five years. Previous studies have added to the hype, reporting significant productivity gains: one claimed AI sped up coding by 56%, while another found developers using AI completed 26% more tasks in the same amount of time.

METR's study tells a very different story. The boost in productivity doesn't hold up when developers are working on large, complex codebases they know well. In these cases, AI not only failed to help but actively slowed developers down. The problem stemmed from the need to double-check and often correct the AI's suggestions – suggestions that were frequently close to correct, but not precise enough to be trusted without careful review.

Becker explained that video recordings of the participants showed how AI often nudged developers in the right direction, but rarely delivered exactly what was needed. That led to additional time spent reviewing, editing, and sometimes discarding the AI's suggestions. The researchers were careful to point out that these results likely don't apply across the board: less experienced engineers or those working on unfamiliar codebases could still benefit from AI tools.

What about smartphone AI?

The study's findings may also offer a glimpse into how everyday smartphone users (like you and me) interact with AI-powered features on their devices. Just like developers, many smartphone users expect AI tools – from predictive text and voice assistants to photo editing suggestions – to streamline their day-to-day. Yet these features often require us to pause, review, and correct AI missteps, sometimes making simple actions feel more complicated than they should be. Whether it's an autocorrect blunder, a poorly framed photo enhancement, or a confusing AI-generated message reply, we as smartphone users are also discovering that AI is far from perfect and can sometimes come at the cost of time!

Reuters
7 days ago
- Business
- Reuters
AI slows down some experienced software developers, study finds
SAN FRANCISCO, July 10 (Reuters) - Contrary to popular belief, using cutting-edge artificial intelligence tools slowed down experienced software developers when they were working in codebases familiar to them, rather than supercharging their work, a new study found.

AI research nonprofit METR conducted the in-depth study on a group of seasoned developers earlier this year while they used Cursor, a popular AI coding assistant, to help them complete tasks in open-source projects they were familiar with.

Before the study, the open-source developers believed using AI would speed them up, estimating it would decrease task completion time by 24%. Even after completing the tasks with AI, the developers believed that they had decreased task times by 20%. But the study found that using AI did the opposite: it increased task completion time by 19%.

The study's lead authors, Joel Becker and Nate Rush, said they were shocked by the results: prior to the study, Rush had written down that he expected 'a 2x speed up, somewhat obviously.'

The findings challenge the belief that AI always makes expensive human engineers much more productive, a factor that has attracted substantial investment into companies selling AI products to aid software development. AI is also expected to replace entry-level coding positions. Dario Amodei, CEO of Anthropic, recently told Axios that AI could wipe out half of all entry-level white collar jobs in the next one to five years.

Prior literature on productivity improvements has found significant gains: one study found using AI sped up coders by 56%, another study found developers were able to complete 26% more tasks in a given time.

But the new METR study shows that those gains don't apply to all software development scenarios. In particular, this study showed that experienced developers intimately familiar with the quirks and requirements of large, established open source codebases experienced a slowdown. Other studies often rely on software development benchmarks for AI, which sometimes misrepresent real-world tasks, the study's authors said.

The slowdown stemmed from developers needing to spend time going over and correcting what the AI models suggested. 'When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what's needed,' Becker said.

The authors cautioned that they do not expect the slowdown to apply in other scenarios, such as for junior engineers or engineers working in codebases they aren't familiar with.

Still, the majority of the study's participants, as well as the study's authors, continue to use Cursor today. The authors believe it is because AI makes the development experience easier, and in turn, more pleasant, akin to editing an essay instead of staring at a blank page. 'Developers have goals other than completing the task as soon as possible,' Becker said. 'So they're going with this less effortful route.'