
Latest news with #METR

AI is slowing down (not speeding up) software developers, study finds

The Star

a day ago


AI tools have been widely adopted by programmers in an effort to speed up production; however, new research is casting doubt on the assumption that they generally help developers work faster. — Photo: Sebastian Gollnow/dpa

LOS ANGELES: Open-source developers who lean on artificial intelligence tools are less productive and efficient than those who do not use the much-hyped technology, according to researchers investigating AI assistance in software development.

Using AI means the work takes 19% longer to finish than without, said a team at METR (Model Evaluation and Threat Research) after a randomized controlled trial involving 16 developers who undertook a series of typical tasks their job entails, such as bug fixes.

AI tools have been widely adopted by programmers in efforts to speed up production, and in May ChatGPT maker OpenAI released Codex, a dedicated software engineering agent for software developers.

However: "AI makes them slower," the Californian non-profit METR said, deeming the outcome surprising - as did the developers, who said in advance that they expected using AI to make them around 25% faster at getting through their tasks.

Not only was the "significant slowdown" contrary to developer beliefs and experts' forecasts, the "gap between perception and reality" was "striking," according to the researchers. Perhaps the most striking finding of all was that, despite AI throwing sand in their gears as they worked, the developers nonetheless believed it "had sped them up by 20%," according to METR's team. "They were mistaken about AI's impact on their productivity," the METR team said of the participants.

But the researchers by no means wanted to rule out AI as a helpful tool for coders, and said it could still be useful for less experienced developers or those "working in an unfamiliar codebase." The findings echo research published by Stanford University earlier this year showing AI largely hindering experienced workers while boosting those with less skill. The effects of AI on productivity "vary significantly," the researchers said. – dpa

Does AI Speed Up Coding? New Study Finds Surprising Results

NDTV

3 days ago


New research has found that using artificial intelligence (AI) tools to write code can actually take more time for experienced software developers. The study, conducted by the nonprofit research group METR, found that software engineers took 19 per cent longer to complete tasks when using Cursor, a widely used AI-powered coding assistant.

For the study, METR measured the speed of 16 developers, with an average of five years of experience, working on complex software projects both with and without AI assistance. When the use of AI tools was allowed, the developers primarily used Cursor Pro, a popular code editor, and Claude 3.5/3.7 Sonnet.

"Before starting tasks, developers forecast that allowing AI will reduce completion time by 24 per cent. After completing the study, developers estimate that allowing AI reduced completion time by 20 per cent," the study highlighted. The measured results, however, pointed the other way: the researchers found that when developers use AI tools, they take 19 per cent longer than without, suggesting AI was making them slower.

The study's authors urged readers not to generalise too broadly from the results. For one, the study only measured the impact of Large Language Models (LLMs) on experienced coders, not new ones, who might benefit more from their help. "Although the influence of experimental artifacts cannot be entirely ruled out, the robustness of the slowdown effect across our analyses suggests it is unlikely to primarily be a function of our experimental design," the authors wrote.

The rapid advancement of artificial intelligence in recent years has led experts to claim that software engineering jobs could soon be fully outsourced to AI agents. Despite the study suggesting that coding with AI was taking more time, companies are unlikely to stop spending resources on perfecting AI coding. Last year, during Google's Q3 2024 earnings call, CEO Sundar Pichai revealed that AI systems now generate more than a quarter of new code for its products, with human programmers overseeing the computer-generated contributions. "Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster," said Mr Pichai at the time.
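To put the study's headline figures side by side, the short calculation below is a toy illustration, not part of the study: it shows what a 19 per cent slowdown implies for a hypothetical 60-minute task, next to the 24 per cent speedup developers forecast and the 20 per cent speedup they believed they had achieved. Only the percentages come from the reporting above; the 60-minute baseline is an arbitrary example.

```python
# Toy illustration (not METR's analysis): what the reported percentages
# mean for a hypothetical task that takes 60 minutes without AI.
baseline_minutes = 60.0

measured = baseline_minutes * 1.19         # observed: tasks took 19% longer
forecast = baseline_minutes * (1 - 0.24)   # beforehand: expected 24% faster
perceived = baseline_minutes * (1 - 0.20)  # afterwards: believed 20% faster

print(f"Without AI:        {baseline_minutes:.0f} min")
print(f"Measured with AI:  {measured:.1f} min (19% slower)")
print(f"Forecast with AI:  {forecast:.1f} min (24% faster)")
print(f"Perceived with AI: {perceived:.1f} min (20% faster)")
```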

Rethinking AI in enterprise, blockchain development

Coin Geek

3 days ago


A recent Model Evaluation & Threat Research (METR) study found that artificial intelligence (AI) coding assistants slowed experienced developers by 19% on familiar codebases, underscoring the cognitive friction of tool-context shifts. Conscious Stack Design™ (CSD) calls for intentional, context-aware integration of AI, reserving assistants for scaffold tasks while preserving flow on legacy work. In blockchain domains like BSV, the precision demands and security stakes amplify this effect, suggesting teams calibrate AI for documentation and test scaffolding rather than core consensus logic.

What drives the slowdown among veteran developers?

Experienced software engineers build rich mental models of their projects over time. These internal frameworks let them navigate complex code with minimal cognitive overhead. Introducing an AI assistant such as Cursor interrupts that fluency in two main ways.

Context switching and evaluation overhead. Every AI suggestion must be read, interpreted, and validated against the developer's intent. Even when suggestions are directionally correct, developers spend precious seconds confirming variable names, API contracts, and edge cases. Over dozens of small interactions, these validation steps accumulate, eroding any raw time saved by auto-completion.

Perception versus reality disconnect. In the METR trial, participants forecast that AI would speed them up by 24%—but objective measures showed a 19% slowdown. This gap arises because AI makes code authoring feel easier—akin to editing a draft rather than writing from scratch—even though each edit requires scrutiny. Seasoned developers' self-assessment skews toward perceived ease, masking the hidden review costs. By contrast, less experienced coders often lack deep familiarity and lean on AI for boilerplate or syntax. Their cognitive load falls more steeply, so they register net gains even if they invest similar time in validation.

How does Conscious Stack Design inform AI adoption?

Conscious Stack Design™ emphasizes harmony between tools, workflows, and human cognition. It recognizes that adding a new layer—no matter how powerful—can fragment a mature stack if not introduced with intention. Three CSD tenets guide AI integration:

• Align tools with task context: Not all tasks benefit equally from AI. Use assistants for greenfield development—scaffolding new modules, generating test harnesses, or spinning up documentation templates. For maintenance on established code, default to native IDE features and keyboard-driven workflows.

• Establish clear 'AI boundaries': Define rules such as 'AI for initial drafts only' or 'Disable AI in production branches.' Embedding these policies into version-control hooks or team guidelines prevents ad hoc toggling that disrupts flow (a minimal sketch of such a hook follows this list).

• Monitor and iterate on AI resonance: Track leading metrics like code-review time, bug-fix rates, and developer satisfaction. If AI assistance correlates with longer reviews or higher defect density in certain contexts, adjust usage rules.
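To make such boundaries enforceable rather than a matter of ad hoc toggling, a team could encode the policy in a version-control hook. The sketch below is a minimal, hypothetical Python example: it assumes (this is not from the article) that AI-drafted commits carry an 'AI-Assisted: yes' trailer and that the script is installed as a git commit-msg hook. It illustrates the idea rather than prescribing an implementation.

```python
#!/usr/bin/env python3
"""Hypothetical commit-msg hook sketching the 'AI boundaries' idea:
block commits marked as AI-assisted on protected branches.

Assumed convention (not from the article): commits drafted with an
assistant include a trailer line 'AI-Assisted: yes', and this script
is installed as .git/hooks/commit-msg.
"""
import subprocess
import sys

PROTECTED_BRANCHES = {"main", "release", "production"}  # example policy


def current_branch() -> str:
    # Standard git plumbing: name of the branch being committed to.
    out = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()


def main() -> int:
    if len(sys.argv) < 2:
        return 0                       # not invoked by git; do nothing
    with open(sys.argv[1], encoding="utf-8") as f:
        message = f.read()             # git passes the message file path

    ai_assisted = "ai-assisted: yes" in message.lower()
    branch = current_branch()

    if ai_assisted and branch in PROTECTED_BRANCHES:
        print(f"Policy: AI-assisted commits are not allowed on '{branch}'. "
              "Use a feature branch and a reviewed pull request instead.")
        return 1                       # non-zero exit blocks the commit
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

A CI job could apply the same check server-side, so the rule still holds when local hooks are skipped.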
The monitoring described in the third tenet creates an iterative feedback loop that preserves resonance, maximizing benefit while minimizing noise. In practice, a CSD-aligned team might enable AI suggestions only when creating unit tests or prototyping a novel service, then turn them off when working on critical legacy functions. This selective approach prevents cognitive tax while still capturing AI's generative power.

What does this mean for blockchain developers on BSV?

Blockchain engineering combines high-stakes correctness, domain-specific protocols, and often tight coupling between smart contracts and consensus rules. For BSV developers, the METR findings carry particular weight:

• Security and auditability demands. Smart-contract errors can lead to on-chain losses or protocol vulnerabilities. AI-generated code must undergo rigorous formal verification and peer review. Each AI suggestion introduces an audit checkpoint, compounding time spent on validation and diminishing the allure of instant snippets.

• Protocol evolution and unfamiliarity. When BSV protocols evolve, even veteran blockchain engineers face new interfaces. In these scenarios—akin to 'greenfield' work—AI can excel at generating boilerplate for transaction parsing or RPC wrappers. Here, novices and experts alike may gain from AI scaffolding, aligning with CSD's recommendation to use AI in unfamiliar territory.

• Test and documentation acceleration. Rather than embedding AI in core contract code, blockchain teams can leverage assistants to auto-generate comprehensive test suites, API documentation, or example integrations. These peripheral artifacts accelerate onboarding and reduce manual drudgery, while keeping the critical path free from AI-induced friction.

• Ecosystem collaboration. In BSV's open-source environment, community contributions often come from varied experience levels. AI-driven style guides or linting suggestions can help standardize code quality across contributors. However, project maintainers should gate AI-assisted pull requests behind stricter review rules to safeguard protocol integrity.

By mapping AI use cases to the stages of blockchain development—innovation, deployment, maintenance—teams can apply CSD principles to optimize where AI amplifies productivity and where it introduces undue overhead.

Toward a balanced AI-augmented developer stack

As AI tools mature, their integration into enterprise workflows demands more than flip-of-a-switch adoption. The METR study serves as a cautionary tale: even promising technologies can backfire when they collide with entrenched expertise. Conscious Stack Design™ offers a roadmap:

1. Audit current workflows. Document where context-switching costs are highest. Are developers spending excessive time reviewing pull requests? Which tasks feel most tedious?

2. Pilot targeted AI interventions. Roll out AI in narrow, well-defined contexts—new component creation, test writing, API client generation. Measure impact on cycle time and code quality.

3. Codify AI usage policies. Establish team standards: when to enable AI, how to label AI-generated code, and what review thresholds apply. Embed checks into CI/CD pipelines.

4. Iterate with feedback loops. Use metrics (e.g., mean time to repair, review durations) and qualitative surveys to refine AI boundaries; a minimal metric sketch follows this list. Continuously adjust to preserve developer flow.

5. Educate and enable all skill levels. Offer training on effective prompt crafting and AI-tool configurations. Equip junior engineers to leverage AI safely, while showing seniors how to integrate suggestions without undue scrutiny.
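To make the "iterate with feedback loops" step concrete, the sketch below shows one way a team might compare review durations for AI-assisted pull requests against the rest. The CSV export, its column names, and the 'ai-assisted' label are assumptions made for illustration; they are not part of the METR study or of any particular review tool.

```python
"""Hypothetical feedback-loop metric: compare review durations for
pull requests labeled as AI-assisted versus the rest.

Assumes a CSV export from your review tool with columns
'pr_id', 'labels' (semicolon-separated), and 'review_hours'.
"""
import csv
from statistics import mean

AI_LABEL = "ai-assisted"  # hypothetical label applied by the team


def load_review_hours(path: str):
    ai, other = [], []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            hours = float(row["review_hours"])
            labels = {l.strip().lower() for l in row["labels"].split(";")}
            (ai if AI_LABEL in labels else other).append(hours)
    return ai, other


if __name__ == "__main__":
    ai, other = load_review_hours("pull_requests.csv")  # example file name
    if ai and other:
        print(f"Mean review time, AI-assisted PRs: {mean(ai):.1f} h (n={len(ai)})")
        print(f"Mean review time, other PRs:       {mean(other):.1f} h (n={len(other)})")
        print(f"Ratio: {mean(ai) / mean(other):.2f}x")
    else:
        print("Not enough data in both groups to compare yet.")
```

If the ratio trends well above 1 in certain contexts, that is the signal, under this framework, to tighten the team's AI usage rules there.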
In conclusion, AI coding assistants hold transformative potential—but only when woven into a stack with conscious intent. For enterprise teams and blockchain specialists alike, the road to AI-augmented productivity lies in respecting human cognition, aligning tool use with task context, and iterating based on real-world feedback. Explore Conscious Stack Design™ frameworks and pilot targeted AI interventions in your next sprint. You might discover that the smartest way to speed up development is knowing when to hit 'disable.'

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Demonstrating the potential of blockchain's fusion with AI

AI is fueling job cuts, but is it really making companies more efficient?

NBC News

4 days ago


With news swirling about multibillion-dollar deals for artificial intelligence startups and multimillion-dollar AI worker salaries, it was a study from a small research nonprofit group that turned some heads in the tech world last week. Its findings were simple but surprising: AI made software engineers slower.

'When developers are allowed to use AI tools, they take 19% longer to complete issues — a significant slowdown that goes against developer beliefs and expert forecasts,' the nonprofit group, METR, which specializes in evaluating AI models, said in its report. 'This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%,' the METR authors added.

The results may simply reflect the limits of current technology, they said — but they still offer a reality check for what is arguably the buzziest part of the broadly euphoric AI rush: coding.

In the past year, AI startups focused on generating software code have been the subject of an intense bidding war that has only escalated in recent weeks. On Monday, AI coding company Windsurf was acquired by another AI startup, Cognition, after a deal with OpenAI reportedly fell through. Google poached Windsurf's CEO while signing a $2.4 billion licensing deal. Cursor, which also focuses on AI code generation, was valued at $10 billion in a May funding round that brought in $900 million.

Vibe coding — a style of coding that is entirely reliant on AI — has already become part of the tech lexicon, and discussions about the future of developer jobs can be found on most every online forum dedicated to tech. AI talent, too, remains in high demand, with Facebook parent Meta offering multimillion-dollar paydays. LinkedIn found that 'AI engineer' is the fastest growing job title among recent college graduates — with two related roles, data center technician and system engineer, coming in at Nos. 3 and 4.

The AI gold rush has come as overall job openings for software developers hit a five-year low earlier this year, raising questions about AI's responsibility for the slowdown. Among the most prominent firms announcing large rounds of layoffs has been Microsoft, whose CEO, Satya Nadella, has stated that as much as 30% of Microsoft code is now written by AI. Bloomberg News found that in a recent round of layoffs that occurred in Microsoft's home state of Washington, software engineering was by far the largest single job category to receive pink slips, making up more than 40% of the roughly 2,000 positions cut.

While it's clear that AI can write code, it's far less certain whether the technology poses a direct threat to coding jobs in the short term. In a paper released Wednesday, MIT researchers laid out the 'many' challenges that still exist before AI can truly begin replacing software engineers wholesale. The main obstacles come when AI programs are asked to develop code at scale, or with more complex logic, the authors found.

'Everyone is talking about how we don't need programmers anymore, and there's all this automation now available,' Armando Solar‑Lezama, an MIT professor and the senior author of the study, said in a press release. 'On the one hand, the field has made tremendous progress. We have tools that are way more powerful than any we've seen before. But there's also a long way to go toward really getting the full promise of automation that we would expect.'
What trouble exists in the current coder job market may have more to do with the broader economic slowdown than with abrupt technological changes, experts say. 'Teams are getting smaller,' said Heather Doshay, a partner at SignalFire, a venture capital firm that invests in AI companies. 'Not necessarily because of AI, but because of market demands and operating expenses. What's happening is companies are asking, "How can we stay lean and hire fewer people while still extending our runway financially?"'

However limited AI may be, many coders remain anxious. A popular website that tracks tech layoffs shows that the pace of separations has increased for the past three quarters after seeing steady declines over the previous six, though they remain well below their 2023 peak. On Blind, an anonymous message board app popular among tech workers, the topic of AI taking coding jobs is a hot one, with plenty of skepticism about whether it's actually happening, or whether the narrative is an excuse that has allowed companies to cut staff.

Gareth Patterson, a 25-year-old New York City resident, says he was able to transition from a sales role into an engineering one only after putting himself through a grueling, nonstop studying regimen that came at the temporary cost of most of his social life, not to mention his workout schedule. He says the payoff has been worth it because his salary now allows him to have disposable income in one of the most expensive cities in the world. But he does not envy those trying to break in or even adapt to the new era. 'The expectations for an engineer are way up,' said Patterson, a senior software engineer at a tax and auditing firm. 'We're now only seeing the top talent get hired. It's intimidating.'

AI was supposed to speed up coders, new study says it did the opposite

India Today

4 days ago


Contrary to popular belief, new research has found that using AI tools can actually slow down experienced software developers, especially when they are working in codebases they already know well. The study, conducted by the nonprofit research group METR, revealed that seasoned open-source developers took 19 per cent longer to complete tasks when using Cursor, a widely used AI-powered coding assistant. As per the study, the result was based on a randomised controlled trial which involved contributors working on their own open-source projects.

Before the trial began, developers believed AI would significantly increase their speed, estimating a 24 per cent improvement in task completion time. Even after finishing their tasks, many still believed the AI had helped them work faster, estimating a 20 per cent improvement. But the real data showed otherwise. 'We found that when developers use AI tools, they take 19 per cent longer than without. AI makes them slower,' the researchers wrote.

The lead authors of the study, Joel Becker and Nate Rush, admitted the results came as a surprise. Rush had initially predicted 'a 2x speed up, somewhat obviously.' But the study told a different story.

The findings challenge the widespread notion that AI tools automatically make human coders more efficient, a belief that has attracted billions of dollars in investment and sparked predictions that AI could soon replace many junior engineering roles. Other studies have shown strong productivity gains with AI: one found that AI helped developers complete 56 per cent more code, while another claimed a 26 per cent boost in task volume. But the METR study suggests that those gains don't apply to all situations, especially where developers already have deep familiarity with the codebase.

Instead of streamlining work, the AI often made suggestions that were only 'directionally correct,' said Becker. 'When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what's needed.' As a result, developers spent additional time reviewing and correcting AI-generated code, which ultimately slowed them down. However, the researchers do not believe this slowdown would apply to all coding scenarios, such as those involving junior developers or unfamiliar codebases.

Despite the results, both the study's authors and most participants continue to use Cursor. Becker suggested that while the tool may not speed up work, it can still make development feel easier and more enjoyable. 'Developers have goals other than completing the task as soon as possible,' he said. 'So they're going with this less effortful route.'

The authors also emphasised that their findings should not be over-generalised. The slowdown reflects only a snapshot of AI's capabilities as of early 2025, and further improvements in prompting, training, and tool design could lead to different outcomes in the future. As AI systems continue to evolve, METR plans to repeat such studies to better understand how AI might accelerate, or hinder, human productivity in real-world development settings.
