Latest news with #Copilot


Time Magazine
an hour ago
- Science
- Time Magazine
AI Promised Faster Coding. This Study Disagrees
Welcome back to In the Loop, TIME's new twice-weekly newsletter about the world of AI. We're publishing installments both as stories online and as emails. If you're reading this in your browser, you can subscribe to have the next one delivered straight to your inbox.

What to Know: Could coding with AI slow you down?

In just the last couple of years, AI has totally transformed the world of software engineering. Writing your own code (from scratch, at least) has become quaint. Now, with tools like Cursor and Copilot, human developers can marshal AI to write code for them. The human role is now to understand what to ask the models for the best results, and to iron out the inevitable problems that crop up along the way. Conventional wisdom holds that this has accelerated software engineering significantly. But has it? A new study by METR, published last week, set out to measure the degree to which AI speeds up the work of experienced software developers. The results were very unexpected.

What the study found — METR measured the speed of 16 developers working on complex software projects, both with and without AI assistance. After finishing their tasks, the developers estimated that access to AI had accelerated their work by 20% on average. In fact, the measurements showed that AI had slowed them down by about 20%. The results were roundly met with surprise in the AI community. 'I was pretty skeptical that this study was worth running, because I thought that obviously we would see significant speedup,' wrote David Rein, a staffer at METR, in a post on X.

Why did this happen? — The simple technical answer seems to be that while today's LLMs are good at coding, they're often not good enough to intuit exactly what a developer wants and answer perfectly in one shot. That means they can require a lot of back and forth, which might take longer than if you just wrote the code yourself. But participants in the study offered several more human hypotheses, too.

'LLMs are a big dopamine shortcut button that may one-shot your problem,' wrote Quentin Anthony, one of the 16 coders who participated in the experiment. 'Do you keep pressing the button that has a 1% chance of fixing everything? It's a lot more enjoyable than the grueling alternative.' (It's also easy to get sucked into scrolling social media while you wait for your LLM to generate an answer, he added.)

What it means for AI — The study's authors urged readers not to generalize too broadly from the results. For one, the study only measures the impact of LLMs on experienced coders, not new ones, who might benefit more from their help. And developers are still learning how to get the most out of LLMs, which are relatively new tools with strange idiosyncrasies. Other METR research, they noted, shows the duration of software tasks that AI is able to do doubling every seven months, meaning that even if today's AI is detrimental to one's productivity, tomorrow's might not be.

Who to Know: Jensen Huang, CEO of Nvidia

Huang finds himself in the news today after he proclaimed on CNN that the U.S. government doesn't 'have to worry' about the possibility of the Chinese military using the market-leading AI chips that his company, Nvidia, produces. 'They simply can't rely on it,' he said. 'It could be, of course, limited at any time.'

Chipping away — Huang was arguing against policies that have seen the U.S. heavily restrict the export of graphics processing units, or GPUs, to China, in a bid to hamstring Beijing's military capabilities and AI progress. Nvidia claims that these policies have simply incentivized China to build its own rival chip supply chain, while hurting U.S. companies and, by extension, the U.S. economy.

Self-serving argument — Huang, of course, would say that, as CEO of a company that has lost out on billions as a result of being blocked from selling its most advanced chips to the Chinese market. He has been attempting to convince President Donald Trump of his viewpoints, including in a recent meeting at the White House, Bloomberg reported.

In fact… The Chinese military does use Nvidia chips, according to research by Georgetown's Center for Security and Emerging Technology, which analyzed 66,000 military purchasing records to come to that conclusion. A large black market has also sprung up to smuggle Nvidia chips into China since the export controls came into place, the New York Times reported last year.

AI in Action

Anthropic's AI assistant, Claude, is transforming the way the company's scientists keep up with the thousands of pages of scientific literature published every day in their field. Instead of reading papers, many Anthropic researchers now simply upload them into Claude and chat with the assistant to distill the main findings. 'I've changed my habits of how I read papers,' Jan Leike, a senior alignment researcher at Anthropic, told TIME earlier this year. 'Where now, usually I just put them into Claude, and ask: can you explain?' To be clear, Leike adds, sometimes Claude gets important stuff wrong. 'But also, if I just skim-read the paper, I'm also gonna get important stuff wrong sometimes,' Leike says. 'I think the bigger effect here is, it allows me to read much more papers than I did before.' That, he says, is having a positive impact on his productivity. 'A lot of time when you're reading papers is just about figuring out whether the paper is relevant to what you're trying to do at all,' he says. 'And that part is so fast [now], you can just focus on the papers that actually matter.'

What We're Reading

Microsoft and OpenAI's AGI Fight Is Bigger Than a Contract — By Steven Levy in Wired. Levy goes deep on the 'AGI' clause in the contract between OpenAI and Microsoft, which could decide the fate of their multi-billion-dollar partnership. It's worth reading to better understand how both sides are thinking about defining AGI. They could do worse than Levy's own description: 'a technology that makes Sauron's Ring of Power look like a dime-store plastic doodad.'
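The 'doubling every seven months' trend METR reports is plain exponential growth, which can be sketched in a few lines. The starting 60-minute task horizon below is an illustrative assumption, not a figure from the article:

```python
# Minimal sketch of the exponential trend METR describes: the duration of
# software tasks AI can complete doubles every 7 months. The 60-minute
# starting horizon is an illustrative assumption, not a reported number.

def task_horizon(initial_minutes: float, months: float,
                 doubling_months: float = 7.0) -> float:
    """Task duration AI can handle after `months`, given a doubling period."""
    return initial_minutes * 2 ** (months / doubling_months)

# After 21 months (three doubling periods), a 60-minute horizon grows 8x:
print(task_horizon(60, 21))  # 480.0 minutes
```

On this curve, even a tool that slows experienced developers down today would cover much longer tasks within a couple of years, which is the authors' point about not over-generalizing from a single snapshot.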

Engadget
an hour ago
- Business
- Engadget
Laid-off Candy Crush studio staff reportedly replaced by the AI tools they helped build
Microsoft's extensive gaming portfolio was hit hard by sweeping layoffs earlier this month. The situation appears to have been particularly galling for staff at Candy Crush developer King, who are reportedly set to be replaced by AI tools they worked on. Multiple anonymous sources have told a mobile gaming-focused outlet that a number of narrative, UX, level design and user research staffers at King have spent several years helping to build and train AI models that can do their jobs more quickly. Those same employees are now being told their jobs are at risk. They added that the copywriting team is facing the same fate, with the London-based group working on Farm Heroes Saga expected to effectively be cut in half. "The fact AI tools are replacing people is absolutely disgusting but it's all about efficiency and profits even though the company is doing great overall," a source told the outlet. "If we're introducing more feedback loops then it's crazy to remove the developers themselves, we need more hands and less leadership." The same source estimated that the company-wide staff cuts could end up being more than 200, which was the number reported by Bloomberg when it broke the news of the broader layoffs.

The impact of the recent staffing upheaval is being felt across Microsoft's gaming division. Engadget's Jessica Conditt recently spoke to employees at Halo Studios, with one developer telling us they were "super pissed" about the layoffs. At least five people within Halo Studios were told they no longer had jobs shortly after receiving an all-staff email from Microsoft Gaming CEO Phil Spencer allegedly celebrating Xbox's current profitability. The same developer said Microsoft was trying its "damnest to replace as many jobs as [it] can with AI agents" as it increasingly pushes Copilot on its staff.


CNET
3 hours ago
- Business
- CNET
Today Only: Don't Miss Your Chance to Save $300 on a 14-Inch Asus Laptop
If you're shopping for a new laptop, it's usually wise to wait for a deal, since you can often save some serious cash. We saw some massive discounts at last week's Prime Day sale, but don't worry if you missed out. Best Buy is offering another chance to save with a one-day deal on this Asus Vivobook 14. For today only, you can save a whopping $300 on this 14-inch Windows laptop, which drops the price down to just $430. Just be sure to get your order in before the offer expires at 9:59 p.m. PT (12:59 a.m. ET) tonight if you don't want to miss out.

This Asus Vivobook has some decent specs that make it a good option if you only need the basics -- especially at this price. It's equipped with an Intel Core i7 processor, along with 12GB of RAM and a 512GB SSD for solid performance. The screen is a 14-inch full HD display, and it has a 180-degree hinge so it can lie completely flat. Plus, it's got USB-C, HDMI and multiple USB-A ports so you can easily connect to other displays and accessories. And with Microsoft Copilot built in, it also supports tons of helpful AI features like text summarization, image generation and noise cancellation on video calls. It's also just 0.7 inches thick and weighs in at around three pounds, so it's easy to take on the go.

Why this deal matters
With onboard AI and a lightweight 0.7-inch-thick design, this 14-inch Asus laptop is great for productivity on the go. It's got midrange specs, including 12GB of RAM and an Intel Core i7 processor, which make it a pretty great value at just $430. Just be sure to get your order in before this one-day deal expires tonight.


Daily Mail
3 hours ago
- Business
- Daily Mail
Chatbots could be helping hackers to steal data from people and companies
Generative artificial intelligence is the revolutionary new technology that is transforming the world of work. It can summarize and store reams of data and documents in seconds, saving workers valuable time and effort, and companies lots of money. But as the old saying goes, you don't get something for nothing. As the uncontrolled and unapproved use of unvetted AI tools such as ChatGPT and Copilot soars, so too does the risk that company secrets or sensitive personal information such as salaries or health records are being unwittingly leaked.

This hidden and largely unreported risk of serious data breaches stems from the default ability of AI models to record and archive chat history, which is used to help train the AI to better respond to questions in the future. As these conversations become part of the AI's knowledge base, retrieval or deletion of data becomes almost impossible. 'It's like putting flour into bread,' said Ronan Murphy, a tech entrepreneur and AI adviser to the Irish government. 'Once you've done it, it's very hard to take it out.' This 'machine learning' means that highly sensitive information absorbed by AI could resurface later if prompted by someone with malicious intent.

Experts warn that this silent and emerging threat from so-called 'shadow AI' is as dangerous as the one already posed by scammers, where hackers trick company insiders into giving away computer passwords and other codes. But cyber criminals are also using confidential data voraciously devoured by chatbots like ChatGPT to hack into vulnerable IT systems. 'If you know how to prompt it, the AI will spill the beans,' Murphy said.

The scale of the problem is alarming. A recent survey found that nearly one in seven data security incidents is linked to generative AI. Another found that almost a quarter of 8,000 firms surveyed worldwide gave their staff unrestricted access to publicly available AI tools. That puts confidential data such as meeting notes, disciplinary reports or financial records 'at serious risk' and 'could lead employees to inadvertently propagate threats', a report from technology giant Cisco said. 'It's like the invention of the internet – it's just arrived and it's the future – but we don't understand what we are giving to these systems and what's happening behind the scenes at the back end,' said Cisco cyber threat expert Martin Lee.

One of the most high-profile cybersecurity 'own-goals' in recent years was scored by South Korean group Samsung. The consumer electronics giant banned employees from using popular chatbots like ChatGPT after discovering in 2023 that one of its engineers had accidentally pasted secret code and meeting notes onto an AI platform. Banks have also cracked down on the use of ChatGPT by staff amid concerns about the regulatory risks they face from sharing sensitive financial information.

But as organisations put guardrails in place to keep their data secure, they also don't want to miss out on what may be a once-in-a-generation chance to steal a march on their rivals. 'We're seeing companies race ahead with AI implementation as a means of improving productivity and staying one step ahead of competitors,' said Ruben Miessen, co-founder of compliance software group Legalfly, whose clients include banks, insurers and asset managers. 'However, a real risk is that the lack of oversight and any internal framework is leaving client data and sensitive personal information potentially exposed,' he added.

The answer, though, isn't to limit AI usage. 'It's about enabling it responsibly,' Miessen said. Murphy added: 'You either say no to everything or figure out a plan to do it safely. Protecting sensitive data is not sexy, it's boring and time-consuming.' But unless adequate controls are put in place, 'you make a hacker's job extremely easy'.

Business Insider
10 hours ago
- Business
- Business Insider
Microsoft salaries revealed: How much the tech giant pays software engineers, product managers, and more
Microsoft epitomizes a familiar dichotomy in tech: layoffs on the one hand and big paydays for AI talent on the other. Microsoft has invested massively in AI, spending billions on its flagship Copilot tool. It's also the largest investor in OpenAI, even as the relationship has shown some cracks. The tech giant is urging staff to use internal AI tools more frequently and also allows managers to offer retention bonuses to employees, including those who contribute to AI initiatives, per an internal document obtained by Business Insider. A spreadsheet obtained by Business Insider in 2024 showed that employees within Microsoft's AI organization were making more than their non-AI colleagues.

Meanwhile, Microsoft has announced multiple rounds of layoffs this year, affecting thousands of employees. It has also sought to weed out low performers, including with performance improvement plans that come with payout offers. Some of the layoffs have cut traditional sales staff in favor of technical salespeople who can better sell AI tools. The shift comes as Microsoft faces increased competition from Google and even its partner OpenAI for enterprise AI customers.

The company is still hiring, though. While Microsoft keeps compensation information close to the vest, publicly available work visa data offers a glimpse of the kind of pay it can offer employees. The figures refer only to foreign hires and account only for base pay, not the bonuses and stock awards that employees also receive. We looked at the roles where Microsoft most frequently hired from abroad. Software engineers can make as much as $284,000 in base salary, and product managers can pull in as much as $250,000. Microsoft subsidiary LinkedIn is also hiring foreign workers, including within the AI subset of machine learning. A senior software engineer in machine learning at LinkedIn can make as much as $278,000, while a staff software engineer in machine learning can take home as much as $336,000.
Microsoft did not immediately respond to a request for comment from Business Insider. Here's what Microsoft is paying across key roles, based on roughly 5,400 applications from the first quarter of 2025.

Microsoft software engineers can take home up to $284,000 in base salary:
- Applied Sciences: $127,200 to $261,103
- Business Analytics: $159,300 to $191,580
- Business Planning: $117,200 to $201,900
- Business Program Management: $102,380 to $195,100
- Cloud Network Engineering: $122,700 to $220,716
- Construction Project Management: $150,000 to $193,690
- Customer Experience Engineering: $126,422 to $239,585
- Customer Experience Program Management: $141,865 to $201,508
- Data Analytics: $132,385 to $205,000
- Data Center Operations Management: $115,000 to $176,900
- Data Engineering: $144,855 to $264,000
- Data Science: $121,200 to $274,500
- Demand Planning: $147,000 to $204,550
- Digital Cloud Solution Architecture: $155,085 to $217,589
- Electrical Engineering: $138,995 to $247,650
- Hardware Engineering: $136,000 to $270,641
- Product Design: $125,100 to $208,058
- Product Management: $122,800 to $250,000
- Product Marketing: $113,350 to $213,200
- Research Sciences: $146,054 to $208,000
- Research, Applied and Data Sciences: $85,821 to $208,800
- Service Engineering: $130,080 to $182,500
- Silicon Engineering: $116,334 to $275,000
- Site Reliability Engineering: $135,100 to $236,670
- Software Engineering: $82,971 to $284,000
- Solution Area Specialists: $144,000 to $209,300
- Supply Planning: $131,300 to $193,270
- Technical Program Management: $120,900 to $238,000
- Technical Support Advisory: $114,290 to $153,984
- Technology Specialists: $168,800 to $200,000
- UX Research: $138,560 to $177,148

LinkedIn staff software engineers in machine learning can make up to $336,000:
- Manager, Software Engineering: $197,185 to $301,000
- Product Manager: $141,000 to $252,000
- Software Engineer: $108,826 to $205,000
- Software Engineer, Machine Learning: $135,000 to $231,000
- Software Engineer, Systems Infrastructure: $135,000 to $231,000
- Senior Software Engineer: $121,000 to $249,000
- Senior Software Engineer, Machine Learning: $154,000 to $278,000
- Senior Software Engineer, Systems Infrastructure: $144,000 to $278,000
- Staff Software Engineer: $158,000 to $301,000
- Staff Software Engineer, Machine Learning: $190,486 to $336,000
- Staff Software Engineer, Systems Infrastructure: $190,486 to $336,000