Latest news with #NaveenRao

Mint
14-05-2025
- Business
- Mint
Code junkies make way for AI pros as skills landscape shifts
Employers are keen on hiring tech pros skilled in artificial intelligence rather than traditional coding, even as roles diminish in project management, data analysis and content marketing, a Mint+Shine Talent Insights study found. The study, based on responses from 1,300 job seekers and 251 HR executives in the January-March period, reflects the changing landscape of skills sought by Indian companies. "29% of survey respondents reported a decline in demand for traditional coding roles compared to last year. Entry-level coding and support jobs may decline, but high-value, AI-assisted engineering and product roles are on the rise," the study said.

Despite the crests and troughs in the job market, the IT sector remained one of India's top recruiters in 2024, accounting for 37% of total hiring. "Rising demand for IT services, digital transformation across industries, emerging tech and startups, rapid adoption of AI, remote working supporting global hiring and an ample talent pool are key growth drivers for this industry," the study noted.

"The job market is evolving faster than ever, and so are the expectations. AI is transforming how we work, but it's also redefining what we value. Only those who grow with the change will stay relevant in the new world of work," said Akhil Gupta, chief executive officer (CEO).

Hence, companies catering to new-age skills have been a bright spot, witnessing continued demand and fundraising in an otherwise gloomy edtech sector. Companies like upGrad and Eruditus have raised new rounds of funds, while others like Physics Wallah, which earlier operated in adjacent segments like test prep and K-12, have diversified into upskilling to cash in on the growing demand.

Also read | Coding is currently GenAI's killer app, says Databricks AI head Naveen Rao

Companies have seen the shift in demand for specific skill sets, with some popular ones getting phased out, which is also creating a need to upskill.
"Technology has always evolved in cycles—from trials and proofs-of-concept to mass production and commoditization. Today, we are amid one such cycle, where digital technologies like AI and automation are transforming industries with unprecedented pace and breadth," said Amit Chadha, chief executive officer and managing director, L&T Technology Services. Chadha pointed out that skills that were "once central to traditional engineering, such as standalone computer-aided design (CAD), computer-aided manufacturing (CAM), or manual testing, are gradually giving way to integrated, intelligent, and automation-driven approaches." CAD and CAM tools are often used to build prototypes.

Banking, Financial Services, and Insurance (BFSI) and e-commerce are among the sectors that remain attractive. In fact, the banking sector may have hired fewer people than the previous year, but it remains one of the top recruiters. Mint had reported in April that lenders had over-hired after the pandemic and underestimated the growth of digital services, and recruitment at the lower rungs has now eased a bit. In the e-commerce industry, logistics and dark stores continue to recruit in good numbers.

Read this | Talent shortage, candidates' demands delay hiring closures: Mint+Shine study

Global capability centres (GCCs), essentially dedicated technical centres for global companies, are another talent-guzzler. The study pointed out that GCCs in India are set to create 425,000-450,000 new jobs this year. RPG Group said it is using strategic skill mapping, customized learning programmes and cross-functional gigs to update talent. "Siloed technical expertise and professionals with narrow technical skills without adaptability may find their roles getting limited, as interdisciplinary knowledge is becoming more valuable," cautioned Supratik Bhattacharyya, chief talent officer at the group. Whether established business houses or startups, companies are scanning the talent pool for AI-based skill sets.
"The shift is clear: it's no longer about 'Do you know AI?' but 'Can you apply AI to solve a business problem in your domain in real-world scenarios?'" said Mayank Kumar, co-founder, upGrad. The upskilling platform says it saw a 40% quarterly uptick in the March quarter for a host of AI-related courses.

Similarly, Raman Khanduja, co-founder and CEO at MintOak Innovations, a company that helps banks innovate and compete with new-age fintechs, sees routine coding, manual operations and narrowly defined roles losing relevance. "What's gaining ground are skills that harness the power of AI, not just in engineering, but also in areas like data analysis, customer insights and content syndication."

While firms scan the job market for the right candidates, Dale Vaz, co-founder of stock trading platform Sahi, points out how AI models have boosted business. "Our brokerage, for example, is priced at just ₹10 per order—50% lower than many leading brokers—thanks to our AI-first cost structure," said Vaz, the former chief technology officer (CTO) of food delivery company Swiggy.

And read | AI to play a key role in hiring, but aspirants prefer human contact: report


Fast Company
08-05-2025
- Business
- Fast Company
Coding emerges as generative AI's breakout star
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

Half of all LLM usage is for writing computer code

The tech industry insists that AI will 'transform' how companies, both large and small, operate. Tech VCs and AI founders predict that major business functions will be reshaped, one by one, to be handled by AI agents. For a while, many speculated about which function would be transformed first. It wasn't customer service, legal, or marketing: it was software development. Generative AI's first killer app is coding. Tools like Cursor and Windsurf can now complete software projects with minimal input or oversight from human engineers, and businesses are rushing to capitalize on the efficiency gains offered by AI coding.

Naveen Rao, chief AI officer at Databricks, estimates that coding accounts for half of all large language model usage today. A 2024 GitHub survey found that over 97% of developers have used AI coding tools at work, with 30% to 40% of organizations actively encouraging their adoption. (GitHub, owned by Microsoft, created one of the first such tools, Copilot.) Microsoft CEO Satya Nadella recently said AI now writes up to 30% of the company's code. Google CEO Sundar Pichai echoed that sentiment, noting that more than 30% of new code at Google is AI-generated.

The soaring valuations of AI coding startups underscore the momentum. Anysphere's Cursor just raised $900 million at a $9 billion valuation—up from $2.5 billion earlier this year. Meanwhile, OpenAI acquired Windsurf (formerly Codeium) for $3 billion.

And the tools are improving fast. OpenAI's chief product officer, Kevin Weil, explained in a recent interview that just five months ago, the company's best model ranked around one-millionth on a well-known benchmark for competitive coders—not great, but still in the top two or three percentile.
Today, OpenAI's top model, o3, ranks as the 175th-best competitive coder in the world on that same test. The rapid leap in performance suggests an AI coding assistant could soon claim the number-one spot. 'Forever after that point computers will be better than humans at writing code,' he said.

One reason for the progress: AI coding tools are gaining stronger reasoning abilities and can process much more information at once. While models retain general knowledge from pretraining, they depend on specific project-related input—such as a software description—provided by a human when it's time to build something. This information is stored in short-term memory, known as a context window. Currently, state-of-the-art tools can productively consider fewer than 100,000 tokens (units representing words and word parts) at once. But that number is bound to go up. Google DeepMind research scientist Nikolay Savinov said in a recent interview that AI coding tools will soon support 10-million-token context windows—and eventually, 100 million. With that kind of memory, an AI tool could absorb vast amounts of human instruction and even analyze an entire company's existing codebase for guidance on how to build and optimize new systems. 'I imagine that we will very soon get to superhuman coding AI systems that will be totally unrivaled, the new tool for every coder in the world,' Savinov said.

Accenture research shows AI 'reinvention' of business still far away

A large percentage of the first wave of corporate AI projects, numerous industry sources have told me, ran into unforeseen problems—such as messy or incomplete data, missing infrastructure, outdated IT systems, and a lack of in-house expertise—and never made it into production. Many of the projects that did go live failed to prove they were worth the time, money, or effort. One AI company founder told me that, based on his conversations with C-level executives, he believes the success rate of first-wave AI projects was less than 10%.

The global consulting firm Accenture recently published research on what separates the winners from the rest of the pack. The firm emphasizes the importance of 'thinking big'—that is, scaling AI systems aggressively across users and business functions—as well as securing executive buy-in, reskilling employees, and making significant investments in AI and cloud infrastructure. Accenture refers to companies that meet these criteria and see tangible results as 'front runners.' Yet Accenture's data shows that such companies are still in the minority. After surveying executives at nearly 2,000 companies with more than $1 billion in revenue, the firm found that only about one-third (34%) had made a long-term investment in a generative AI system focused on a core business function. 'Accenture's research revealed that a small minority of companies . . . are already achieving considerable success at reinventing their enterprises with gen AI,' the report states. It also found that among those surveyed, 15% are ready to 'reinvent' themselves with AI, 43% are 'progressing,' and another 43% are 'merely experimenting.'

Some companies may have been better off ignoring the early AI hype and waiting for the models, tools, and infrastructure to mature. On the other hand, there's something to be said for learning by doing—even if the first attempt falls short.

Google is putting AI models to work to protect against online and phone scams

Online and phone scams, some of them powered by generative AI tools, surged in 2024 and continue to rise. Now, Google is deploying some of its latest AI models to help protect users from these threats. One such model is Gemini Nano, a lightweight AI that can run directly on a user's device. When a Chrome user enables Enhanced Protection mode in Safe Browsing—the browser's highest security setting—the Nano model runs locally to scan web content for signs of fraud.
It can recognize common scam tactics, such as bad actors posing as remote technical support staff, a tactic Google says is becoming increasingly common. The model is also capable of detecting novel scams it hasn't encountered before.
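The context-window discussion above is ultimately arithmetic: text becomes tokens, and everything a model considers must fit in a fixed token budget. A minimal sketch of that budgeting, using the common but rough heuristic of about four characters per token; `estimate_tokens` and `fits_in_window` are illustrative helpers, not any vendor's API:

```python
# Illustrative context-window budgeting. Assumes the rough heuristic of
# ~4 characters per token; real tokenizers vary by model and language.

def estimate_tokens(text: str, chars_per_token: int = 4) -> int:
    """Crude token estimate from character count."""
    return max(1, len(text) // chars_per_token)

def fits_in_window(files, window_tokens: int) -> bool:
    """Would these source files fit in a single context window?"""
    total = sum(estimate_tokens(src) for src in files)
    return total <= window_tokens

# A toy "codebase" of two source files.
codebase = ["x = 1\n" * 1000, "def f():\n    return 2\n" * 500]

# Today's ~100,000-token windows vs. the 10-million-token windows
# Savinov anticipates:
print(fits_in_window(codebase, 100_000))
print(fits_in_window(codebase, 10_000_000))
```

By this rough estimate, a codebase only a few hundred thousand characters long already strains a 100,000-token window, which is why the 10-million- and 100-million-token windows the article mentions would change how much of a company's code these tools could ingest at once.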

Japan Times
29-01-2025
- Business
- Japan Times
Why blocking China's DeepSeek from using AI made in U.S. may be difficult
Top White House advisers this week expressed alarm that China's DeepSeek may have benefited from a method that allegedly piggybacks off the advances of U.S. rivals, called "distillation." The technique, which involves one AI system learning from another AI system, may be difficult to stop, according to executive and investor sources in Silicon Valley.

DeepSeek this month rocked the technology sector with a new AI model that appeared to rival the capabilities of U.S. giants like OpenAI, but at a much lower cost. And the China-based company gave away the code for free. Some technologists believe that DeepSeek's model may have learned from U.S. models to make some of its gains.

The distillation technique involves having an older, more established and powerful AI model evaluate the quality of the answers coming out of a newer model, effectively transferring the older model's learnings. That means the newer model can reap the benefits of the massive investments of time and computing power that went into building the initial model, without the associated costs. This form of distillation, which is different from how most academic researchers previously used the word, is a common technique in the AI field. However, it is a violation of the terms of service of some prominent models put out by U.S. tech companies in recent years, including OpenAI. The ChatGPT maker said that it knows of groups in China actively working to replicate U.S. AI models via distillation and is reviewing whether DeepSeek may have distilled its models inappropriately, a spokesperson said.

Naveen Rao, vice president of AI at San Francisco-based Databricks, which does not use the technique when terms of service prohibit it, said that learning from rivals is "par for the course" in the AI industry. Rao likened this to how automakers will buy and then examine one another's engines. "To be completely fair, this happens in every scenario. Competition is a real thing, and when it's extractable information, you're going to extract it and try to get a win," Rao said. "We all try to be good citizens, but we're all competing at the same time."

[Photo: The DeepSeek app icon on a mobile phone | REUTERS]

Howard Lutnick, President Donald Trump's nominee for Secretary of Commerce, who would oversee future export controls on AI technology, told the U.S. Senate during a confirmation hearing on Wednesday that it appeared DeepSeek had misappropriated U.S. AI technology, and he vowed to impose restrictions. "I do not believe that DeepSeek was done all above board. That's nonsense," Lutnick said. "I'm going to be rigorous in our pursuit of restrictions and enforcing those restrictions to keep us in the lead." David Sacks, the White House's AI and crypto czar, also raised concerns about DeepSeek distillation in a Fox News interview on Tuesday. DeepSeek did not immediately answer a request for comment on the allegations.

OpenAI added that it will work with the U.S. government to protect U.S. technology, though it did not detail how. "As the leading builder of AI, we engage in countermeasures to protect our IP, including a careful process for which frontier capabilities to include in released models," the company said in a statement.

The most recent round of concern in Washington about China's use of U.S. products to advance its tech sector is similar to previous concerns about the semiconductor industry, where the U.S. has imposed restrictions on what chips and manufacturing tools can be shipped to China and is examining restricting work on certain open technologies.

Technologists said blocking distillation may be harder than it looks. One of DeepSeek's innovations was showing that a relatively small number of data samples — fewer than 1 million — from a larger, more capable model could drastically improve the capabilities of a smaller model.
When popular products like ChatGPT have hundreds of millions of users, such small amounts of traffic could be hard to detect — and some models, such as Meta Platforms' Llama and French startup Mistral's offerings, can be downloaded freely and used in private data centers, meaning violations of their terms of service may be hard to spot. "It's impossible to stop model distillation when you have open-source models like Mistral and Llama. They are available to everybody. They can also find OpenAI's model somewhere through customers," said Umesh Padval, managing director at Thomvest Ventures.

The license for Meta's Llama model requires those using it for distillation to disclose that practice, a Meta spokesperson said. DeepSeek in a paper did disclose using Llama for some distilled versions of the models it released this month, but did not address whether it had ever used Meta's model earlier in the process. The Meta spokesperson declined to say whether the company believed DeepSeek had violated its terms of service.

One source familiar with the thinking at a major AI lab said the only way to stop firms like DeepSeek from distilling U.S. models would be stringent know-your-customer requirements, similar to how financial companies identify with whom they do business. But nothing like that is set in stone, the source said. The administration of former President Joe Biden had put forth such requirements, which President Donald Trump may not embrace. The White House did not immediately respond to a request for comment.

Jonathan Ross, chief executive of Groq, an AI computing company that hosts AI models in its cloud, has taken the step of blocking all Chinese IP addresses from accessing its cloud, to stop Chinese firms from allegedly piggybacking off the AI models it hosts. "That's not sufficient, because people can find ways to get around it," Ross said. "We have ideas that would allow us to prevent that, and it's going to be a cat and mouse game ... 
I don't know what the solution is. If anyone comes up with it, let us know, and we'll implement it."
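The distillation workflow the article describes reduces to a simple loop: query an established "teacher" model, keep its answers as supervised training pairs, and fine-tune a smaller "student" on them. A minimal sketch in Python; `teacher_answer` and the commented `student.fine_tune` call are hypothetical stand-ins, not any real model's API:

```python
# Illustrative skeleton of model distillation as described in the article:
# a teacher model's outputs become training data for a smaller student.
# teacher_answer() and the dataset format are hypothetical stand-ins.

def teacher_answer(prompt: str) -> str:
    # Stand-in for querying a large, established model.
    return f"answer-to:{prompt}"

def build_distillation_set(prompts):
    """Turn teacher outputs into (prompt, target) fine-tuning pairs."""
    return [(p, teacher_answer(p)) for p in prompts]

# The article notes that fewer than 1 million such samples can
# drastically improve a smaller model.
prompts = [f"question-{i}" for i in range(5)]
dataset = build_distillation_set(prompts)

for prompt, target in dataset:
    pass  # student.fine_tune(prompt, target)  <- hypothetical training step
```

The volume involved is what makes policing so hard: against ChatGPT's hundreds of millions of users, even a complete distillation dataset of under 1 million queries is a statistically invisible trickle of traffic.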


Reuters
29-01-2025
- Business
- Reuters
Focus: Why blocking China's DeepSeek from using US AI may be difficult
Jan 29 (Reuters) - Top White House advisers this week expressed alarm that China's DeepSeek may have benefited from a method that allegedly piggybacks off the advances of U.S. rivals, called "distillation." The technique, which involves one AI system learning from another AI system, may be difficult to stop, according to executive and investor sources in Silicon Valley.

DeepSeek this month rocked the technology sector with a new AI model that appeared to rival the capabilities of U.S. giants like OpenAI, but at much lower cost. And the China-based company gave away the code for free. Some technologists believe that DeepSeek's model may have learned from U.S. models to make some of its gains.

The distillation technique involves having an older, more established and powerful AI model evaluate the quality of the answers coming out of a newer model, effectively transferring the older model's learnings. That means the newer model can reap the benefits of the massive investments of time and computing power that went into building the initial model, without the associated costs. This form of distillation, which is different from how most academic researchers previously used the word, is a common technique in the AI field. However, it is a violation of the terms of service of some prominent models put out by U.S. tech companies in recent years, including OpenAI. The ChatGPT maker said that it knows of groups in China actively working to replicate U.S. AI models via distillation and is reviewing whether DeepSeek may have distilled its models inappropriately, a spokesperson told Reuters.

Naveen Rao, vice president of AI at San Francisco-based Databricks, which does not use the technique when terms of service prohibit it, said that learning from rivals is "par for the course" in the AI industry. Rao likened this to how automakers will buy and then examine one another's engines. "To be completely fair, this happens in every scenario. Competition is a real thing, and when it's extractable information, you're going to extract it and try to get a win," Rao said. "We all try to be good citizens, but we're all competing at the same time."

Howard Lutnick, President Donald Trump's nominee for Secretary of Commerce, who would oversee future export controls on AI technology, told the U.S. Senate during a confirmation hearing on Wednesday that it appeared DeepSeek had misappropriated U.S. AI technology, and he vowed to impose restrictions. "I do not believe that DeepSeek was done all above board. That's nonsense," Lutnick said. "I'm going to be rigorous in our pursuit of restrictions and enforcing those restrictions to keep us in the lead." David Sacks, the White House's AI and crypto czar, also raised concerns about DeepSeek distillation in a Fox News interview on Tuesday. DeepSeek did not immediately answer a request for comment on the allegations.

OpenAI added that it will work with the U.S. government to protect U.S. technology, though it did not detail how. "As the leading builder of AI, we engage in countermeasures to protect our IP, including a careful process for which frontier capabilities to include in released models," the company said in a statement.

The most recent round of concern in Washington about China's use of U.S. products to advance its tech sector is similar to previous concerns about the semiconductor industry, where the U.S. has imposed restrictions on what chips and manufacturing tools can be shipped to China and is examining restricting work on certain open technologies.

NEEDLE IN A HAYSTACK

Technologists said blocking distillation may be harder than it looks. One of DeepSeek's innovations was showing that a relatively small number of data samples - fewer than one million - from a larger, more capable model could drastically improve the capabilities of a smaller model.

When popular products like ChatGPT have hundreds of millions of users, such small amounts of traffic could be hard to detect - and some models, such as Meta Platforms' (META.O) Llama and French startup Mistral's offerings, can be downloaded freely and used in private data centers, meaning violations of their terms of service may be hard to spot. "It's impossible to stop model distillation when you have open-source models like Mistral and Llama. They are available to everybody. They can also find OpenAI's model somewhere through customers," said Umesh Padval, managing director at Thomvest Ventures.

The license for Meta's Llama model requires those using it for distillation to disclose that practice, a Meta spokesperson told Reuters. DeepSeek in a paper did disclose using Llama for some distilled versions of the models it released this month, but did not address whether it had ever used Meta's model earlier in the process. The Meta spokesperson declined to say whether the company believed DeepSeek had violated its terms of service.

One source familiar with the thinking at a major AI lab said the only way to stop firms like DeepSeek from distilling U.S. models would be stringent know-your-customer requirements, similar to how financial companies identify with whom they do business. But nothing like that is set in stone, the source said. The administration of former President Joe Biden had put forth such requirements, which President Donald Trump may not embrace. The White House did not immediately respond to a request for comment.

Jonathan Ross, chief executive of Groq, an AI computing company that hosts AI models in its cloud, has taken the step of blocking all Chinese IP addresses from accessing its cloud, to stop Chinese firms from allegedly piggybacking off the AI models it hosts. "That's not sufficient, because people can find ways to get around it," Ross said. 
"We have ideas that would allow us to prevent that, and it's going to be a cat and mouse game ... I don't know what the solution is. If anyone comes up with it, let us know, and we'll implement it."