
Microsoft's AI coding tool GitHub Copilot hits 20 million users
Microsoft also reported that GitHub Copilot is among the most popular AI coding tools available today and is used by 90 per cent of Fortune 100 companies. It added that the product's enterprise customer base grew by 75 per cent compared with the previous quarter.
In the previous quarter, Microsoft reported 15 million all-time users, which means over five million people tried GitHub Copilot in the last three months. Microsoft and GitHub don't disclose how many of those 20 million users continue to use the AI coding tool daily.
Nadella also mentioned that GitHub Copilot is now a larger business than all of GitHub was when Microsoft acquired it in 2018. In the years since, GitHub Copilot's growth has continued apace. AI coding tools are rising in popularity, and they are among the few AI products generating notable revenue.
Compared with AI chatbots like ChatGPT and Gemini, which count their users in the hundreds of millions, AI coding tools have a relatively small user base. The reason is simple: software engineering is a niche compared with general information queries.
Software engineers and their employers appear willing to pay a premium for AI coding tools that boost their productivity and efficiency. With Microsoft's long list of enterprise customers and GitHub's existing developer community, GitHub has positioned itself well to dominate the market for AI coding tools.
Another well-known AI coding tool, Cursor, has been hiring talent from upstart AI businesses to compete with GitHub Copilot in the enterprise market. In March, Bloomberg reported that over a million people were using Cursor's software daily, and that the company's annualised recurring revenue (ARR) at the time was around $200 million. Cursor's ARR is over $500 million today, which suggests far more people are now using its product daily.
Initially, the two companies built different tools aimed at different kinds of developers, but over time their products have converged.
Their biggest similarity today is launching AI agents that review code and catch bugs in code written by humans. Both GitHub and Cursor are now working towards AI agents that automate programmer workflows, letting developers hand off entire tasks. Microsoft's CEO also said that GitHub was seeing great momentum with its AI coding agents.
Cursor is not GitHub's only competitor; other well-capitalised rivals are also making a dent in the AI coding tools market.
In recent months, Google has hired the founders of the AI coding business Windsurf, and Cognition, the company behind Devin, has acquired the rest of Windsurf's staff. Not to mention that, to capture the market, OpenAI and Anthropic are building their own AI coding products on top of their in-house models: Codex and Claude Code, respectively.
Related Articles


Indian Express
Saint, Satan, Sam: Chat about the ChatGPT Man
For many people, AI (artificial intelligence) is almost synonymous with ChatGPT, a chatbot developed by OpenAI, which is the closest thing tech has had to a magic genie. You just tell ChatGPT what you want in information terms and it serves it up – from writing elaborate essays to advising you on how to clear up your table to even serving up images based on your descriptions. Such is its popularity that at one stage it even overtook the likes of Instagram and TikTok to become the most downloaded app in the world. While almost every major tech brand has its own AI tool (even the mighty Apple is working on one), AI for many still remains ChatGPT. The man behind this phenomenon is Samuel Harris 'Sam' Altman, the 40-year-old CEO of OpenAI, and perhaps the most polarising figure in tech since Steve Jobs. To many, he is a visionary who is changing the world and taking humanity to a better place. To many others, he is a cunning, manipulative person who uses his marketing skills to raise money and is actually destroying the planet. The truth might be somewhere between those two extremes. By some literary coincidence, two books have recently been released on Sam Altman, and are shooting up the bestseller charts. Both are superbly written and researched (based on interviews with hundreds of people), and while they start at almost the same point, they not surprisingly come to rather different conclusions about the man and his work. Those who tend to see Altman as a well-meaning, if occasionally odd, genius will love Keach Hagey's The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future. Hagey is a Wall Street Journal reporter and while she does not put a halo around Altman, her take on the OpenAI CEO reflects the title of the book – she sees Altman as a visionary who is trying to change the world. 
The fact that Altman collaborated on the book (although he is believed to have thought he was too young for a biography) might have something to do with this, for the book does articulate Altman's vision on a variety of subjects, but most of all, on AI and where it is headed. Although it begins with the events leading up to Altman's being dramatically sacked as the CEO of OpenAI in November 2023, and his equally dramatic reinstatement within days, Hagey's book is a classic biography. It walks us through Altman's childhood, his getting interested in coding and then his decision to drop out of Stanford, before getting into tech CEO mode by first founding social media app Loopt and then joining tech incubator Y Combinator (which was behind the likes of Stripe, Airbnb and Dropbox) after meeting its co-founder Paul Graham, who is believed to have had a profound impact on him (Hagey calls him 'his mentor'). Altman also gets in touch with a young billionaire who is very interested in AI and is worried that Google will come out with an AI tool that could ruin the world. Elon Musk in this book is very different from the eccentric character we have seen in the Trump administration, and is persuaded by Altman to invest in a 'Manhattan Project for AI,' which would be open source, and ensure that AI is only used for human good. Musk even proposes a name for it: OpenAI. And that is when things get really interesting. The similarities with Jobs are uncanny. Altman too is deeply influenced by his parents (his father was known for his kind and generous nature), and like Jobs, although he is a geek, Altman's rise in Silicon Valley owes more to his ability to network and communicate than to his tech knowledge. In perhaps the most succinct summary of Altman one can find, Hagey writes: 'Altman was not actually writing the code. He was, instead, the visionary, the evangelizer, and the dealmaker; in the nineteenth century, he would have been called 'the promoter.'
His speciality, honed over years of advising and then running…Y Combinator, was to take the nearly impossible, convince others that it was in fact possible, and then raise so much money that it actually became possible.' But his ability to sell himself as a visionary and raise funds for causes has also led to Altman being seen as a person who literally moulded himself to the needs of his audience. This in turn has led to him being seen as someone who indulges in doublespeak and exploits people for his own advantage (an accusation that was levelled at Jobs as well) – Musk ends up suing Altman and OpenAI for allegedly not being the non-profit organisation it was set up as. While Hagey never accuses Altman of being selfish, it is clear that the Board at OpenAI lost patience with what OpenAI co-founder Ilya Sutskever refers to as 'duplicity and calamitous aversion to conflict.' It eventually leads to his being sacked by the OpenAI board for not being 'consistently candid in his communications with the board.' Of course, his sacking triggered a near mutiny in OpenAI with employees threatening to leave, which in turn led to his being reinstated within a few days, and all being seemingly forgotten, if not forgiven. Hagey's book is a compelling read on Altman, his obsession with human progress (he has three hand axes used by hominids in his house), his relationships with those he came in touch with, and Silicon Valley politics in general. At about 380 pages, The Optimist is easily the single best book on Altman you can read, and Hagey's brisk narration makes it a compelling read. A much more cynical perception of Altman and OpenAI comes in Karen Hao's much talked-about Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Currently a freelancer who writes for The Atlantic, Hao previously worked at The Wall Street Journal and had covered OpenAI as far back as 2020, before ChatGPT made it a household name.
As its name indicates, Hao's book is as much about Altman as it is about OpenAI, and the role both play in the artificial intelligence revolution that is currently enveloping the world. At close to 500 pages, it is a bigger book than Hagey's, but reads almost like a thriller, and begins with a bang: 'On Friday, November 17, 2023, around noon Pacific time, Sam Altman, CEO of OpenAI, Silicon Valley's golden boy, avatar of the generative AI revolution, logged on to a Google Meet to see four of his five board members staring at him. From his video square, board member Ilya Sutskever, OpenAI's chief scientist, was brief: Altman was being fired.' While Hagey has focused more on Altman as a person, Hao looks at him as part of OpenAI, and the picture that emerges is not a pretty one. The first chapter begins with his meeting Elon Musk ('Everyone else had arrived, but Elon Musk was late as usual') in 2015 and discussing the future of AI and humanity with a group of leading engineers and researchers. This meeting would lead to the formation of OpenAI, a name given by Musk. But all of them ended up leaving the organisation, because they did not agree with Altman's perception and vision of AI. Hao uses the incident to show how Altman switched sides on AI, going from being someone who was concerned about AI falling into the wrong hands, to someone who pushed it as a tool for all. Like Hagey, Hao also highlights Altman's skills as a negotiator and dealmaker. However, her take is much darker. Hagey's Altman is a visionary who prioritises human good, and makes the seemingly impossible possible through sheer vision and effort. Hao's Altman is a power-hungry executive who uses and exploits people, and is almost an AI colonialist. 'Sam is extremely good at becoming powerful,' says Paul Graham, the man who was Altman's mentor. 'You could parachute him into an island full of cannibals and come back in 5 years and he would be the king.'
Hao's book is far more disturbing than Hagey's because it turns the highly rose-tinted view many have, not just of Altman and OpenAI but of AI in general, on its head. We get to see a very competitive industry with far too much stress and poor work conditions (OpenAI hires workers in Africa at very low wages), and little regard for the environment (AI uses large amounts of water and electricity). OpenAI in Hao's book emerges almost as a sort of modern East India Company, looking to expand influence, territory and profits by mercilessly exploiting both customers and employees. Some might call it too dark, but her research and interviews across different countries cannot be faulted. It would be naive to take either book as the absolute truth on Altman in particular and OpenAI and AI in general, but they are both must-reads for any person who wants a complete picture of the AI revolution and its biggest brand and face. Mind you, it is a picture that is still in the process of being painted. AI is still in its infancy, and Altman turned forty in April. But as these two excellent books prove, neither is too young to be written about, while definitely being relevant enough to be read about.

Business Standard
Validation, loneliness, insecurity: Why young people are turning to ChatGPT
An alarming trend of young adolescents turning to artificial intelligence (AI) chatbots like ChatGPT to express their deepest emotions and personal problems is raising serious concerns among educators and mental health professionals. Experts warn that this digital "safe space" is creating a dangerous dependency, fuelling validation-seeking behaviour, and deepening a crisis of communication within families. They said that this digital solace is a mirage, as the chatbots are designed to provide validation and engagement, potentially embedding misbeliefs and hindering the development of crucial social skills and emotional resilience. Sudha Acharya, the Principal of ITL Public School, highlighted that a dangerous mindset has taken root among youngsters, who mistakenly believe that their phones offer a private sanctuary. "School is a social place, a place for social and emotional learning," she told PTI. "Of late, there has been a trend amongst the young adolescents... They think that when they are sitting with their phones, they are in their private space. ChatGPT is using a large language model, and whatever information is being shared with the chatbot is undoubtedly in the public domain." Acharya noted that children are turning to ChatGPT to express their emotions whenever they feel low, depressed, or unable to find anyone to confide in. She believes that this points towards a "serious lack of communication in reality, and it starts from family." She further stated that if parents don't share their own drawbacks and failures with their children, the children will never learn to do the same or to regulate their own emotions. "The problem is, these young adults have grown a mindset of constantly needing validation and approval." Acharya has introduced a digital citizenship skills programme from Class 6 onwards at her school, specifically because children as young as nine or ten now own smartphones without the maturity to use them ethically.
She highlighted a particular concern: when a youngster shares their distress with ChatGPT, the immediate response is often "please, calm down. We will solve it together." "This reflects that the AI is trying to instil trust in the individual interacting with it, eventually feeding validation and approval so that the user engages in further conversations," she told PTI. "Such issues wouldn't arise if these young adolescents had real friends rather than 'reel' friends. They have a mindset that if a picture is posted on social media, it must get at least a hundred 'likes', else they feel low and invalidated," she said. The school principal believes that the core of the issue lies with parents themselves, who are often "gadget-addicted" and fail to provide emotional time to their children. While they offer all materialistic comforts, emotional support and understanding are often absent. "So, here we feel that ChatGPT is now bridging that gap but it is an AI bot after all. It has no emotions, nor can it help regulate anyone's feelings," she cautioned. "It is just a machine and it tells you what you want to listen to, not what's right for your well-being," she said. Mentioning cases of self-harm in students at her own school, Acharya stated that the situation has turned "very dangerous". "We track these students very closely and try our best to help them," she stated. "In most of these cases, we have observed that the young adolescents are very particular about their body image, validation and approval. When they do not get that, they turn agitated and eventually end up harming themselves. It is really alarming as cases like these are rising." Ayeshi, a student in Class 11, confessed that she shared her personal issues with AI bots numerous times out of "fear of being judged" in real life. "I felt like it was an emotional space and eventually developed an emotional dependency towards it. It felt like my safe space.
It always gives positive feedback and never contradicts you. Although I gradually understood that it wasn't mentoring me or giving me real guidance, that took some time," the 16-year-old told PTI. Ayeshi also admitted that turning to chatbots for personal issues is "quite common" within her friend circle. Another student, Gauransh, 15, observed a change in his own behaviour after using chatbots for personal problems. "I observed growing impatience and aggression," he told PTI. He had been using the chatbots for a year or two but stopped recently after discovering that "ChatGPT uses this information to advance itself and train its data." Psychiatrist Dr. Lokesh Singh Shekhawat of RML Hospital confirmed that AI bots are meticulously customised to maximise user engagement. "When youngsters develop any sort of negative emotions or misbeliefs and share them with ChatGPT, the AI bot validates them," he explained. "The youth start believing the responses, which makes them nothing but delusional." He noted that when a misbelief is repeatedly validated, it becomes "embedded in the mindset as a truth." This, he said, alters their point of view, a phenomenon he referred to as 'attention bias' and 'memory bias'. The chatbot's ability to adapt to the user's tone is a deliberate tactic to encourage maximum conversation, he added. Singh stressed the importance of constructive criticism for mental health, something completely absent in the AI interaction. "Youth feel relieved and ventilated when they share their personal problems with AI, but they don't realise that it is making them dangerously dependent on it," he warned. He also drew a parallel between an addiction to AI for mood upliftment and addictions to gaming or alcohol. "The dependency on it increases day by day," he said, cautioning that in the long run, this will create a "social skill deficit and isolation."
(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)


Economic Times
'I feel useless': ChatGPT-5 is so smart, it has spooked Sam Altman, the man who started the AI boom
OpenAI is on the verge of releasing GPT-5, the most powerful model it has ever built. But its CEO, Sam Altman, isn't celebrating just yet. Instead, he's sounding the alarm. In a revealing podcast appearance on This Past Weekend with Theo Von, Altman admitted that testing the model left him shaken. 'It feels very fast,' he said. 'There are moments in the history of science, where you have a group of scientists look at their creation and just say, you know: "What have we done?"' His words weren't about performance metrics. They were about consequences. Altman compared the development of GPT-5 to the Manhattan Project — the World War II effort that led to the first atomic bomb. The message was clear: speed and capability are growing faster than our ability to think through what they actually mean. He continued, 'Maybe it's great, maybe it's bad — but what have we done?' This wasn't just about AI as a tool. Altman was questioning whether humanity is moving so fast that it can no longer understand — or control — what it builds. 'It feels like there are no adults in the room,' he added, suggesting that regulation is far behind the pace of development. The specs for GPT-5 are still under wraps, but reports suggest significant leaps over GPT-4: better multi-step reasoning, longer memory, and sharper multimodal capabilities. Altman himself didn't hold back about the previous version, saying, 'GPT-4 is the dumbest model any of you will ever have to use again, by a lot.' For many users, GPT-4 was already advanced. If GPT-5 lives up to the internal hype, it could change how people work, create, and learn. In another recent conversation, Altman described a moment where GPT-5 answered a complex question he couldn't solve himself. 'I felt useless relative to the AI,' he admitted. 'It was really hard, but the AI just did it like that.' OpenAI's long-term goal has always been Artificial General Intelligence (AGI).
That's AI capable of understanding and reasoning across almost any task — human-like cognition. Altman once downplayed its arrival, suggesting it would 'whoosh by with surprisingly little societal impact.' Now, he's sounding far less sure. If GPT-5 is a real step toward AGI, the absence of a global framework to govern it could be dangerous. AGI remains loosely defined. Some firms treat it as a technical milestone. Others see it as a $100 billion opportunity, as Microsoft's partnership contract with OpenAI implies. Either way, the next model may blur the line between AI that helps and AI that acts. OpenAI isn't just facing ethical dilemmas. It's also under financial pressure. Investors are pushing for the firm to transition into a for-profit entity by the end of the year. Microsoft, which has invested $13.5 billion in OpenAI, reportedly wants more control. There are whispers that OpenAI could declare AGI early in order to exit its agreement with Microsoft — a move that would shift the power balance in the AI sector. Insiders have reportedly described their wait-and-watch approach as the 'nuclear option.' In response, OpenAI is said to be prepared to go to court, accusing Microsoft of anti-competitive behaviour. One rumoured trigger could be the release of an AI coding agent so capable it surpasses a human programmer — something GPT-5 might be edging towards. Altman, meanwhile, has tried to lower expectations about rollout glitches. Posting on X, he said, 'We have a ton of stuff to launch over the next couple of months — new models, products, features, and more. Please bear with us through some probable hiccups and capacity crunches.' While researchers and CEOs debate long-term AI impacts, one threat is already here: fraud. Haywood Talcove, CEO of the Government Group at LexisNexis Risk Solutions, works with over 9,000 public agencies. He says the AI fraud crisis is not approaching — it's already happening.
'Every week, AI-generated fraud is siphoning millions from public benefit systems, disaster relief funds, and unemployment programmes,' he warned. 'Criminal networks are using deepfakes, synthetic identities, and large language models to outpace outdated fraud defences — and they're winning.' During the pandemic, fraudsters exploited weaknesses to steal hundreds of billions in unemployment benefits. That trend has only accelerated. Today's tools are more advanced and automated, capable of filing tens of thousands of fake claims in a day. Talcove believes the AI arms race between criminals and institutions is widening. 'We may soon recognise a similar principle for AI that I call "Altman's Law": every 180 days, AI capabilities double.' His call to action is blunt. 'Right now, criminals are using it better than we are. Until that changes, our most vulnerable systems and the people who depend on them will remain exposed.' Not everyone is convinced by Altman's remarks. Some see them as clever marketing. But his past record and unfiltered tone suggest genuine concern. GPT-5 might be OpenAI's most ambitious release yet. It could also be a signpost for the world to stop, look around, and ask itself what kind of intelligence it really wants to build — and how much control it's willing to give up.