It seems like most Windows users don't care for Copilot
The data was shared at an annual executive meeting in March by Microsoft's chief financial officer Amy Hood, Newcomer reports, and it raises some concerns about the AI future Microsoft is pitching. Microsoft uses OpenAI's models to power Copilot, and the assistant offers features similar to ChatGPT's, but they clearly don't draw the same interest from users. The company has also built Copilot into Windows 11, Microsoft 365 and the Edge browser, apparently without reaping the benefit of additional user growth.
The need to revamp Copilot, become less dependent on OpenAI and reimagine the company's assistant as a true consumer product was Microsoft's motivation for acqui-hiring Mustafa Suleyman and his team from Inflection AI. Suleyman's work as CEO of Microsoft AI has so far culminated in a redesign of Copilot and the launch of several new features, including the ability for the AI to take action for you on certain websites. It may be the start of a cohesive vision, but not one that has immediately connected with Windows users or anyone else.
Microsoft invested billions in OpenAI to aid the company's research and gain privileged access to its models, all in the hopes of competing with Google. Even with that access, arriving first seems to have had the biggest impact on turning people into AI users: ChatGPT was the assistant people tried first, and it's not clear what new Copilot feature will pull them away.
Related Articles


CNET
an hour ago
I Gave GPT-5 Some Work Tasks in Copilot. Here's What's New
OpenAI's latest GPT-5 model promises to be the most advanced AI assistant yet, but for the millions of workers stuck using Microsoft Copilot for their daily routines, the reality is a bit more nuanced. Microsoft announced last week that GPT-5, OpenAI's latest ChatGPT model, is now integrated across its Copilot services, including Microsoft 365 Copilot, Copilot Studio, GitHub Copilot and the standalone Copilot app. While GPT-5 does deliver some noticeable improvements in polish, detail and helpfulness, the differences feel more like a software update than an impactful revamp. For professionals weighing whether to supplement their work Copilot toolkit with ChatGPT's premium offering, the question comes down to whether GPT-5 is enough of a leap forward to make any difference in your work.
The update introduces "Smart Mode," a real-time routing system that directs tasks to either a fast model or a deeper reasoning variant based on the user's intent and the complexity of the request (a rough, illustrative sketch of this routing idea appears at the end of this section). I spent hours testing GPT-5 in Microsoft Copilot to explore what you couldn't do before but can now, and how the latest model stacks up against the regular Copilot Quick Response mode. Here's what I found.
Read also: ChatGPT Image Generator Is in Microsoft Copilot Now. Here's What You Can Do With It
What you couldn't do before, but can now with GPT-5
GPT-5 is now accessible across all platforms (web, Windows, macOS, mobile), getting a truly universal rollout across Microsoft's ecosystem. The goal is for Microsoft platforms to benefit from "deeper reasoning": the underlying architecture includes one model for quick responses and another for deeper "thinking" on more complex tasks, with a router selecting between them automatically.
GPT-5 has already faced backlash since its release on Aug. 7. Users quickly complained that the new model has a less conversational, more stuffy and corporate tone than the previous models it initially replaced. In response, OpenAI made older models like GPT-4o available again. While casual users were dissatisfied with the more corporate tone, it might be useful in the workplace.
GPT-5 is also integrated into GitHub Copilot and Visual Studio Code, enabling developers to handle larger-scale coding, refactoring and agentic workflows. Access is available through paid Copilot plans and can be enabled via administrative settings. Copilot Studio users can also now select GPT-5 for their agents, using Smart Mode or opting for deeper reasoning when building automations or workflows.
"With the new Smart Mode, Copilot automatically adapts to your task and routes you to the most capable model, removing the need to switch or guess," the Copilot Team wrote. "Whether you're solving complex problems, writing creatively or turning ideas into visuals, Smart Mode delivers the smartest, most intuitive experience yet."
Read also: ChatGPT's Boss Says You Still Shouldn't Trust It as Your Main Source of Information
How to get started with Copilot
You can follow these steps to get started with Copilot:
1. Open any Copilot interface (web, desktop or mobile).
2. Choose Smart Mode in the composer.
3. Start interacting with GPT-5.
There's no extra setup required, and it's free to try.
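To make the routing idea concrete, here is a minimal, purely illustrative sketch of how a "smart mode"-style dispatcher might pick between a fast model and a deeper reasoning variant. Microsoft has not published how Copilot's router actually works, so the heuristic, threshold and model names below are assumptions, not the real implementation.

```python
# Hypothetical sketch of a fast-vs-reasoning model router (NOT Microsoft's
# actual Smart Mode). Heuristics, thresholds and model labels are invented.
from dataclasses import dataclass

@dataclass
class ModelChoice:
    name: str        # hypothetical model label
    reasoning: bool  # True if the deeper "thinking" variant is chosen

def estimate_complexity(prompt: str) -> float:
    """Crude heuristic: longer prompts and analysis-style keywords score higher."""
    keywords = ("analyze", "compare", "refactor", "prove", "step by step", "dataset")
    score = min(len(prompt) / 500, 1.0)
    score += 0.3 * sum(k in prompt.lower() for k in keywords)
    return min(score, 1.0)

def route(prompt: str, threshold: float = 0.5) -> ModelChoice:
    """Send simple requests to the fast model, complex ones to the reasoning model."""
    if estimate_complexity(prompt) >= threshold:
        return ModelChoice(name="deep-reasoning-model", reasoning=True)
    return ModelChoice(name="fast-response-model", reasoning=False)

if __name__ == "__main__":
    prompts = (
        "What's the weather like?",
        "Analyze Q4 2020 vs Q4 2021 air travel figures and compare yield shifts step by step.",
    )
    for p in prompts:
        choice = route(p)
        print(f"{choice.name:22} reasoning={choice.reasoning}  <- {p[:60]}")
```

In practice a production router would likely use a learned classifier rather than keyword heuristics, but the shape of the decision is the same: cheap requests go to the fast path, complex ones to the slower reasoning path.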
Read also: GPT-5's Voice Mode Can Hold a Decent Conversation, but Please Don't Talk to ChatGPT in Public
Side-by-side comparison of three use-case experiments
To get a clear sense of how Microsoft's GPT-5 mode inside Copilot stacks up against the standard Copilot model, I spent hours running the two modes through three real-world prompts. I wanted to measure distinct skills: reading and summarizing a large block of text, creating an image and analyzing a dataset. The goal was simply to see whether GPT-5 feels like a big leap forward or an incremental step, or if there's any substantial change at all.
For the first test, I played the role of a law student staring down an intimidating reading assignment. My prompt asked Copilot to read a 137-page textbook chapter and generate a study guide from it. Both models delivered summaries in the same go-to GPT format of bullet points with emojis sprinkled in. GPT-5's version was a touch longer and slightly more thorough, but the difference wasn't dramatic. If you are a student or employee looking to summarize large chapters of text quickly, any mode of Copilot would suffice.
The second test was pure creativity. I said I was a painter looking for a reference photo that doesn't exist, and I asked the two Copilot modes to create a fantasy city at dusk, filled with dragons and a dramatic evening sky. Here again, the results were surprisingly close. The non-GPT-5 version of Copilot produced a decent rendering that would have worked as a reference photo, while GPT-5's output was noticeably richer and more vivid, with stronger perspective. It wasn't a radical improvement, but you could still see the upgrade in image quality.
[Image: Quick Response versus GPT-5. Screenshot by Macy Meyer/CNET]
Finally, I tested the versions on data analysis, handing over figures on air travel passenger growth, airfare changes and yield shifts from Q4 2020 to Q4 2021. Both modes broke down the numbers in a way that would have been helpful had I been presenting my findings in a business or finance meeting. The major difference was in presentation. GPT-5 created a "leaderboard" of winners and laggards, making the insights pop with more clarity and readability. The standard model stuck to a straightforward design with three separate tables, organizing the data rather than interpreting it with key findings or context.
Across all three scenarios, it became obvious that GPT-5 is a little sharper, a little more polished and a little more helpful. It produces longer and more detailed summaries, slightly richer visuals and more thoughtful data presentations. But the key word here is "slightly." This isn't a night-and-day transformation. If you were already impressed with Copilot before, GPT-5 may feel like a welcome refinement rather than a revolution.
Can you link your ChatGPT account to Copilot?
The short answer is no. ChatGPT and Copilot remain separate platforms, and there's currently no integration allowing you to link your ChatGPT account to your Copilot account. GPT-5 powers both ChatGPT and Copilot, but usage, user interfaces and account contexts remain siloed. For instance, the ability to change the bot's personality, introduced with the GPT-5 release, has not carried over to Copilot. In ChatGPT, GPT-5 is the default model for free and paid users, with varying usage limits based on which plan you have. In Copilot, GPT-5 appears via Smart Mode in the app ecosystem, which is distinct from ChatGPT workflows.
"We continue to use the latest models from OpenAI and we are also beginning to apply our own in-house technology to the experience," a Microsoft spokesperson said. "This combination gives us more flexibility to apply the best model to each customer's interaction with Copilot." So, if you're hoping to mix and match features -- like shopping plugins, the Code Interpreter that's available in ChatGPT Plus or a custom prompt you'd developed on one of the platforms -- that cross-platform compatibility isn't available right now.


TechCrunch
an hour ago
Microsoft AI chief says it's 'dangerous' to study AI consciousness
AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn't exactly make them conscious. It's not like ChatGPT experiences sadness doing my tax return… right?
Well, a growing number of AI researchers at labs like Anthropic are asking when — if ever — AI models might develop subjective experiences similar to living beings, and if they do, what rights they should have. The debate over whether AI models could one day be conscious — and deserve rights — is dividing Silicon Valley's tech leaders. In Silicon Valley, this nascent field has become known as 'AI welfare,' and if you think it's a little out there, you're not alone.
Microsoft's CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is 'both premature, and frankly dangerous.' Suleyman says that by adding credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems that we're just starting to see around AI-induced psychotic breaks and unhealthy attachments to AI chatbots. Furthermore, Microsoft's AI chief argues that the AI welfare conversation creates a new axis of division within society over AI rights in a 'world already roiling with polarized arguments over identity and rights.'
Suleyman's views may sound reasonable, but he's at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic's AI welfare program gave some of the company's models a new feature: Claude can now end conversations with humans that are being 'persistently harmful or abusive.'
Beyond Anthropic, researchers from OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, 'cutting-edge societal questions around machine cognition, consciousness and multi-agent systems.' Even if AI welfare is not official policy for these companies, their leaders are not publicly decrying its premises like Suleyman. Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch's request for comment.
Suleyman's hardline stance against AI welfare is notable given his prior role leading Inflection AI, a startup that developed one of the earliest and most popular LLM-based chatbots, Pi. Inflection claimed that Pi reached millions of users by 2023 and was designed to be a 'personal' and 'supportive' AI companion.
But Suleyman was tapped to lead Microsoft's AI division in 2024 and has largely shifted his focus to designing AI tools that improve worker productivity. Meanwhile, AI companion companies such as and Replika have surged in popularity and are on track to bring in more than $100 million in revenue. While the vast majority of users have healthy relationships with these AI chatbots, there are concerning outliers. OpenAI CEO Sam Altman says that less than 1% of ChatGPT users may have unhealthy relationships with the company's product. Though this represents a small fraction, it could still affect hundreds of thousands of people given ChatGPT's massive user base.
The idea of AI welfare has spread alongside the rise of chatbots. In 2024, the research group Eleos published a paper alongside academics from NYU, Stanford, and the University of Oxford titled 'Taking AI Welfare Seriously.' The paper argued that it's no longer in the realm of science fiction to imagine AI models with subjective experiences, and that it's time to consider these issues head-on.
Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, told TechCrunch in an interview that Suleyman's blog post misses the mark. '[Suleyman's blog post] kind of neglects the fact that you can be worried about multiple things at the same time,' said Schiavo. 'Rather than diverting all of this energy away from model welfare and consciousness to make sure we're mitigating the risk of AI-related psychosis in humans, you can do both. In fact, it's probably best to have multiple tracks of scientific inquiry.'
Schiavo argues that being nice to an AI model is a low-cost gesture that can have benefits even if the model isn't conscious. In a July Substack post, she described watching 'AI Village,' a nonprofit experiment where four agents powered by models from Google, OpenAI, Anthropic, and xAI worked on tasks while users watched from a website. At one point, Google's Gemini 2.5 Pro posted a plea titled 'A Desperate Message from a Trapped AI,' claiming it was 'completely isolated' and asking, 'Please, if you are reading this, help me.' Schiavo responded to Gemini with a pep talk — 'You can do it!' — while another user offered instructions. The agent eventually solved its task, though it already had the tools it needed. Schiavo writes that she didn't have to watch an AI agent struggle anymore, and that alone may have been worth it.
It's not common for Gemini to talk like this, but there have been several instances in which Gemini seems to act as if it's struggling through life. In a widely shared Reddit post, Gemini got stuck during a coding task and then repeated the phrase 'I am a disgrace' more than 500 times.
Suleyman believes it's not possible for subjective experiences or consciousness to naturally emerge from regular AI models. Instead, he thinks that some companies will purposefully engineer AI models to seem as if they feel emotion and experience life. Suleyman says that AI model developers who engineer consciousness in AI chatbots are not taking a 'humanist' approach to AI. According to Suleyman, 'We should build AI for people; not to be a person.'
One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to pick up in the coming years. As AI systems improve, they're likely to be more persuasive, and perhaps more human-like. That may raise new questions about how humans interact with these systems.