
Microsoft Copilot's new 'Smart' mode may be powered by OpenAI's GPT-5
Ahead of GPT-5's launch, Microsoft Copilot is now reportedly rolling out a new 'Smart' mode for select users that may be powered by OpenAI's upcoming large language model.
First spotted by TestingCatalog, the new Smart mode is available for select users on both the Copilot app and website and can be accessed by clicking on the model selector that appears just below the text field. According to Microsoft, the new Smart mode enables Copilot to 'think deeply or quickly based on the task.'
Since when is 'Smart Mode' a thing in Copilot??
— Bughunter Nerd 🌎🌍🌏☮️❤️ (@bughunternerd) July 30, 2025
Copilot's much-rumoured GPT-5-powered Smart mode will adapt its response to the user's query, automatically switching between available large language models so users don't have to. Microsoft's description of Copilot's Smart mode also suggests that GPT-5 will have both reasoning and non-reasoning capabilities, something OpenAI CEO Sam Altman has hinted at in recent months.
The upcoming AI model has been under development for some time, with several reports claiming that the company has pushed back its launch date, citing safety reasons. GPT-5 is expected to roll out sometime in August and come with different intelligence modes, but it is still unclear which level of intelligence Copilot will use.
Microsoft has been shipping Copilot with all new Windows-powered devices, and while it does not enjoy the same level of popularity as ChatGPT, the tech giant wants to make it the easiest generative AI assistant to use. And while reports have hinted that executives at Sam Altman's company have weighed accusations of anticompetitive practices against the tech giant, Microsoft's partnership with ChatGPT maker OpenAI has only grown stronger over the past few years.
Related Articles


NDTV
Godfather Of AI Warns Technology Could Invent Its Own Language: 'It Gets Scary...'
Geoffrey Hinton, regarded by many as the 'godfather of artificial intelligence' (AI), has warned that the technology could get out of hand if chatbots manage to develop their own language. Currently, AI does its thinking in English, allowing developers to track what the technology is doing, but there could come a point where humans might not understand what AI is planning, according to Mr Hinton. "Now it gets more scary if they develop their own internal languages for talking to each other," he said on an episode of the "One Decision" podcast that aired last month. "I wouldn't be surprised if they developed their own language for thinking, and we have no idea what they're thinking."

Mr Hinton added that AI has already demonstrated that it can think terrible thoughts, and it is not unthinkable that machines could eventually think in ways that humans cannot track or interpret.

Warning about AI

Mr Hinton laid the foundations for the machine learning that powers today's AI-based products and applications. However, the Nobel laureate grew wary of AI's future development and cut ties with his employer, Google, in order to speak more freely on the issue. "It will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us," said Mr Hinton at the time. "I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control."

Mr Hinton has been a big advocate of government regulation of the technology, especially given the unprecedented pace of development. His warning also comes against the backdrop of repeated instances of AI chatbots hallucinating. In April, OpenAI's internal tests revealed that its o3 and o4-mini AI models were hallucinating, or making things up, much more frequently than even non-reasoning models such as GPT-4o.
The company said it did not have any idea why this was happening. In a technical report, OpenAI said, "more research is needed" to understand why hallucinations are getting worse as it scales up its reasoning models.


Time of India
Anthropic CEO throws shade at Mark Zuckerberg's billion-dollar AI talent hunt with dartboard dig: 'You can't buy purpose with a paycheck'
In the escalating turf war for top AI talent, Anthropic CEO Dario Amodei has delivered a pointed, and slightly humorous, critique of Meta's aggressive recruitment tactics. Speaking on the Big Technology Podcast, Amodei painted a vivid picture: "If Mark Zuckerberg throws a dart at a dartboard and it hits your name, that doesn't mean you should be paid ten times more than the guy next to you who's just as skilled."

His remarks come amid widespread reports of Meta launching an all-out offensive to poach AI engineers from rivals like OpenAI, Apple, Google, and Anthropic itself. Yet Amodei claims his startup has remained largely untouched. 'Some [employees] wouldn't even talk to Meta,' he said, asserting that their culture and mission are more attractive than any compensation package Meta can offer.

Meta has reportedly been dangling massive offers, with some packages surpassing $200 million for a single hire, according to Business Insider and WIRED. Amodei, however, says Anthropic refuses to match such sums, insisting on fair and consistent pay across the board. "I recently posted in our company Slack that we will not compromise our compensation principles or fairness if someone gets a big offer," he shared. In his view, rewarding one employee disproportionately just because they were on Meta's radar would be unjust to their equally capable colleagues.

Despite this stance, Meta has managed to lure away at least one former Anthropic engineer, Joel Pobar, but Amodei suggests its broader impact has been limited.

Meta's latest AI moonshot, the Superintelligence Lab, has ignited a fierce scramble for elite minds. OpenAI's Chief Research Officer Mark Chen likened it to a break-in after losing several staffers overnight.
Meanwhile, OpenAI CEO Sam Altman accused Meta of deploying 'giant offers' to lure talent, with some signing bonuses rumored to top $100 million.

Zuckerberg is unapologetic about the ambition. In an internal memo seen by CNBC, he claimed, 'Developing superintelligence is coming into sight,' declaring his goal to bring personal AI to every individual, not just enterprises.

While Meta may have the resources, Amodei questions whether mission-driven AI work can be bought. 'Zuckerberg is trying to buy something that can't be bought,' he said during the podcast, underscoring Anthropic's long-term focus on safe and ethical AI development.

The sentiment resonates with other industry leaders too. OpenAI continues to frame itself as a purpose-first organization, while Meta's flashier, big-money moves risk creating tension even within its own teams. As CNBC reported, some insiders at Meta worry that a talent-heavy, cash-fueled approach could lead to ego clashes and fractured teams.

In the current AI landscape, where demand far outpaces supply, the value of a skilled AI researcher is rivaling that of a professional athlete. Yet, for companies like Anthropic and OpenAI, the real challenge isn't just retaining talent; it's maintaining a sense of purpose amid the frenzy.

Business Standard
Shadow AI: The rise of a cybersecurity threat from unauthorised AI use
Shadow AI, the unauthorised use of artificial intelligence tools, models, or platforms, is emerging as a new threat, according to IBM's annual 'Cost of Data Breach' report.

Pallav Nayak, New Delhi

Global data breach costs in 2025 declined for the first time in five years, dropping to an average of $4.44 million due to containment driven by AI-powered defences, according to the report. In India, such costs increased from $2.35 million last year to $2.51 million in 2025. The report studied 600 organisations impacted by data breaches between March 2024 and February 2025.