A new Microsoft 365 Copilot app starts rolling out today

Engadget | 19-05-2025
Surprising no one, Microsoft's Build 2025 conference is centered largely on its Copilot AI. Today, the company announced that it has begun rolling out its "Wave 2 Spring release," which includes a revamped Microsoft 365 Copilot app. It also unveiled Copilot Tuning, a "low-code" method of building AI models that work with your company's specific data and processes. The goal, it seems, isn't just to make consumers reliant on OpenAI's ChatGPT model, which powers Copilot. Instead, Microsoft is aiming to empower businesses to make tools for their own needs (for a pricey $30-per-seat subscription on top of your existing Microsoft 365 subscription, of course).
Microsoft claims that Copilot Tuning, which arrives in June for members of an early adopter program, could let a law firm make AI agents that "reflect its unique voice and expertise" by drafting documents and arguments automatically, without any coding. Copilot Studio, the company's existing tool for developing AI agents, will also let those agents "exchange data, collaborate on tasks, and divide their work based on each agent's expertise." Conceivably, a company could have its HR and IT agents collaborating, rather than being siloed off in their own domains.
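Microsoft hasn't detailed how that hand-off works under the hood, but the general shape it describes (agents that share context and route work by expertise) can be illustrated with a toy sketch. This is plain Python, not Copilot Studio code; the agent names, data structures and routing logic below are invented purely for illustration.

```python
# Toy illustration of agents that exchange data and divide work by expertise.
# NOT Copilot Studio code: names, structures and routing are invented for illustration.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    expertise: set[str]      # topics this agent can handle
    shared_context: dict     # data exchanged between agents

    def handle(self, topic: str, request: str) -> str:
        # A real agent would call a model or a business system here;
        # this one just records the request so other agents can see it.
        self.shared_context[topic] = request
        return f"{self.name} handled '{request}'"


def route(request: str, topic: str, agents: list[Agent]) -> str:
    """Send the request to the first agent whose expertise covers the topic."""
    for agent in agents:
        if topic in agent.expertise:
            return agent.handle(topic, request)
    return "No agent available for this topic"


shared: dict = {}  # both agents read and write the same context
hr = Agent("HR agent", {"onboarding", "benefits"}, shared)
it = Agent("IT agent", {"laptop", "accounts"}, shared)

print(route("Provision a laptop for the new hire", "laptop", [hr, it]))
print(route("Enroll the new hire in benefits", "benefits", [hr, it]))
```

In Copilot Studio the exchange would presumably run through Microsoft's own agent framework; the sketch only shows the routing pattern the announcement describes.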
With the new Microsoft 365 Copilot app, Microsoft has centered the experience on chatting with its AI to accomplish specific tasks. The layout looks fairly simple, and it appears that you'll be able to tap into your existing agents and collaborative pages as well. As Microsoft announced in April, you'll also be able to purchase new agents in a built-in store, as well as build up Copilot Notebooks to collect your digital scraps. Like an AI version of OneNote or Evernote, Notebooks could potentially help you surface thoughts across a variety of media, and they can also produce two-person podcasts to summarize your notes. (It's unclear if those will actually sound good enough to be useful, though.)

Related Articles

GPT-5's most useful upgrade isn't speed — it's the multimodal improvements that matter

Yahoo | 37 minutes ago

What is 'multimodality'?

In the case of an AI, multimodality is the ability to understand and interact with input beyond just text. That means voice, image or video input. A multimodal chatbot can work with multiple types of input and output.

This week's GPT-5 upgrade to ChatGPT dramatically raises the chatbot's speed and performance when it comes to coding, math and response accuracy. But arguably the most useful improvement in the grand scheme of AI development will be its multimodal capabilities. ChatGPT-5 brings an enhanced voice mode and a better ability to process visual information. While Sam Altman didn't go into detail on multimodality specifically in this week's GPT-5 reveal livestream, he previously confirmed to Bill Gates, on an episode of the latter's podcast, that ChatGPT is moving towards "speech in, speech out. Images. Eventually video."

The improved voice mode that comes with GPT-5 now works with custom GPTs and will adapt its tone and speech style based on user instruction. For example, you could ask it to slow down if it's going too fast, or to make the voice a bit warmer if you feel the tone is too harsh. OpenAI has also confirmed that the old Standard Voice Mode across all its models is being phased out over the next 30 days.

Of course, the majority of interaction with ChatGPT, or any of its best alternatives, will be through text. But as AI becomes an increasing part of our digital lives, it will need to transition to predominantly multimodal input. We've seen this before: social media only really got going when it moved off laptops and desktops and onto smartphones, where users could suddenly snap pictures and upload them with the same device. Whether it's your phone or — as Zuckerberg would have you believe — a set of the best smart glasses is beside the point. The most successful AI will be the one that can make sense of the world around it.

Why does this matter?

GPT-5 has been designed to natively handle (and generate) multiple types of data within a single model. Previous iterations used a plugin-style approach, so moving away from that should result in more seamless interactions, whichever type of input you choose.

There are a huge number of benefits to a more robust multimodal AI, including for users who have hearing or sight impairments; the ability to tailor the chatbot's responses to a disability will do wonders for tech accessibility. The increasing use of voice mode could also be what drives adoption of ChatGPT Plus, since the premium tier has unlimited responses while free users are still limited to a set number of hours.

Meanwhile, improved image understanding means that, for example, the AI will be less prone to hallucinations when analyzing a chart or a picture you give it. That works in tandem with the tool's "Visual Workspace" feature, which lets it interact with charts and diagrams. In turn, this should also help ChatGPT produce better and more accurate images when prompted. In an educational context, that's going to be a huge help, especially since GPT-5 can now understand information across much longer stretches of conversation: users can refer back to images from earlier in the chat and it will remember them.
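For a concrete sense of what multimodal input looks like in practice, here is a minimal sketch using the OpenAI Python SDK to send text and an image in a single chat request. The model name and image URL are placeholders, and this is a generic illustration of multimodal prompting rather than a walkthrough of any specific GPT-5 feature.

```python
# Minimal sketch of a multimodal (text + image) request with the OpenAI Python SDK.
# The model name and image URL are placeholders; substitute whatever you have access to.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-5",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What does this chart show? Summarize the trend."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```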
While everyone knows that AI image generation has a dark side, there's no doubt that effective multimodality is the future of AI models, and it will be interesting to see how Google Gemini responds to these GPT-5 upgrades.

ChatGPT-5 just got 4 new personalities — here's how to use them (and why you should)

Yahoo | 39 minutes ago

With the launch of OpenAI's newest model, the company has introduced four distinct personality modes for ChatGPT-5. As the company's most advanced large language model to date, it delivers major upgrades in reasoning, memory and multimodal capabilities, and it's better at sustaining complex conversations, understanding context across chats and producing more accurate and creative responses.

Now, during text-based conversations, users can customize how the chatbot sounds and responds. It's similar to custom GPTs, but these modes were created by OpenAI to give users a new way to tailor the assistant's tone and demeanor with a single click. Users can turn the chatbot into a dry-witted Cynic, an efficient Robot, a warm Listener or an enthusiastic Nerd. These personality styles are available to users running GPT-5 in text mode and can be changed at any time during a chat.

What are the four ChatGPT-5 personality modes?

Each one is distinct yet helpful; here's how each mode behaves:

• Cynic: Dry, sarcastic and unfiltered. Offers practical advice with a side of side-eye. Great for humor or cutting through fluff.
• Robot: Emotionless, direct and to the point. No extra words — just clean, accurate responses.
• Listener: Calm, warm and empathetic. Reflects your thoughts gently and adds emotional nuance to its replies.
• Nerd: Curious, excitable and deeply knowledgeable. Explains things thoroughly and loves sharing fun facts.

Each personality mode brings a different tone to your conversations and can even shift how you brainstorm, write or learn with AI.

How to turn the GPT-5 personalities on

Activating these modes is easy:

1. Open ChatGPT in your browser or mobile app
2. Start a new chat
3. Click the sparkle/star icon (sometimes labeled 'Style' or 'Personality') near the model name
4. Choose Cynic, Robot, Listener or Nerd

The personality will apply to your current chat, but you can switch or turn off the personality mode at any time.

How it sounds in action

Here's how each personality mode answers the same prompt: "Why is the sky blue?"

Cynic: Oh, you mean besides the fact that Mother Nature likes to mess with us? Fine. Sunlight hits the atmosphere, the shorter blue wavelengths scatter more than the other colors, so the sky looks blue to us simple humans. You'll get your nice orange or red at sunset when the light's taking the scenic route. Happy now?

Robot: The sky appears blue due to Rayleigh scattering. When sunlight enters Earth's atmosphere, shorter wavelengths (blue) scatter more efficiently than longer wavelengths (red). This selective scattering causes the human eye to perceive the sky as blue during daylight hours.

Listener: That's such a great question! It's actually because sunlight is made up of many colors, and when it passes through our atmosphere, the tiny air particles scatter the shorter blue wavelengths more than the others. So when we look up, our eyes catch mostly that soft, blue light. It's kind of peaceful when you think about it, right?

Nerd: Oh, you've unlocked one of my favorite topics! The blue sky is thanks to Rayleigh scattering, which is inversely proportional to the fourth power of wavelength. That means shorter wavelengths like blue and violet scatter much more than red or yellow. Our eyes are more sensitive to blue than violet, and some violet is absorbed by the upper atmosphere, so we see a rich blue. Fun fact: the same physics makes Mars' sunsets blue and ours red!
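The Nerd's physics is easy to sanity-check: Rayleigh scattering intensity scales roughly with the inverse fourth power of wavelength, so a short calculation with approximate wavelengths shows how much more strongly blue light scatters than red.

```python
# Rough check of the Nerd's claim: Rayleigh scattering intensity ~ 1 / wavelength**4,
# so blue light (~450 nm) scatters noticeably more than red light (~650 nm).
blue_nm = 450
red_nm = 650

ratio = (red_nm / blue_nm) ** 4
print(f"Blue scatters about {ratio:.1f}x more strongly than red")  # roughly 4.4x
```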
Why you should try it

Switching between ChatGPT's personalities is fun, but it can also seriously boost your productivity and creativity. Each mode offers a different lens for thinking and communicating:

• Use Cynic when you want to cut through the noise. It's good for brainstorming hot takes or injecting humor into dry topics.
• When you want to keep things efficient, such as for technical writing or troubleshooting, try Robot. It's also a useful companion when coding.
• Listener adds empathy, which can be helpful for personal writing or mental check-ins. It could also be useful for writing to customers if you run a business.
• Nerd is the personality to pick when you want to make learning fun. It explains complex topics in a much more entertaining way, which makes it especially useful for kids.

Whether you're writing an email, stuck on a project or just want to hear something explained with personality, these modes can shift the vibe and help you unlock new creative angles, all without switching tools.

The takeaway

These new personality styles give ChatGPT-5 a more human-like edge and give you more control. As the examples above show, they all respond differently. This is an opportunity to choose how your AI sounds, thinks and helps, instead of the one-size-fits-all assistant we got with GPT-4. Try them all; you might be surprised which one becomes your favorite.
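The personality picker itself lives in the ChatGPT interface, but if you work with the API instead, a similar effect can be approximated with system prompts. To be clear, this is only a rough approximation, not how OpenAI implements its built-in personality modes, and the model name below is a placeholder.

```python
# Approximating the four personality styles over the API with system prompts.
# NOT OpenAI's built-in personality mechanism; just an illustrative approximation.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PERSONAS = {
    "Cynic": "Answer with dry, sarcastic wit, but keep the advice practical.",
    "Robot": "Answer with maximum brevity and precision. No filler, no emotion.",
    "Listener": "Answer warmly and empathetically, reflecting the user's feelings.",
    "Nerd": "Answer enthusiastically, with thorough explanations and fun facts.",
}


def ask(persona: str, question: str) -> str:
    """Send a question with the chosen persona set as the system prompt."""
    response = client.chat.completions.create(
        model="gpt-5",  # placeholder model name
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(ask("Robot", "Why is the sky blue?"))
```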

From hyper-personal assistants to mind-reading tech — this is how AI will transform everything by 2035

Yahoo | 39 minutes ago

Picture a morning in 2035. Your AI assistant adjusts the lights based on your mood, reschedules your first meeting and reminds your child to take allergy medicine, all without a prompt. It's not science fiction; it's a likely reality driven by breakthroughs in ambient computing, emotional intelligence and agentic AI.

Just a few years ago, ChatGPT was an unfamiliar name to most, let alone a daily assistant for summarization, search, reasoning and problem-solving. Siri and Alexa were the top names that came to mind when we wanted to call a friend, place an order or dim the lights. Yet now, in 2025, we have a plethora of AI assistants and chatbots to choose from, many of them free, and they can do a lot more than control smart home devices. What feels advanced now may seem utterly simplistic in a decade, reminding us that the most mind-blowing AI capabilities of 2035 might still be beyond our current imagination.

Your AI assistant in 2035: Omnipresent and intuitive

By 2035, your AI assistant won't just respond — it will anticipate. This evolution marks the rise of agentic AI, where assistants proactively act on your behalf using predictive analytics, long-term memory and emotion-sensing. These systems can forecast your needs by analyzing historical and real-time data, helping them stay one step ahead of your requests.

One assistant undergoing such a change is Amazon's Alexa. According to Daniel Rausch, Amazon's VP of Alexa and Echo, "Alexa will be able to proactively anticipate needs based on patterns, preferences, and context — preparing your home before you arrive, suggesting adjustments to your calendar when conflicts arise, or handling routine tasks before you even think to ask." The AI will remember your child's travel soccer team schedule, reschedule your meetings when it detects stress in your voice and even dim your AR glasses when you appear fatigued.

"By 2035, AI won't feel like a tool you 'use'," Rutgers professor Ahmed Elgammal says. "It'll be more like electricity or Wi-Fi: always there, always working in the background."

And AIs will respond to more than just your speech. Chris Ullrich, CTO of Cognixion, a Santa Barbara-based tech company, is currently developing a suite of AI-powered Assisted Reality (AR) applications that can be controlled with your mind, your eyes, your head pose, and combinations of these input methods. "We strongly believe that agent technologies, augmented reality and biosensing technologies are the foundation for a new kind of human-computer interaction," he says.

Multimodal intelligence and hyper-personalization

AI in 2035 will see, hear and sense — offering real-time support tailored to you. With multimodal capabilities, assistants will blend voice, video, text and sensor inputs to understand emotion, behavior and environment, creating a form of digital empathy. Ullrich notes that these advanced inputs shouldn't aim to replicate human senses, but to exceed them.
'In many ways, it's easier to provide superhuman situational awareness with multimodal sensing,' he says. 'With biosensing, real-time tracking of heart rate, eye muscle activation and brain state are all very doable today.'

Amazon is already building toward this future. 'Our Echo devices with cameras can use visual information to enhance interactions,' says Rausch. 'For example, determining if someone is facing the screen and speaking enables a more natural conversation without them having to repeat the wake word.' In addition to visual cues, Alexa+ can now pick up on tone and sentiment. 'She can recognize if you're excited or using sarcasm and then adapt her response accordingly,' Rausch says — a step toward the emotionally intelligent systems we expect by 2035.

Memory is the foundation of personalization. Most AI today forgets you between sessions. In 2035, contextual AI systems will maintain editable, long-term memory. Codiant, a software company focused on AI development and digital innovation, calls this 'hyper-personalization': assistants learn your routines and adjust suggestions based on history and emotional triggers.

AI teams and ambient intelligence

Rather than relying on one general assistant, you'll manage a suite of specialized AI agents. Research into agentic LLMs shows orchestration layers coordinating multiple AIs, each handling domains like finance, health, scheduling or family planning. These assistants will work together, handling multifaceted tasks in the background. One might track health metrics while another schedules meetings based on your peak focus hours. The coordination will be seamless, mimicking human teams but with the efficiency of machines.

Ullrich believes the biggest breakthroughs will come from solving the 'interaction layer,' where user intent meets intelligent response. 'Our focus is on generating breakthroughs at the interaction layer. This is where all these cutting-edge technologies converge,' he explains.

Rausch echoes this multi-agent future. 'We believe the future will include a world of specialized AI agents, each with particular expertise,' he says. 'Alexa is positioned as a central orchestrator that can coordinate across specialized agents to accomplish complex tasks.' He continues, 'We've already been building a framework for interoperability between agents with our multi-agent SDK. Alexa would determine when to deploy specialized agents for particular tasks, facilitating communication between them, and bringing their capabilities together into experiences that should feel seamless to the end customer.'

Emotionally intelligent and ethically governed

Perhaps the most profound shift will be emotional intelligence. Assistants won't just organize your day; they'll help you regulate your mood. They'll notice tension in your voice or anxiety in your posture and suggest music, lighting or a walk.

Ullrich sees emotion detection as an innovation frontier. 'I think we're not far at all from effective emotion detection,' he says. 'This will enable delight — which should always be a key goal for HMI.' He also envisions clinical uses, including mental health care, where AI could offer more objective insights into emotional well-being. But with greater insight comes greater responsibility.
Explainable AI (XAI), as described by IBM and in research on arXiv, will be critical: users must understand how decisions are made. VeraSafe, a leader in privacy law, data protection and cybersecurity, underscores privacy concerns like data control and unauthorized use. 'Users need to always feel that they're getting tangible value from these systems and that it's not just introducing a different and potentially more frustrating and opaque interface,' Ullrich says.

That emotional intelligence must be paired with ethical transparency, something Rausch insists remains central to Amazon's mission: 'Our approach to trust doesn't change with new technologies or capabilities, we design all of our products to protect our customers' privacy and provide them with transparency and control.' He adds, 'We'll continue to double down on resources that are easy to find and easy to use, like the Alexa Privacy Dashboard and the Alexa Privacy Hub, so that deeper personalization is a trusted experience that customers will love using.'

The future of work and the rise of human-AI teams

AI may replace some jobs, but more often it will reshape them. An OECD study from 2023 reports that 27% of current roles face high automation risk, especially in repetitive, rules-based work. An even more recent Microsoft study highlighted 40 jobs that are most likely to be affected by AI. Human-centric fields like education, healthcare, counseling and creative direction will thrive, driven by empathy, ethics and original thinking. Emerging hybrid roles will include AI interaction designers and orchestrators of multi-agent systems. Writers will co-create with AI, doctors will pair AI with human care and entrepreneurs will scale faster than ever using AI-enhanced tools. AI becomes an amplifier, not a replacement, for human ingenuity.

Even the boundaries between work and home will blur. 'While Alexa+ may be primarily focused on home and personal use today, we're already hearing from customers who want to use it professionally as well,' says Rausch. 'Alexa can manage your calendar, schedule meetings, send texts and extract information from documents — all capabilities that can bridge personal and professional environments.'

A 2025 study from the University of Pennsylvania and OpenAI found that 80% of U.S. workers could see at least 10% of their tasks impacted by AI tools, and nearly 1 in 5 jobs could see more than half their duties automated with today's AI. Forbes reported layoffs rippling across sectors like marketing, legal services, journalism and customer service as generative AI takes on tasks once handled by entire teams. Yet the outlook is not entirely grim. As the New York Times reports, AI is also creating entirely new jobs, including:

• AI behavior designers
• AI ethics and safety specialists
• AI content editors
• Human-in-the-loop reviewers
• AI model trainers
• AI prompt engineers

Automation Alley's vision of a 'new artisan' is gaining traction. As AI lifts mental drudgery, skilled manual work — craftsmanship, artistry and hands-on innovation — may see a renaissance. AI won't kill creativity; it may just unlock deeper levels of it.

Society, skills and the human choice

Navigating the shift to an AI-augmented society demands preparation. The World Economic Forum emphasizes lifelong learning, experimentation with universal basic income (UBI) and education reform. Workers must develop both technical and emotional skills.
Curricula must evolve to teach AI collaboration, critical thinking and data literacy. Social safety nets may be required during reskilling or displacement. Ethics and governance must be built into AI design from the start, not added after harm occurs.

Ullrich notes the importance of designing with inclusivity in mind. 'By solving the hard design problems associated with doing this in the accessibility space, we will create solutions that benefit all users,' he says. Technologies developed for accessibility, like subtitles or eye tracking, often lead to mainstream breakthroughs. As IBM and VeraSafe highlight, trust hinges on explainability, auditability and data ownership. Public understanding and control are key to avoiding backlash and ensuring equitable access.

As AI augments more aspects of life, our relationship with it will define the outcomes. Daniel Rausch believes the key lies in meaningful connection: 'The goal isn't just responding to commands but understanding your life and meaningfully supporting it.' We must ensure systems are inclusive, transparent and designed for real value. As AI grows in intelligence, the human role must remain centered on judgment, empathy and creativity. Ultimately, the question isn't 'What can AI do?' It's 'What should we let AI do?'

Bottom line: Preserving what makes us human with better tools than ever

By 2035, AI will be a planner, therapist, tutor and teammate. But it will also reflect what we value — and how we choose to interact with it. Ullrich emphasizes that the future won't be defined just by what AI can do for us, but by how we engage with it: 'Voice may be useful in some situations, gesture in others, but solutions that leverage neural sensing and agent-assisted interaction will provide precision, privacy and capability that go well beyond existing augmented reality interaction frameworks.'

Yet, amid this evolution, a deeper question of trust remains. Emotional intelligence, explainability and data transparency will be essential, not just for usability but for human agency. 'Services that require private knowledge need to justify that there is sufficient benefit directly to the user base,' Ullrich says. 'But if users see this as a fair trade, then I think it's a perfectly reasonable thing to allow.'

As AI capabilities rise, we must consciously preserve human ones. The most meaningful advances may not be smarter machines, but more mindful connections between humans and machines. The promise of AI is so much more than productivity; it's dignity, inclusion and creativity. If we design wisely, AI won't just help us get more done; it will help us become more of who we are. And that is something worth imagining.
