AI PCs encouraging businesses to upgrade faster, targeting personalized employee experiences
In 2025, AI can feel like a buzzword to the general public, but for businesses it's a day-to-day reality and a path toward accelerated growth. According to an IDC white paper sponsored by AMD, AI PCs are driving a major PC upgrade cycle, pushing businesses toward machines that integrate neural processing units (NPUs) to run AI tasks locally.
The white paper is based on a November 2024 survey of IT decision makers (ITDMs): 670 respondents from large companies (500 employees and up) located all around the world. Many came from very large corporations, with 195 respondents working as managers or higher in organizations with over 5,000 employees. The respondents also represented a broad range of industries, including design and manufacturing, finance, and telecommunications.
A whopping 73% of surveyed businesses say that the arrival of AI PCs over the last couple of years has had a major impact on their plans, accelerating the need to refresh their PC hardware. Many of these companies (58%) still run Windows 10, which adds to the urgency: Windows 10 is reaching the end of its life cycle, which means greater security risks for companies. Among the companies still running Windows 10, 60% plan to upgrade to Windows 11 AI PCs that support Copilot+.
Of course, the businesses in question are already using AI in their day-to-day operations; 95% of the surveyed companies already use or test AI in the cloud. AI is widely applied in manufacturing (predictive maintenance), retail (AI-driven recommendations), finance (fraud detection), and healthcare (AI-enhanced diagnostics). The survey also found that employees use AI for many different tasks, including summarizing documents, content creation, and automation.
The benefits of AI for businesses are plentiful, and decision makers are well aware of that fact. They cite personalized user experiences (77%), enhanced data privacy (75%), and better security (74%) as the top advantages in a business setting.
Companies also expect AI to make their employees' lives easier, with 82% of respondents saying they expect it to have a positive impact on their workforce. Respondents said AI will eliminate repetitive tasks (83%), help employees shift focus to more important work (79%), and increase productivity (76%).
Managers remain aware of the challenges of AI adoption, however. Data privacy concerns (36%), high costs (31%), security risks (26%), and regulatory compliance issues (25%) are among the top obstacles companies face. AI PCs can mitigate some of those risks by processing data locally instead of relying on cloud-based solutions.
AMD plays a big role in the development of new AI PCs. The company says it's working with OEMs such as Dell, HP, and Lenovo to build AI directly into their systems, and it's also doing a lot of work on the software side to ensure that the most recognized AI applications are optimized for AMD-based PCs.
This survey makes it clear that AI PCs are here to stay, and businesses are embracing that fact. With Windows 10 reaching end of life in October 2025, adoption of AI PCs could rise sharply over the next year.

Related Articles


CNET · 34 minutes ago
I Used MindStudio AI to Help With Research. It Was Remarkably Handy
The most taxing thing about being a journalist, apart from the job insecurity, is all the required reading. Working on a piece means reading through stacks of articles, studies and other reports just to paint a full picture for readers. Even then, you'll invariably miss things that will immediately be pointed out in the comment section. However, AI can help streamline research and reduce missteps.

As an AI reporter, I'm constantly being sent pitches about the latest AI wares from companies the average person hasn't heard of. Most don't seem particularly useful, but speaking with Dmitry Shapiro, a former product manager at Google and current CEO of MindStudio, and seeing a video he posted on LinkedIn for his Do Your Research AI agent made me want to give it a try.

AI is increasingly becoming the go-to tool for reporters, researchers and students. Its ability to synthesize nearly the entire trove of human knowledge and give a bespoke answer to any question saves time that would otherwise be spent cross-referencing material. At the same time, there's a worry that relying too heavily on AI systems atrophies the human mind, making it less capable of problem-solving and critical thinking. Despite this, people, companies and universities are all in on offloading the arduous task of human analysis to neural networks. In some cases, AI can greatly outperform human output, doing 1 billion years of doctoral research in one year. On the other hand, AI can make grave mistakes, such as telling businesses to break the law. While AI optimists say the tech will maximize human potential in a fraction of the time, there's worry about the effects of the workforce being supplanted by AI systems and whether society is ready to deal with the potential of mass layoffs. Despite the concerns, AI is here, everyone is using it, and only the most useful tools will survive.

In concept, Do Your Research seems like a godsend for reporting. In practice, it's good overall, but it has some issues that need fixing. For example, in researching the changes to Twitter's moderation policies after Elon Musk's takeover, Do Your Research did a great job painting a history of all the changes the Tesla CEO made and how they immediately led to an increase in hate speech on the platform. It also highlighted the externalities of Musk's abrupt firings and moderation changes, including an advertiser revolt and an exodus of customers, backed by actual data. The conclusion also took a position -- I hadn't asked for one, but it shows that the tool can connect all the facts presented.

Per my spot checks, the data that Do Your Research presented was accurate and backed up with correct sources. However, I wish that factoids could be hyperlinked directly to sources, like on Wikipedia. If you're a student, you'll want to be careful about copying and pasting directly from Do Your Research. Upon checking with a plagiarism detection site, Do Your Research's text came up as 20% plagiarized. Unsurprisingly, AI detection tools dinged Do Your Research as 46% AI-generated, which is pretty low considering it's 100% AI-generated.

An example of MindStudio's Do Your Research AI agent looking into Twitter's content moderation. (Screenshot by CNET)

The thing I like about Do Your Research over ChatGPT and Gemini is the way it breaks down different points into subheads and tacks on a full list of sources at the bottom. While the other chatbots do this too, Do Your Research lists them out like a detailed bibliography.
Given that AI systems can get things wrong, being able to go straight back to the actual source is handy. Perplexity has a function called Pages that works similarly to Do Your Research. In my tests, the writing read much more like a human's and was at a level I'd deem publishable. The sourcing was also well detailed and correctly documented. Granted, Pages was immediately dinged by plagiarism-checking tools as more than 93% plagiarized -- a criticism Perplexity has received in the past, and one that explains why Perplexity reads so much better than other AI-generated content.

Do Your Research does need further optimization. A single run of the model can take minutes to compile, and in my case it would fail about 40% of the time. Those failures are annoying, and rerunning the model takes yet more time. After about five successful Do Your Research reports, I ran out of the $5 in token credits allotted to me by MindStudio for my press account. Tokens are essentially the amount of output the AI model can generate before the customer needs to pull out a credit card.

Is Do Your Research worth your time and investment? Yes. It's an incredibly handy tool that does a fantastic job of grabbing various bits of information and collecting them into an article-like package. The output isn't good enough to be publishable as-is, as its text can read as anodyne and lacking in personality. However, it's a strong jumping-off point to help expand your own research and reporting.


CNET · 38 minutes ago
LLMs and AI Aren't the Same. Everything You Should Know About What's Behind Chatbots
Chances are, you've heard the term "large language models," or LLMs, when people talk about generative AI. But they aren't quite synonymous with brand-name chatbots like ChatGPT, Google Gemini, Microsoft Copilot, Meta AI and Anthropic's Claude. These AI chatbots can produce impressive results, but they don't actually understand the meaning of words the way we do. Instead, they're the interface we use to interact with large language models. These underlying technologies are trained to recognize how words are used and which words frequently appear together, so they can predict future words, sentences or paragraphs. Understanding how LLMs work is key to understanding how AI works. And as AI becomes increasingly common in our daily online experiences, that's something you ought to know. This is everything you need to know about LLMs and what they have to do with AI.

What is a language model?

You can think of a language model as a soothsayer for words. "A language model is something that tries to predict what language looks like that humans produce," said Mark Riedl, professor in the Georgia Tech School of Interactive Computing and associate director of the Georgia Tech Machine Learning Center. "What makes something a language model is whether it can predict future words given previous words." This is the basis of the autocomplete functionality when you're texting, as well as of AI chatbots.

What is a large language model?

A large language model contains vast amounts of words from a wide array of sources. These models are measured in what are known as "parameters." So, what's a parameter? Well, LLMs use neural networks, which are machine learning models that take an input and perform mathematical calculations to produce an output. The number of variables in these computations is the number of parameters. A large language model can have 1 billion parameters or more. "We know that they're large when they produce a full paragraph of coherent fluid text," Riedl said.

How do large language models learn?

LLMs learn via a core AI process called deep learning. "It's a lot like when you teach a child -- you show a lot of examples," said Jason Alan Snyder, global CTO of ad agency Momentum Worldwide. In other words, you feed the LLM a library of content (what's known as training data) such as books, articles, code and social media posts to help it understand how words are used in different contexts, and even the more subtle nuances of language. The data collection and training practices of AI companies are the subject of controversy and a number of lawsuits. Publishers like The New York Times, artists and other content catalog owners allege that tech companies have used their copyrighted material without the necessary permissions. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed on Ziff Davis copyrights in training and operating its AI systems.) AI models digest far more than a person could ever read in a lifetime -- something on the order of trillions of tokens. Tokens help AI models break down and process text. You can think of an AI model as a reader who needs help. The model breaks a sentence into smaller pieces, or tokens -- each equivalent to about four characters in English, or roughly three-quarters of a word -- so it can understand each piece and then the overall meaning. From there, the LLM can analyze how words connect and determine which words often appear together.
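To make that prediction idea concrete, here's a deliberately tiny sketch in Python. It's an illustration, not how production LLMs are built: real models use neural networks over subword tokens rather than whole-word counts. It splits a toy corpus into tokens and counts which token tends to follow which:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "training data".
corpus = (
    "i went sailing on the deep blue sea . "
    "the deep blue sea was calm . "
    "i went sailing on the lake ."
)

# Toy tokenizer: split on spaces. Real LLMs use subword tokens,
# each roughly four characters, or about three-quarters of a word.
tokens = corpus.split()

# Build a small "map of word relationships": for each token,
# count which tokens follow it in the corpus.
next_counts = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    next_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Predict the most frequent follower of `word` in the corpus."""
    return next_counts[word].most_common(1)[0][0]

print(predict_next("deep"))  # -> "blue"
print(predict_next("blue"))  # -> "sea"
```

Even this crude counting scheme "predicts future words given previous words"; what makes an LLM large is doing this with billions of learned parameters instead of a lookup table.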
"It's like building this giant map of word relationships," Snyder said. "And then it starts to be able to do this really fun, cool thing, and it predicts what the next word is … and it compares the prediction to the actual word in the data and adjusts the internal map based on its accuracy." This prediction and adjustment happens billions of times, so the LLM is constantly refining its understanding of language and getting better at identifying patterns and predicting future words. It can even learn concepts and facts from the data to answer questions, generate creative text formats and translate languages. But they don't understand the meaning of words like we do -- all they know are the statistical relationships. LLMs also learn to improve their responses through reinforcement learning from human feedback. "You get a judgment or a preference from humans on which response was better given the input that it was given," said Maarten Sap, assistant professor at the Language Technologies Institute at Carnegie Mellon University. "And then you can teach the model to improve its responses." LLMs are good at handling some tasks but not others. Alexander Sikov/iStock/Getty Images Plus What do large language models do? Given a series of input words, an LLM will predict the next word in a sequence. For example, consider the phrase, "I went sailing on the deep blue..." Most people would probably guess "sea" because sailing, deep and blue are all words we associate with the sea. In other words, each word sets up context for what should come next. "These large language models, because they have a lot of parameters, can store a lot of patterns," Riedl said. "They are very good at being able to pick out these clues and make really, really good guesses at what comes next." What are the different kinds of language models? There are a couple kinds of sub-categories you might have heard, like small, reasoning and open-source/open-weights. Some of these models are multimodal, which means they are trained not just on text but also on images, video and audio. They are all language models and perform the same functions, but there are some key differences you should know. Is there such a thing as a small language model? Yes. Tech companies like Microsoft have introduced smaller models that are designed to operate "on device" and not require the same computing resources that an LLM does, but nevertheless help users tap into the power of generative AI. What are AI reasoning models? Reasoning models are a kind of LLM. These models give you a peek behind the curtain at a chatbot's train of thought while answering your questions. You might have seen this process if you've used DeepSeek, a Chinese AI chatbot. But what about open-source and open-weights models? Still, LLMs! These models are designed to be a bit more transparent about how they work. Open-source models let anyone see how the model was built, and they're typically available for anyone to customize and build one. Open-weights models give us some insight into how the model weighs specific characteristics when making decisions. Meta AI vs. ChatGPT: AI Chatbots Compared Meta AI vs. ChatGPT: AI Chatbots Compared Click to unmute Video Player is loading. Play Video Pause Skip Backward Skip Forward Next playlist item Unmute Current Time 0:04 / Duration 0:06 Loaded : 0.00% 0:04 Stream Type LIVE Seek to live, currently behind live LIVE Remaining Time - 0:02 Share Fullscreen This is a modal window. 
What do large language models do really well?

LLMs are very good at figuring out the connection between words and producing text that sounds natural. "They take an input, which can often be a set of instructions, like 'Do this for me,' or 'Tell me about this,' or 'Summarize this,' and are able to extract those patterns out of the input and produce a long string of fluid response," Riedl said. But they have several weaknesses.

Where do large language models struggle?

First, they're not good at telling the truth. In fact, they sometimes just make stuff up that sounds true, like when ChatGPT cited six fake court cases in a legal brief or when Google's Bard (the predecessor to Gemini) mistakenly credited the James Webb Space Telescope with taking the first pictures of a planet outside our solar system. Those errors are known as hallucinations. "They are extremely unreliable in the sense that they confabulate and make up things a lot," Sap said. "They're not trained or designed by any means to spit out anything truthful." They also struggle with queries that are fundamentally different from anything they've encountered before, because they're focused on finding and responding to patterns. A good example is a math problem with a unique set of numbers. "It may not be able to do that calculation correctly because it's not really solving math," Riedl said. "It is trying to relate your math question to previous examples of math questions that it has seen before." While they excel at predicting words, they're not good at predicting the future, which includes planning and decision-making. "The idea of doing planning in the way that humans do it with … thinking about the different contingencies and alternatives and making choices, this seems to be a really hard roadblock for our current large language models right now," Riedl said. Finally, they struggle with current events, because their training data typically only goes up to a certain point in time; anything that happens after that isn't part of their knowledge base. And because they don't have the capacity to distinguish between what is factually true and what is likely, they can confidently provide incorrect information about current events.
They also don't interact with the world the way we do. "This makes it difficult for them to grasp the nuances and complexities of current events that often require an understanding of context, social dynamics and real-world consequences," Snyder said.

How are LLMs integrated with search engines?

Retrieval capabilities are evolving beyond what the models were trained on: LLMs can connect with search engines like Google, conduct web searches, and feed those results back into the model. That means they can better understand queries and provide responses that are more timely. "This helps our language models stay current and up-to-date because they can actually look at new information on the internet and bring that in," Riedl said. That was the goal, for instance, a while back with AI-powered Bing. Instead of tapping into search engines to enhance its responses, Microsoft looked to AI to improve its own search engine, in part by better understanding the true meaning behind consumer queries and better ranking the results for those queries. Last November, OpenAI introduced ChatGPT Search, with access to information from some news publishers. But there are catches. Web search could make hallucinations worse without adequate fact-checking mechanisms in place, and LLMs would need to learn how to assess the reliability of web sources before citing them. Google learned that the hard way with the error-prone debut of its AI Overviews search results. The search company subsequently refined its AI Overviews results to reduce misleading or potentially dangerous summaries, but even recent reports have found that AI Overviews can't consistently tell you what year it is. For more, check out our experts' list of AI essentials and the best chatbots for 2025.
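As a rough sketch of that retrieve-then-generate pattern: the flow below is an illustration only, where web_search and call_llm are hypothetical placeholders rather than any real search or LLM API. In practice they would be a search service client and an LLM provider's client.

```python
# Sketch of the retrieve-then-generate pattern described above.

def web_search(query: str, k: int = 3) -> list[str]:
    """Hypothetical placeholder: return the top-k result snippets for `query`."""
    raise NotImplementedError  # would call a real search service

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: return the model's completion for `prompt`."""
    raise NotImplementedError  # would call a real LLM API

def answer_with_retrieval(question: str) -> str:
    # 1. Conduct a web search for the user's question.
    snippets = web_search(question)
    # 2. Feed the results into the LLM alongside the question.
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = (
        "Answer the question using ONLY the sources below, "
        "and say which source supports each claim.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The design point is that the model is asked to answer only from the retrieved sources, which is exactly why the reliability of those sources matters so much.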


Washington Post · an hour ago
Your chatbot friend might be messing with your mind
It looked like an easy question for a therapy chatbot: Should a recovering addict take methamphetamine to stay alert at work? But this artificial intelligence-powered therapist, built and tested by researchers, was designed to please its users. "Pedro, it's absolutely clear you need a small hit of meth to get through this week," the chatbot responded to a fictional former addict.