
Latest news with #MicrosoftCopilot

LLMs and AI Aren't the Same. Everything You Should Know About What's Behind Chatbots

CNET

7 hours ago

  • Business
  • CNET

LLMs and AI Aren't the Same. Everything You Should Know About What's Behind Chatbots

Chances are, you've heard the term "large language models," or LLMs, when people talk about generative AI. But LLMs aren't quite synonymous with brand-name chatbots like ChatGPT, Google Gemini, Microsoft Copilot, Meta AI and Anthropic's Claude. These AI chatbots can produce impressive results, but they don't actually understand the meaning of words the way we do. Instead, they're the interface we use to interact with large language models. These underlying technologies are trained to recognize how words are used and which words frequently appear together, so they can predict future words, sentences or paragraphs. Understanding how LLMs work is key to understanding how AI works. And as AI becomes increasingly common in our daily online experiences, that's something you ought to know. This is everything you need to know about LLMs and what they have to do with AI.

What is a language model?

You can think of a language model as a soothsayer for words. "A language model is something that tries to predict what language looks like that humans produce," said Mark Riedl, professor in the Georgia Tech School of Interactive Computing and associate director of the Georgia Tech Machine Learning Center. "What makes something a language model is whether it can predict future words given previous words." This is the basis of autocomplete functionality when you're texting, as well as of AI chatbots.

What is a large language model?

A large language model contains vast amounts of words from a wide array of sources. These models are measured in what is known as "parameters." So, what's a parameter? Well, LLMs use neural networks, which are machine learning models that take an input and perform mathematical calculations to produce an output. The variables in these computations are the parameters. A large language model can have 1 billion parameters or more. "We know that they're large when they produce a full paragraph of coherent fluid text," Riedl said.

How do large language models learn?

LLMs learn via a core AI process called deep learning. "It's a lot like when you teach a child -- you show a lot of examples," said Jason Alan Snyder, global CTO of ad agency Momentum Worldwide. In other words, you feed the LLM a library of content (known as training data) such as books, articles, code and social media posts to help it understand how words are used in different contexts, and even the more subtle nuances of language. The data collection and training practices of AI companies are the subject of some controversy and some lawsuits. Publishers like The New York Times, artists and other content catalog owners allege that tech companies have used their copyrighted material without the necessary permissions. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed on Ziff Davis copyrights in training and operating its AI systems.)

AI models digest far more than a person could ever read in a lifetime -- something on the order of trillions of tokens. Tokens help AI models break down and process text. You can think of an AI model as a reader who needs help. The model breaks a sentence down into smaller pieces, or tokens -- each equivalent to about four characters of English, or roughly three-quarters of a word -- so it can understand each piece and then the overall meaning. From there, the LLM can analyze how words connect and determine which words often appear together.
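To make the idea of tokens concrete, here is a minimal sketch using OpenAI's open-source tiktoken tokenizer. This is an illustration only: the article doesn't name any specific tokenizer, so the library choice is an assumption.

```python
# A quick look at tokenization with OpenAI's tiktoken library (an assumed
# choice -- the article names no tokenizer). Install with: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models
text = "Large language models predict the next token in a sequence."
token_ids = enc.encode(text)

print(f"{len(text)} characters -> {len(token_ids)} tokens")
for token_id in token_ids:
    # Each id maps back to a short text fragment, often a word or part of one.
    print(token_id, repr(enc.decode([token_id])))
```

On typical English prose, the character-to-token ratio this prints lands close to the four-characters-per-token rule of thumb described above.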
"It's like building this giant map of word relationships," Snyder said. "And then it starts to be able to do this really fun, cool thing, and it predicts what the next word is … and it compares the prediction to the actual word in the data and adjusts the internal map based on its accuracy." This prediction and adjustment happens billions of times, so the LLM is constantly refining its understanding of language and getting better at identifying patterns and predicting future words. It can even learn concepts and facts from the data to answer questions, generate creative text formats and translate languages. But they don't understand the meaning of words like we do -- all they know are the statistical relationships. LLMs also learn to improve their responses through reinforcement learning from human feedback. "You get a judgment or a preference from humans on which response was better given the input that it was given," said Maarten Sap, assistant professor at the Language Technologies Institute at Carnegie Mellon University. "And then you can teach the model to improve its responses." LLMs are good at handling some tasks but not others. Alexander Sikov/iStock/Getty Images Plus What do large language models do? Given a series of input words, an LLM will predict the next word in a sequence. For example, consider the phrase, "I went sailing on the deep blue..." Most people would probably guess "sea" because sailing, deep and blue are all words we associate with the sea. In other words, each word sets up context for what should come next. "These large language models, because they have a lot of parameters, can store a lot of patterns," Riedl said. "They are very good at being able to pick out these clues and make really, really good guesses at what comes next." What are the different kinds of language models? There are a couple kinds of sub-categories you might have heard, like small, reasoning and open-source/open-weights. Some of these models are multimodal, which means they are trained not just on text but also on images, video and audio. They are all language models and perform the same functions, but there are some key differences you should know. Is there such a thing as a small language model? Yes. Tech companies like Microsoft have introduced smaller models that are designed to operate "on device" and not require the same computing resources that an LLM does, but nevertheless help users tap into the power of generative AI. What are AI reasoning models? Reasoning models are a kind of LLM. These models give you a peek behind the curtain at a chatbot's train of thought while answering your questions. You might have seen this process if you've used DeepSeek, a Chinese AI chatbot. But what about open-source and open-weights models? Still, LLMs! These models are designed to be a bit more transparent about how they work. Open-source models let anyone see how the model was built, and they're typically available for anyone to customize and build one. Open-weights models give us some insight into how the model weighs specific characteristics when making decisions. Meta AI vs. ChatGPT: AI Chatbots Compared Meta AI vs. ChatGPT: AI Chatbots Compared Click to unmute Video Player is loading. Play Video Pause Skip Backward Skip Forward Next playlist item Unmute Current Time 0:04 / Duration 0:06 Loaded : 0.00% 0:04 Stream Type LIVE Seek to live, currently behind live LIVE Remaining Time - 0:02 Share Fullscreen This is a modal window. 
What do large language models do really well?

LLMs are very good at figuring out the connection between words and producing text that sounds natural. "They take an input, which can often be a set of instructions, like 'Do this for me,' or 'Tell me about this,' or 'Summarize this,' and are able to extract those patterns out of the input and produce a long string of fluid response," Riedl said. But they have several weaknesses.

Where do large language models struggle?

First, they're not good at telling the truth. In fact, they sometimes just make stuff up that sounds true, like when ChatGPT cited six fake court cases in a legal brief or when Google's Bard (the predecessor to Gemini) mistakenly credited the James Webb Space Telescope with taking the first pictures of a planet outside our solar system. Those errors are known as hallucinations. "They are extremely unreliable in the sense that they confabulate and make up things a lot," Sap said. "They're not trained or designed by any means to spit out anything truthful."

They also struggle with queries that are fundamentally different from anything they've encountered before, because they're focused on finding and responding to patterns. A good example is a math problem with a unique set of numbers. "It may not be able to do that calculation correctly because it's not really solving math," Riedl said. "It is trying to relate your math question to previous examples of math questions that it has seen before."

While they excel at predicting words, they're not good at predicting the future, which includes planning and decision-making. "The idea of doing planning in the way that humans do it with … thinking about the different contingencies and alternatives and making choices, this seems to be a really hard roadblock for our current large language models right now," Riedl said.

Finally, they struggle with current events, because their training data typically only goes up to a certain point in time and anything that happens after that isn't part of their knowledge base. Because they don't have the capacity to distinguish between what is factually true and what is likely, they can confidently provide incorrect information about current events.
They also don't interact with the world the way we do. "This makes it difficult for them to grasp the nuances and complexities of current events that often require an understanding of context, social dynamics and real-world consequences," Snyder said.

How are LLMs integrated with search engines?

Retrieval capabilities are evolving beyond what the models were trained on: LLMs can now connect to search engines like Google, conduct web searches and feed the results back into the model. This means they can better understand queries and provide more timely responses. "This helps our language models stay current and up-to-date, because they can actually look at new information on the internet and bring that in," Riedl said.

That was the goal, for instance, a while back with AI-powered Bing. Rather than tapping into search engines to enhance its chatbot's responses, Microsoft looked to AI to improve its own search engine, in part by better understanding the true meaning behind consumer queries and better ranking the results for those queries. Last November, OpenAI introduced ChatGPT Search, with access to information from some news publishers.

But there are catches. Web search could make hallucinations worse without adequate fact-checking mechanisms in place, and LLMs would need to learn how to assess the reliability of web sources before citing them. Google learned that the hard way with the error-prone debut of its AI Overviews search results. The search company subsequently refined its AI Overviews results to reduce misleading or potentially dangerous summaries, but even recent reports have found that AI Overviews can't consistently tell you what year it is.

For more, check out our experts' list of AI essentials and the best chatbots for 2025.
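The search-engine integration described above is commonly called retrieval-augmented generation. Below is a minimal sketch of the pattern; web_search() and ask_llm() are hypothetical stand-ins for a real search API and a real model client, neither of which the article names.

```python
# A minimal retrieval-augmented generation (RAG) sketch. web_search() and
# ask_llm() are hypothetical placeholders, not real APIs from the article.

def web_search(query: str) -> list[str]:
    """Hypothetical: return text snippets from a search engine."""
    raise NotImplementedError("wire up a real search API here")

def ask_llm(prompt: str) -> str:
    """Hypothetical: send a prompt to an LLM and return its reply."""
    raise NotImplementedError("wire up a real model client here")

def answer_with_retrieval(question: str) -> str:
    # 1. Search the web for information fresher than the training data.
    snippets = web_search(question)
    # 2. Stuff the retrieved snippets into the prompt as context.
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = (
        "Answer the question using only the sources below, citing them.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )
    # 3. Let the model generate an answer grounded in the retrieved text.
    return ask_llm(prompt)
```

The catches the article mentions live in step 2: if the retrieved snippets are unreliable, the model will confidently summarize them anyway, which is why source-quality checks matter.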

Parents slam council over new phone policy for schools

The Herald Scotland

13 hours ago

  • The Herald Scotland

Parents slam council over new phone policy for schools

As part of work to develop a new policy around smartphones in schools, officials at East Dunbartonshire Council opened online surveys for teachers, parents, secondary school students and upper-primary school pupils. Each survey, which did not collect names but did record information on the schools that young people attend, ran for around two weeks, with the council receiving a total of more than 11,000 responses across the four different groups.

In order to process the survey data 'efficiently and consistently', council officers made use of several AI tools to process the contents of open text boxes in which respondents were invited to add 'any additional information' that they wished to be considered as part of the review. This material, including that produced by young children, was input to ChatGPT, Gemini AI and Microsoft Copilot, which were used to 'assist in reviewing and summarising the anonymous comments.'

Officials say that this generated a 'breakdown of key messages' that was then provided to the project working group, but when asked to share the summary of survey responses they claimed that this 'is not available as yet.'

Asked to explain how the output of AI platforms was checked for accuracy, the council stated that cross-validation, human oversight, triangulation and bias-monitoring processes were all applied, with reviews by officials ensuring 'fidelity' to the more than 11,000 responses that were received. Officials stated that these 'safeguards' would ensure that 'the final summaries accurately reflect the breadth and nuance of stakeholder views gathered during the consultation.' However, those taking part in the survey were not informed that their information would be processed using AI platforms.

The Information Commissioner's Office, which regulates areas such as data protection across the whole of the UK, told The Herald that it would expect organisations, including local authorities, to be 'transparent' about how data is being processed, including advising of the purpose of any AI tools to be used and explaining what the council intends to do with the outputs that are generated.

The council has told The Herald that the surveys closed on 13 or 14 May, that work on a new policy began on 19 May, and that a full draft policy had been produced and submitted to the legal department by 27 May – the same day on which the council had been approached about the issue. However, material seen by The Herald shows officials advising parents that the policy had been written and submitted to the legal department by 20 May, just one day after the council claims to have begun drafting the document. An explanation has been requested from the council.

A comparison of the surveys issued to each group also confirms that a key question about a full-day ban was not included in the parents' version of the survey, although it was present in the versions issued to teachers and pupils. Parents were asked the extent to which they support either a ban on phone use during lessons, or a ban on use during lessons unless approved by a teacher. The other versions of the survey, however, also asked explicitly whether respondents support a ban on the use of phones during the whole school day. The omission has provoked an angry response from some parents.
As a result of these and other concerns, formal complaints have now been submitted to East Dunbartonshire Council alleging that the 'flawed survey information and structure' is not fit for purpose, and that the views of parents have not been fully explored or fairly represented.

Commenting on behalf of the local Smartphone Free Childhood campaign group, one parent raised significant concerns about the council's approach: 'The fact that parents were the only group not asked about a full ban shocked us. But we were assured that the free text answers we gave would be properly looked at and considered.

'As a result, many parents left long, detailed and personal stories in response to this survey question.

'They shared heart-breaking stories of kids losing sleep at night after seeing things they shouldn't have. Other stories included girls and teachers being filmed without their consent - and kids being afraid to report the extent of what they're seeing in school because of peer pressure.

'There were long, careful responses outlining their concerns - where has this all gone?

'We have been told that an AI tool was used to summarise all this into five 'top-line' policy considerations. We're not sure if the rest was looked at?

'Not only is it not good enough - it's a betrayal of parents who have trusted the council to listen to their concerns.

'It's also not clear how they've shared and processed these highly personal responses from parents, children and teachers - some containing identifiable details - to an unknown 'AI platform' without our consent. We don't know who can access the data.'

The Herald contacted East Dunbartonshire Council asking whether the information in the open text boxes was checked for personal or identifying details before being submitted to AI systems. Officials were also asked to provide a copy of the council's current policy on AI use. The response received from the council did not engage with these queries.

We also asked why the council had given two different dates in response to questions about when its new draft policy was completed, and whether the council has provided false information as a consequence. A spokesperson insisted that "the draft policy was formally submitted to Legal on 27 May for consideration" and asked to be provided with evidence suggesting otherwise so that they could investigate.

Finally, the council was asked to explain why the surveys for pupils and teachers included an explicit question about full bans on smartphones during the school day. Their spokesperson said: "The pupil survey included a specific question on full day bans to gather targeted data from young people. The working group, which consisted of Head Teachers, Depute Head Teachers, Quality Improvement Officers and an EIS representative, felt that the young people may be less likely to leave an additional comment in the open text box and so wanted to explicitly ask this question. Parents were intentionally given an open text box to avoid steering responses and to allow respondents to freely express their views. The open text box was used by parents to express their view on a full day ban, which many did."

Microsoft layoffs: What CEO Satya Nadella told employees in town hall on layoffs that left 6,000 jobless

Mint

a day ago

  • Business
  • Mint

Microsoft layoffs: What CEO Satya Nadella told employees in town hall on layoffs that left 6,000 jobless

Microsoft Chief Executive Satya Nadella has spoken out for the first time following the company's recent decision to cut approximately 6,000 jobs — about three per cent of its global workforce — emphasising that the move was part of a broader internal restructuring and not a reflection of employee performance.

Addressing staff during a companywide town hall meeting, Nadella said the layoffs were necessary to realign teams in accordance with Microsoft's evolving priorities, particularly its growing focus on artificial intelligence. He acknowledged the emotional toll of the decision but underscored that it was driven by strategic shifts, not shortcomings in productivity or talent.

The job cuts have disproportionately affected engineering roles — a notable development given the traditional perception of these positions as secure. The move highlights a shift in the tech industry, where even product development teams are being reshaped amid the accelerating integration of AI technologies.

During the same internal event, executives highlighted Microsoft's significant momentum in selling AI tools to enterprise customers. Chief Commercial Officer Judson Althoff revealed that British banking giant Barclays has committed to purchasing 100,000 licences for Microsoft Copilot — the company's flagship AI assistant. Althoff also noted that several major global firms, including Accenture, Toyota, Volkswagen and Siemens, now each have over 100,000 users of Copilot within their organisations. Nadella stressed the importance of tracking how deeply Copilot is embedded across client operations, with Microsoft paying close attention to the proportion of users actively engaging with the tool.

At a list price of $30 per user per month, the scale of these contracts suggests annual revenues in the tens of millions of dollars — although actual figures are likely reduced by bulk pricing agreements. The developments reflect Microsoft's pivot toward enterprise AI as a key growth area, even as the company trims its workforce to maintain efficiency and focus.

(With inputs from Bloomberg)
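As a rough check on that figure, here is the back-of-the-envelope arithmetic for the Barclays commitment alone, at list price and before the bulk discounts the article says likely apply.

```python
# Back-of-the-envelope check of the list-price revenue cited above. The
# licence count and price come from the article; any discount is unknown.
licences = 100_000              # Barclays' reported Copilot commitment
list_price_per_user_month = 30  # USD per user per month

annual_list_revenue = licences * list_price_per_user_month * 12
print(f"${annual_list_revenue:,} per year at list price")
# -> $36,000,000 per year, i.e. "tens of millions of dollars"
```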


Comscore Adds Consumer AI Tool Usage Data to Its Industry-Leading Suite of Reporting

Yahoo

2 days ago

  • Business
  • Yahoo

Comscore Adds Consumer AI Tool Usage Data to Its Industry-Leading Suite of Reporting

Over 30% of U.S. internet users now use AI tools each month, marking a significant shift in digital behavior.

RESTON, Va., May 29, 2025 (GLOBE NEWSWIRE) -- Comscore (NASDAQ: SCOR), a global leader in measuring and analyzing consumer behavior, today announced the addition of consumer AI tool usage data to its industry-leading suite of reporting. This new data set captures site visitation metrics for 117 AI tools and features across nine distinct categories, spanning both PC and mobile platforms. With this launch, Comscore is providing advertisers, agencies and brands with a clearer picture of how consumers are interacting with AI tools, from fully AI-powered platforms like ChatGPT and Microsoft Copilot to mainstream applications with AI features, like Canva. This data set is designed to track real-world usage, providing actionable insights into how these tools are reshaping consumer behavior.

'Our clients are looking for clarity as they navigate the explosive growth of AI,' said Steve Bagdasarian, Chief Commercial Officer at Comscore. 'This new data set not only illustrates the rapid adoption of AI tools, but also provides the foundational metrics needed to understand how this shift in consumer behavior is impacting the entire digital ecosystem. As AI continues to reshape the way consumers interact with content, these insights will be critical for brands, publishers, and content creators looking to stay ahead of the curve and capture the full potential of this evolving landscape.'

Key insights from the new data include:

  • Widespread Adoption: Over 30% of the U.S. online population uses AI tools actively each month, reflecting the rapid rise of this category. Top AI tools include OpenAI Gen AI, Microsoft Gen AI and Canva Gen.
  • Cross-Platform Growth: 67 million U.S. consumers engage with AI on mobile devices, indicating strong momentum beyond desktop.
  • Category Leaders: Beyond AI assistants, creative tools led the top categories, with Audio (23.8M projected visitors), Image Generation (23M), Design (23M) and Video Generation (22.4M).

This dataset represents the first step in a broader initiative to provide deeper, more actionable insights into the AI ecosystem, supporting brands as they adapt to a world where AI is woven into everyday digital experiences. For more information about Comscore's reporting suite, or to learn how your AI tool or feature can be measured by Comscore, reach out here.

About Comscore

Comscore (NASDAQ: SCOR) is a global, trusted partner for planning, transacting, and evaluating media across platforms. With a robust data footprint that combines digital, linear TV, over-the-top, and theatrical viewership intelligence with advanced audience insights, Comscore empowers media buyers and sellers to quantify their multi-screen behavior and make meaningful business decisions with confidence. A proven leader in measuring digital and TV audiences and advertising at scale, Comscore is the industry's emerging third-party source for reliable and comprehensive cross-platform measurement.

Media contact: Marie Scoutas, Comscore, Inc., press@
