
LLMs and AI Aren't the Same. Everything You Should Know About What's Behind Chatbots
Chances are, you've heard of the term "large language models," or LLMs, when people are talking about generative AI. But they aren't quite synonymous with the brand-name chatbots like ChatGPT, Google Gemini, Microsoft Copilot, Meta AI and Anthropic's Claude.
These AI chatbots can produce impressive results, but they don't actually understand the meaning of words the way we do. Instead, they're the interface we use to interact with large language models. These underlying technologies are trained to recognize how words are used and which words frequently appear together, so they can predict future words, sentences or paragraphs. Understanding how LLMs work is key to understanding how AI works. And as AI becomes increasingly common in our daily online experiences, that's something you ought to know.
This is everything you need to know about LLMs and what they have to do with AI.
What is a language model?
You can think of a language model as a soothsayer for words.
"A language model is something that tries to predict what language looks like that humans produce," said Mark Riedl, professor in the Georgia Tech School of Interactive Computing and associate director of the Georgia Tech Machine Learning Center. "What makes something a language model is whether it can predict future words given previous words."
This is the basis of autocomplete when you're texting, and it's also the foundation of AI chatbots.
What is a large language model?
A large language model is trained on vast amounts of text from a wide array of sources. The size of these models is measured in what are known as "parameters."
So, what's a parameter?
Well, LLMs use neural networks, which are machine learning models that take an input and perform mathematical calculations to produce an output. The variables in those calculations are the model's parameters. A large language model can have a billion parameters or more.
"We know that they're large when they produce a full paragraph of coherent fluid text," Riedl said.
How do large language models learn?
LLMs learn via a core AI process called deep learning.
"It's a lot like when you teach a child -- you show a lot of examples," said Jason Alan Snyder, global CTO of ad agency Momentum Worldwide.
In other words, you feed the LLM a library of content (what's known as training data) such as books, articles, code and social media posts to help it understand how words are used in different contexts, and even the more subtle nuances of language. The data collection and training practices of AI companies are the subject of some controversy and some lawsuits. Publishers like The New York Times, artists and other content catalog owners are alleging tech companies have used their copyrighted material without the necessary permissions.
(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed on Ziff Davis copyrights in training and operating its AI systems.)
AI models digest far more text than a person could ever read in a lifetime -- something on the order of trillions of tokens. Tokens help AI models break down and process text. You can think of an AI model as a reader who needs help. The model breaks a sentence into smaller pieces, or tokens -- each roughly four characters of English text, or about three-quarters of a word -- so it can understand each piece and then the overall meaning.
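To see tokenization in action, here's a minimal sketch using OpenAI's open-source tiktoken library (assuming it's installed with pip install tiktoken); other model families use their own tokenizers, but the idea is the same.

```python
# Minimal sketch: splitting a sentence into tokens with OpenAI's
# open-source tiktoken library (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # tokenizer used by several OpenAI models
text = "I went sailing on the deep blue sea."
token_ids = enc.encode(text)

print(len(token_ids), "tokens")              # roughly 3/4 of a word each
print([enc.decode([t]) for t in token_ids])  # e.g. ['I', ' went', ' sailing', ...]
```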
From there, the LLM can analyze how words connect and determine which words often appear together.
"It's like building this giant map of word relationships," Snyder said. "And then it starts to be able to do this really fun, cool thing, and it predicts what the next word is … and it compares the prediction to the actual word in the data and adjusts the internal map based on its accuracy."
This prediction and adjustment happens billions of times, so the LLM is constantly refining its understanding of language and getting better at identifying patterns and predicting future words. It can even learn concepts and facts from the data to answer questions, generate creative text formats and translate languages. But LLMs don't understand the meaning of words the way we do -- all they know are the statistical relationships.
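Here's a heavily simplified Python sketch of that cycle on a single toy sentence. A real LLM adjusts billions of neural-network parameters using calculus rather than tallying word pairs, but the loop is the same: predict, compare, adjust.

```python
# A heavily simplified stand-in for the predict-compare-adjust cycle.
from collections import defaultdict, Counter

text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

word_map = defaultdict(Counter)  # the model's "map of word relationships"
correct_guesses = 0

for current, actual_next in zip(words, words[1:]):
    # 1. Predict: guess the most common follower of this word seen so far.
    followers = word_map[current]
    guess = followers.most_common(1)[0][0] if followers else None
    # 2. Compare the guess to the word that actually came next.
    if guess == actual_next:
        correct_guesses += 1
    # 3. Adjust the map based on what really happened.
    word_map[current][actual_next] += 1

print(f"{correct_guesses} correct out of {len(words) - 1} guesses")
print(word_map["on"])  # Counter({'the': 2}) -- 'on' is now tied to 'the'
```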
LLMs also learn to improve their responses through reinforcement learning from human feedback.
"You get a judgment or a preference from humans on which response was better given the input that it was given," said Maarten Sap, assistant professor at the Language Technologies Institute at Carnegie Mellon University. "And then you can teach the model to improve its responses."
LLMs are good at handling some tasks but not others.
What do large language models do?
Given a series of input words, an LLM will predict the next word in a sequence.
For example, consider the phrase, "I went sailing on the deep blue..."
Most people would probably guess "sea" because sailing, deep and blue are all words we associate with the sea. In other words, each word sets up context for what should come next.
"These large language models, because they have a lot of parameters, can store a lot of patterns," Riedl said. "They are very good at being able to pick out these clues and make really, really good guesses at what comes next."
What are the different kinds of language models?
There are a few subcategories you might have heard of, like small, reasoning and open-source/open-weights models. Some of these models are multimodal, which means they're trained not just on text but also on images, video and audio. They're all language models and perform the same functions, but there are some key differences you should know about.
Is there such a thing as a small language model?
Yes. Tech companies like Microsoft have introduced smaller models that are designed to operate "on device" and not require the same computing resources that an LLM does, but nevertheless help users tap into the power of generative AI.
What are AI reasoning models?
Reasoning models are a kind of LLM. These models give you a peek behind the curtain at a chatbot's train of thought while answering your questions. You might have seen this process if you've used DeepSeek, a Chinese AI chatbot.
But what about open-source and open-weights models?
Still, LLMs! These models are designed to be a bit more transparent about how they work. Open-source models let anyone see how the model was built, and they're typically available for anyone to customize and build on. Open-weights models give us some insight into how the model weighs specific characteristics when making decisions.
What do large language models do really well?
LLMs are very good at figuring out the connection between words and producing text that sounds natural.
"They take an input, which can often be a set of instructions, like 'Do this for me,' or 'Tell me about this,' or 'Summarize this,' and are able to extract those patterns out of the input and produce a long string of fluid response," Riedl said.
But they have several weaknesses.
Where do large language models struggle?
First, they're not good at telling the truth. In fact, they sometimes just make stuff up that sounds true, like when ChatGPT cited six fake court cases in a legal brief or when Google's Bard (the predecessor to Gemini) mistakenly credited the James Webb Space Telescope with taking the first pictures of a planet outside of our solar system. Those are known as hallucinations.
"They are extremely unreliable in the sense that they confabulate and make up things a lot," Sap said. "They're not trained or designed by any means to spit out anything truthful."
They also struggle with queries that are fundamentally different from anything they've encountered before. That's because they're focused on finding and responding to patterns.
A good example is a math problem with a unique set of numbers.
"It may not be able to do that calculation correctly because it's not really solving math," Riedl said. "It is trying to relate your math question to previous examples of math questions that it has seen before."
While they excel at predicting words, they're not good at predicting the future, which includes planning and decision-making.
"The idea of doing planning in the way that humans do it with … thinking about the different contingencies and alternatives and making choices, this seems to be a really hard roadblock for our current large language models right now," Riedl said.
Finally, they struggle with current events because their training data typically only goes up to a certain point in time and anything that happens after that isn't part of their knowledge base. Because they don't have the capacity to distinguish between what is factually true and what is likely, they can confidently provide incorrect information about current events.
They also don't interact with the world the way we do.
"This makes it difficult for them to grasp the nuances and complexities of current events that often require an understanding of context, social dynamics and real-world consequences," Snyder said.
How are LLMs integrated with search engines?
We're seeing retrieval capabilities evolve beyond what the models were originally trained on, including connections to search engines like Google, so a chatbot can run a web search and feed those results back into the LLM. This means it can better understand queries and provide responses that are more timely.
"This helps our linkage models stay current and up-to-date because they can actually look at new information on the internet and bring that in," Riedl said.
That was the goal, for instance, a while back with AI-powered Bing. Instead of tapping into search engines to enhance its responses, Microsoft looked to AI to improve its own search engine, in part by better understanding the true meaning behind consumer queries and better ranking the results for said queries. Last November, OpenAI introduced ChatGPT Search, with access to information from some news publishers.
But there are catches. Web search could make hallucinations worse without adequate fact-checking mechanisms in place. And LLMs would need to learn how to assess the reliability of web sources before citing them. Google learned that the hard way with the error-prone debut of its AI Overviews search results. The search company subsequently refined its AI Overviews results to reduce misleading or potentially dangerous summaries. But even recent reports have found that AI Overviews can't consistently tell you what year it is.
For more, check out our experts' list of AI essentials and the best chatbots for 2025.