What is ChatGPT? Here's everything you need to know about OpenAI's famous chatbot.

ChatGPT is OpenAI's flagship AI model. In August 2025, the company unveiled GPT-5, its latest iteration. OpenAI CEO Sam Altman says GPT-5 is its most advanced AI model yet, marked by increased general intelligence and enhanced usability, like a "real-time router" that selects the most appropriate model to handle each user request.
Here's everything you need to know.
What is ChatGPT?
OpenAI is the leading AI startup. Its ultimate mission is to develop artificial general intelligence, or AGI, in a way that benefits humanity as a whole. AGI is a still-theoretical AI that reasons as well as humans.
The company first released ChatGPT in November 2022, and the chatbot has yet to reach AGI-level intelligence. For now, it's a conversational assistant that relies on a large language model to generate responses to questions. Some people use it much as they would Google Search, but it can also conduct deeper research, generate images and reports, write just about anything, produce code, and solve problems that involve quantitative reasoning.
Since its debut, the chatbot's user base has exploded. OpenAI said in an August blog post that ChatGPT had reached 700 million weekly users.
How to use ChatGPT
ChatGPT is available online and as an app for both iOS and Android.
Users engage with it through conversation by simply typing in a prompt, an instruction for the chatbot. In 2024, OpenAI also unveiled an "advanced voice mode" that lets users engage with the chatbot in natural, real-time conversations, with the ability to sense emotions; the feature followed a legal dispute with Scarlett Johansson over the use of a voice that sounded too similar to hers.
Since the release of ChatGPT, OpenAI has unveiled several different ChatGPT models — all of which can be used in conjunction with the chatbot. It has rolled out a series of reasoning models, for example, which are designed to think more deeply about problems. It also unveiled GPT-4.5, which Altman described on X as "the first model that feels like talking to a thoughtful person."
Until the release of GPT-5, it was up to the user to understand which model was best for their needs. Now, GPT-5 can make that decision for them. That means, in essence, that the model is deciding how long it needs to think about a problem to get to the best answer.
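Developers who call OpenAI's models through its API, rather than through the ChatGPT interface, still choose a model explicitly. The snippet below is a minimal sketch using the OpenAI Python SDK, not OpenAI's routing logic itself; the model name and the prompt are placeholder assumptions, and a real application would add error handling.

```python
# Minimal sketch: sending a prompt to an OpenAI model via the Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment. The model name below is illustrative; substitute whichever
# model your account has access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # explicit model choice; ChatGPT's GPT-5 router does this for users
    messages=[
        {"role": "user", "content": "Explain what a large language model is in two sentences."}
    ],
)

print(response.choices[0].message.content)
```

In the ChatGPT product itself, GPT-5's router makes that selection automatically, so end users never see a model parameter.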
ChatGPT also offers dozens of plug-ins to paying subscribers. An Expedia plug-in can help you book a trip, while one from OpenTable will nab you a dinner reservation. OpenAI has also launched Code Interpreter, a version of ChatGPT that can code and analyze data.
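As a rough illustration of the kind of work Code Interpreter handles, the sketch below shows ordinary pandas analysis code of the sort the tool might generate and run when asked to summarize a dataset. The file name and column names are hypothetical; this is not OpenAI's own code.

```python
# Hypothetical example of the analysis code a "summarize this CSV" prompt to a
# code-and-data tool might produce. "sales.csv", "month", and "revenue" are
# placeholders, not a real dataset.
import pandas as pd

df = pd.read_csv("sales.csv")              # load the uploaded file
summary = df.describe()                    # basic statistics for numeric columns
monthly = (
    df.groupby("month")["revenue"].sum()   # total revenue per month
      .sort_values(ascending=False)
)

print(summary)
print(monthly.head())
```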
Despite the bot's impressive capabilities, it remains imperfect. ChatGPT relies on available data for its responses, which means it can sometimes produce misinformation. OpenAI has also been accused of stealing personal or copyrighted data to train ChatGPT, and the chatbot has been blamed for encouraging students to cheat and plagiarize on their assignments.
How does ChatGPT work?
Chatbots like ChatGPT are powered by large amounts of data and by computing techniques that make predictions and string words together meaningfully. They not only tap into a vast amount of vocabulary and information but also understand words in context. This helps them mimic speech patterns while dispatching encyclopedic knowledge.
When a user prompts a large language model, the query is broken into tokens, the smallest units of text a model processes. For OpenAI's models, tokens can be "as short as a single character or as long as a full word, depending on the language and context. Spaces, punctuation, and partial words all contribute to token counts," according to OpenAI.
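To make tokenization concrete, here is a short sketch using OpenAI's open-source tiktoken library, which exposes the byte-pair encodings its models use. The specific encoding name is chosen purely for illustration; different models use different encodings.

```python
# Sketch: counting and inspecting tokens with OpenAI's tiktoken library.
# Assumes `tiktoken` is installed; "cl100k_base" is one of OpenAI's published
# encodings and is used here only as an example.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "ChatGPT breaks prompts into tokens before generating a reply."
token_ids = enc.encode(text)

print(f"{len(token_ids)} tokens: {token_ids}")
# Decode each token individually to see how words and punctuation get split.
print([enc.decode([t]) for t in token_ids])
```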
ChatGPT's growing influence
Users have flocked to ChatGPT to improve their personal lives and boost productivity. The chatbot attracted 100 million users in its first five days on the market, a record at the time.
Some workers have used the AI chatbot to develop code, write real estate listings, and create lesson plans, while others have made teaching the best ways to use ChatGPT a career in itself.
Businesses, including consulting firms, are also scrambling to adopt AI. The popularity of ChatGPT crystallized the value of a conversational tool, McKinsey senior partner Delphine Zurkiya told Business Insider.
"There wasn't a major shift in our strategy in the sense that we had already been developing a lot of tools internally. It's just these tools now have become, we'll say faster, in delivering value thanks to that natural user interface," she said in regards to the firm's internal chatbot, Lilli. Many consulting firms are also building similar tools for clients. KPMG, for example, has been collecting data on how its workers prompt AI, and used that information to build new tools — for itself and clients.
AI is also making waves in the legal world. Gibson Dunn is piloting ChatGPT Enterprise for its roughly 500 lawyers and staff. Judges, however, say they've seen an increase in fake legal citations due to lawyers relying too much on AI.
A slate of ChatGPT competitors has also emerged since its launch. Meta AI, built on the company's Llama 4 model, offers users an AI assistant that "gets to know" user preferences, remembers context, and is personalized. Anthropic's Claude has become the leading AI assistant for coding. Elon Musk's xAI built Grok, a chatbot the company is training in line with Musk's support for free speech. Google has Gemini, a multimodal model that CEO Sundar Pichai called "one of the biggest science and engineering efforts we've undertaken as a company."
For OpenAI, which continues to unveil new models at a healthy clip, the chatbot is an eternal work in progress.
"There is no analogy for what we're building," Nick Turley, the company's head of ChatGPT, said on a podcast in August.

Related Articles

Stock Market News Review: SPY, QQQ Tumble on Alarming AI Report as Momentum Fades (Business Insider)

Both the S&P 500 ETF (SPY) and the Nasdaq 100 ETF (QQQ) finished Tuesday's trading session in negative territory following a report from MIT that cast doubt on the sustainability of AI. Elevate Your Investing Strategy: Take advantage of TipRanks Premium at 50% off! Unlock powerful investing tools, advanced data, and expert analyst insights to help you invest with confidence. The report estimates that U.S. companies have invested between $35 and $40 billion in AI, though the returns have been underwhelming. 'Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable [profit and loss] impact,' said MIT. The report surveyed hundreds of leaders and employees and collected data from 300 public AI announcements. Over the weekend, OpenAI CEO Sam Altman said that he believes the AI industry is experiencing a bubble, reported The Verge. 'I do think some investors are likely to lose a lot of money, and I don't want to minimize that, that sucks,' Altman said. 'There will be periods of irrational exuberance.' Meanwhile, the White House announced that President Trump is working to set up a bilateral meeting between Ukrainian President Volodymyr Zelenskyy and Russian President Vladimir Putin in an attempt to secure a ceasefire or truce. Trump added that he was open to attending the meeting. Furthermore, Trump has pledged air support for Ukraine as part of a security guarantee package while insisting that U.S. troops would not set foot on Ukrainian territory. Discussions surrounding these guarantees between the U.S., Ukraine, and several other European nations are set to begin in the coming days. Trump's efforts to broker peace between several nations haven't exactly improved his ratings. According to a Reuters/Ipsos poll ended August 18, Trump's approval rating is still at a term-low of 40%, remaining unchanged from late July. 54% of the respondents worried that Trump was too closely aligned with Russia. Trump met with Putin last week in Anchorage, Alaska to try and resolve the Russia-Ukraine war. To end on a positive note, S&P Global affirmed the U.S. long-term credit rating of AA+, citing elevated tariff revenue that is expected to offset the tax breaks and spending measures from The One Big Beautiful Bill. 'Amid the rise in effective tariff rates, we expect meaningful tariff revenue to generally offset weaker fiscal outcomes that might otherwise be associated with the recent fiscal legislation, which contains both cuts and increases in tax and spending,' said S&P.

What Worries Americans About AI? Politics, Jobs and Friends (CNET)

Americans have a lot of worries about artificial intelligence, like job losses and energy use. Even more so: political chaos. All of that is a lot to blame on one new technology that was an afterthought to most people just a few years ago. Generative AI, in the few years since ChatGPT burst onto the scene, has become so ubiquitous in our lives that people have strong opinions about what it means and what it can do.

A Reuters/Ipsos poll conducted Aug. 13-18 and released Tuesday dug into some of those specific concerns. It focused on the worries people have about the technology, of which the general public has often had a negative perception. In this survey, 47% of respondents said they believe AI is bad for humanity, compared with 31% who disagreed with that statement. Compare those results with a Pew Research Center survey, released in April, that found 35% of the public believed AI would have a negative impact on the US, versus 17% who believed it would be positive. That sentiment flipped when Pew asked AI experts the same question: the experts were more optimistic, with 56% expecting a positive impact and only 15% expecting a negative one.

The Reuters/Ipsos poll specifically highlights some of the immediate, tangible concerns many people have with the rapid expansion of generative AI technology, along with less-specific fears about runaway robot intelligence. The numbers indicate more concern than comfort with those bigger-picture, long-term questions, like whether AI poses a risk to the future of humankind (58% agree, 20% disagree). But even larger portions of the American public are worried about more immediate issues.

Foremost among those immediate issues is the potential that AI will disrupt political systems, with 77% of those polled saying they were concerned. AI tools, particularly image and video generators, have the potential to create distorting or manipulative content (known as deepfakes) that can mislead voters or undermine trust in political information, particularly on social media.

Most Americans, at 71%, said they were concerned AI would cause too many people to lose jobs. The impact of AI on the workforce is expected to be significant, with some companies already talking about being "AI-first." AI developers and business leaders tout the technology's ability to make workers more efficient. But other polls have also shown how common fears of job loss are. The April Pew survey found 64% of Americans and 39% of AI experts thought there would be fewer jobs in the US in 20 years because of AI.

But the Reuters/Ipsos poll also noted two other worries that have become more mainstream: the effect of AI on personal relationships and on energy consumption. Two-thirds of respondents in the poll said they were concerned about AI's use as a replacement for in-person relationships. Generative AI's human-like tone (which comes from the fact that it was trained on, and therefore replicates, writing by humans) has led many users to treat chatbots and characters as if they were, well, actual friends. This is widespread enough that OpenAI, when it rolled out the new GPT-5 model this month, had to bring back an older model with a more conversational tone because users felt like they'd lost a friend. Even OpenAI CEO Sam Altman acknowledged that users treating AI as a kind of therapist or life coach made him "uneasy."

The energy demands of AI are also significant and a concern for 61% of Americans surveyed. The demand comes from the massive amounts of computing power required to train and run large language models like OpenAI's ChatGPT and Google's Gemini. The data centers that house these computers are like giant AI factories, and they're taking up space, electricity and water in a growing number of places.

California State University Bets $17 Million on ChatGPT for All Students and Faculty (Yahoo)

California State University, the nation's largest public four-year system, will make OpenAI's ChatGPT available to all students and faculty starting this year. The effort is controversial, costing CSU almost $17 million despite an existing $2.3 million budget gap, even after countermeasures such as a tuition increase and spending cuts that have reduced course offerings for students.

Across its 23 campuses, some CSU students are paying for personal ChatGPT subscriptions, so university officials say their decision to provide AI tools is a matter of equity. CSU wants each student to have equal access to tools and learning opportunities regardless of means or which campus they attend.

The rise of AI has altered how students learn and professors teach, as each assignment now carries the risk that AI, rather than the student's own knowledge, did the work. AI's ongoing influence has led professors to question the originality of student work, with a dramatic increase in academic misconduct claims, whether a student used the tool or not. AI has also threatened the potential of students in tech majors, making it essential for them to become fluent in ChatGPT.

But if you can't beat them, join them. Universities across the country, including some public institutions, have been establishing deals with OpenAI. Among them are the CSU schools, which serve nearly half a million students and have devoted more resources to generative AI than any other public university, both in terms of funding and reach.

ChatGPT Edu, an OpenAI chatbot designed for college settings, is provided and tailored to each campus it serves. The academic chatbot offers a diverse range of tools for students and faculty, including access to GPT-5, the company's flagship model, and the ability to make custom AI models. Researchers at Columbia University in New York City even built a prediction tool to assist with decreasing overdose fatalities, a task that, without the platform, would have taken weeks of research rather than mere seconds. ChatGPT Edu can also be used as a classic study aid, assisting students and faculty with their academic needs. The company suggests using it for personalized tutoring for students, helping with writing grant applications, and assisting faculty. While anyone can use a version of ChatGPT for free, the academic version's possibilities are limitless, the data is kept private, and it is not used to train future models. More advanced ChatGPT Plus versions range from $20 to $200 a month.

In the first half of this year, CSU paid $1.9 million to grant ChatGPT Edu access to 40,000 users. Starting in July, the university system paid $15 million for a year's use for 500,000 users, securing a lower cost per student than other universities.

Despite the major discount, CSU professors still have their concerns. "For me, it's frightening," said Kevin Wehr, a sociology professor at Sacramento State and chair of the California Faculty Association's bargaining team. "I already have all sorts of problems with students engaging in plagiarism. This feels like it takes a shot of steroids and injects it in the arm of that particular beast." Wehr also cautions that chatbots can often generate "hallucinations," or inaccurate information, with many responses spreading racial and gender bias.

CSU's financial struggles are also still in question. "We are cutting programs. We are merging campuses. We are laying off faculty. We are making it harder for students to graduate," Wehr said. And instead of using that money to ameliorate those issues, he added, "we're giving it to the richest technology companies in the world." However, CSU is hopeful that the new addition will provide equitable access and prepare all students for a digitally advanced future.

This story was originally reported by L.A. Mag on Aug 19, 2025.
