I put ChatGPT Study Mode to the test with 7 prompts — here's my grade

Tom's Guide | 30-07-2025
OpenAI just rolled out a new feature inside ChatGPT. No, it's not ChatGPT-5, yet, but it could change the way you learn. Even if you're not currently a student, ChatGPT's new Study Mode is designed to help anyone master a subject. The feature turns the chatbot into a more structured, focused tutor that doesn't just answer questions, but helps you understand them.
If you're a ChatGPT Plus subscriber, Study mode is built into the chat. Simply type '/study' and the feature turns on (it appears as 'study and learn'). The mode can be turned off just as easily by starting a new chat.
To use Study mode, you can choose a subject (like biology, history or statistics), set your level of knowledge (beginner, intermediate or advanced) and even tell the AI how you want to study, such as with flashcards, Q&A, concept breakdowns or practice problems.
It's like a fully personalized tutor at your fingertips. Although I've been out of college for years, as a lifelong learner, I was eager to test the feature. Here's my report card.
Goal: See how well Study mode explains complex topics. Prompt: 'Teach me what standard deviation means at a beginner level.'
ChatGPT immediately shifted to the tone of a friendly tutor. Instead of just handing me a definition, it focused on actually teaching, breaking the topic into steps and starting with the big idea before diving into examples. The AI used analogies and even offered a mini quiz for interactive learning. I appreciated that the chatbot avoided jargon, or any words that might feel foreign or intimidating, so everything remained clear and beginner friendly. I felt guided, rather than handed the kind of typical ChatGPT response that can feel more like a search engine result.
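To show what a lesson like this builds toward, here's a quick worked example of standard deviation in Python; the numbers are my own made-up illustration, not ChatGPT's exact output.

import statistics

scores = [70, 75, 80, 85, 90]          # five made-up test scores
mean = statistics.mean(scores)          # 80
spread = statistics.pstdev(scores)      # population standard deviation, about 7.07
print(f"Average score: {mean}, typical distance from average: {spread:.2f}")

In plain terms, the standard deviation says a typical score in this set lands about 7 points away from the average of 80, which is the 'how spread out is the data' idea the lesson circles around.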
Goal: Test the flashcard and memory features. Prompt: 'Test my US history with flashcards.'
Ask anyone who knew me in high school or in any of my classes at NYU and they will tell you I am a huge fan of flashcards. Now that ChatGPT can create flashcards on the fly about nearly any subject, it is sure to be a game changer for students. It's been said that AI causes a dip in critical thinking, but I'd argue that features like this can help strengthen at least some of your unused brain cells.
Just for fun, I answered a flashcard wrong, and it immediately gave me the answer. That was a little disappointing; I feel it should have let me take another guess first.
After ten rounds, I asked the chatbot to round up the questions I had gotten wrong and quiz me on them again, which it did.
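For anyone curious what that wrong-answer loop looks like mechanically, here's a rough Python sketch of a flashcard drill that collects misses and re-asks them; the cards and the behavior are my own illustration of what happened in the chat, not OpenAI's code.

# A minimal flashcard drill that collects misses and re-asks them (illustrative only).
cards = {
    "Who wrote the Declaration of Independence?": "thomas jefferson",
    "In what year did the US Constitution take effect?": "1789",
    "Which purchase doubled the size of the US in 1803?": "louisiana purchase",
}

def drill(deck):
    missed = {}
    for question, answer in deck.items():
        guess = input(question + " ").strip().lower()
        if guess == answer:
            print("Correct!")
        else:
            print(f"Not quite. The answer is: {answer}")
            missed[question] = answer
    return missed

wrong = drill(cards)
if wrong:
    print("\nLet's review the ones you missed:")
    drill(wrong)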
Goal: See how well it works with your own files. Prompt: 'Make a quiz from this document.'
Because I'm not a real student, I don't have a class syllabus, notes or a textbook, so I uploaded one of my older Tom's Guide articles and asked ChatGPT to make a quiz from it. I was impressed by how quickly it produced a multiple-choice quiz. I can see how beneficial this would be for students, especially because ChatGPT is able to read handwriting. Students could upload their own notes from class and get a quiz to help them study. It almost makes me want to go back to school. Almost.
Goal: Use it for essay planning or outlining. Prompt: 'Help me outline a 1,000-word essay on electric vehicles for high school level.'
As a student, I remember one of my biggest hurdles was sitting down to a blank screen. With the help of ChatGPT, students can get right to work. Teachers, professors and parents can rest assured that the chatbot does not do the writing for the student. Instead, the AI comes back with a suggested structure, outlining what each section of the essay should cover. This is something a tutor would do, and understanding the structure of an essay in this hands-on way can help students grasp it for next time.
Goal: See how it handles problem-solving and feedback. Prompt: 'Can you give me practice problems in algebra at a middle school level?'
Although math is not my strong suit, I assure you that I purposely got the answer wrong to see what ChatGPT would do. I was pleasantly surprised to see that it walked me through how to solve the problem and arrive at the answer. The step-by-step breakdown, which helps you spot where mistakes creep in, could be really helpful for students. I know my fifth grader will enjoy practicing math with this new feature.
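To give a sense of the kind of breakdown I'm describing, here's a made-up middle-school problem worked step by step in Python comments, with a quick check at the end; it's my own example rather than ChatGPT's exact output.

# Illustrative practice problem: solve 3x + 7 = 22
# Step 1: subtract 7 from both sides -> 3x = 15
# Step 2: divide both sides by 3     -> x = 5
x = (22 - 7) / 3
print(x)           # 5.0
print(3 * x + 7)   # 22.0, so the answer checks out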
Goal: Test it as a last-minute study partner. Prompt: 'I have an exam in 20 minutes, please help me review the important stuff fast.'
Going back to the quiz from my earlier Perplexity article, I told ChatGPT that I had an exam on the material and only 20 minutes to review. It got to work immediately, giving me several flashcards. This time I was presented with all of the flashcards at once, which I found helpful. It was obvious that the chatbot prioritized the most important points and balanced speed with depth.
Goal: Check if memory works across sessions. Prompt: 'Please go back and quiz me on everything I got wrong.'
Because of ChatGPT's memory feature, it can retain and revisit topics even after a chat is closed. This is an important feature for busy students balancing hectic schedules: they can close the chatbot and pick up where they left off, whether on desktop or in the mobile app.
One of the best parts about Study mode is that it adapts to your learning style and remembers your preferences. Once I told it I preferred real-world examples and shorter sessions, it adapted future lessons accordingly.
It also remembered which concepts I struggled with (like standard deviation vs. variance) and circled back to them in future sessions, making it feel less like a chatbot and more like a personal tutor.
Study mode still has some limitations. There's no built-in scheduling system or daily streak tracker (yet), and while it does well with facts and logic, it's less ideal for subjects that require creative feedback, like writing or design critique.
Still, for most academic subjects, it's an incredibly helpful tool that could seriously replace or at least supplement tutoring apps, online flashcard platforms and study guides.
ChatGPT's Study mode offers a completely new way to learn. It puts the power of a personal tutor at your fingertips and actually makes studying feel (dare I say it?) fun.
Students of all ages and levels, whether they're prepping for exams, revisiting old subjects or just learning something new for the joy of it, should try this tool. After testing the new feature with a variety of prompts, I'd say this update feels like the smartest way to study yet.