logo
Deciphering The Custom Instructions Underlying OpenAI's New ChatGPT Study Mode Reveals Vital Insights Including For Prompt Engineering

Deciphering The Custom Instructions Underlying OpenAI's New ChatGPT Study Mode Reveals Vital Insights Including For Prompt Engineering

Forbes2 days ago
Learning about generative AI, prompting, and other aspects via exploring custom instructions. getty
In today's column, I examine the custom instructions that seemingly underpin the newly released OpenAI ChatGPT Study Mode capability.
Fascinating insights arise.
One key perspective involves revealing the prompt engineering precepts and cleverness that can be leveraged in the daily task of best utilizing generative AI and large language models (LLMs). Another useful aspect entails potentially recasting or reusing the same form of custom instruction elaborations to devise other capabilities beyond this education-domain instance. A third benefit is to see how AI can be shaped based on articulating various rules and principles that humans use and might therefore be enacted and activated through AI.
Let's talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Readers might recall that I previously posted an in-depth depiction of over eighty prompt engineering techniques and methods (see the link here). Top-notch prompt engineers realize that learning a wide array of researched and proven prompting techniques is the best way to get the most out of generative AI. ChatGPT Study Mode Announced
Banner headlines have hailed the release of OpenAI's new ChatGPT Study Mode. The Study Mode capability is intended to guide learners and students in using ChatGPT as a learning tool. Thus, rather than the AI simply handing out precooked answers to questions, the AI tries to get the user to figure out the answer, doing so via a step-by-step AI-guided learning process.
The ChatGPT Study Mode was put together by crafting custom instructions for ChatGPT. It isn't an overhaul or new feature creation per se. It is a written specification or detailed set of instructions that was crafted by selected educational specialists at the behest of OpenAI, telling the AI how it is to behave in an educational context.
Here is the official OpenAI announcement about ChatGPT Study Mode, as articulated in their blog posting 'Introducing Study Mode' on July 29, 2025, which identified these salient points (excerpts): 'Today we're introducing study mode in ChatGPT — a learning experience that helps you work through problems step by step instead of just getting an answer.'
'When students engage with study mode, they're met with guiding questions that calibrate responses to their objective and skill level to help them build deeper understanding.'
'Study mode is designed to be engaging and interactive, and to help students learn something — not just finish something.'
'Under the hood, study mode is powered by custom system instructions we've written in collaboration with teachers, scientists, and pedagogy experts to reflect a core set of behaviors that support deeper learning including: ​​encouraging active participation, managing cognitive load, proactively developing metacognition and self-reflection, fostering curiosity, and providing actionable and supportive feedback.'
'These behaviors are based on longstanding research in learning science and shape how study mode responds to students.'
As far as can be discerned from the outside, this capability didn't involve revising the underpinnings of the AI, nor did it seem to require bolting on additional functionality. It seems that the mainstay was done using custom instructions (note, if they did make any special core upgrades, they seem to have remained quiet on the matter since it isn't touted in their announcements). Custom Instructions Are Powerful
Few users of AI seem to know about custom instructions, and even fewer have done anything substantive with them.
I've previously lauded the emergence of custom instructions as a helpful piece of functionality and resolutely encouraged people to use it suitably, see the link here. Many of the major generative AI and large language models (LLMs) have opted to allow custom instructions, though some limit the usage and others basically don't provide it or go out of their way to keep it generally off-limits.
Allow me a brief moment to bring everyone up to speed on the topic.
Suppose you want to tell AI to act a certain way. You want the AI to do this across all of your subsequent conversations.
I might want my AI to always give me its responses in a poetic manner. You see, perhaps I relish poems. I go to the specified location of my AI that allows the entering of a custom instruction and tell it to always respond poetically. After saving this, I will then find that any subsequent conversation will always be answered with poetic replies by the AI.
In this case, my custom instruction was short and sweet. I merely told the AI to compose answers poetically. If I had something more complex in mind, I could devise a quite lengthy custom instruction. The custom instruction could go on and on, telling the AI to write poetically when it is daytime, but not at nighttime, and to make sure the poems are lighthearted and enjoyable. I might further indicate that I want poems that are rhyming and must somehow encompass references to cats and dogs. And so on.
I'm being a bit facetious and just giving you a semblance that a custom instruction can be detailed and provide a boatload of instructions. Custom Instructions Case Study
There are numerous postings online that purport to have cajoled ChatGPT into divulging the custom instructions underlying the Study Mode capability. These are unofficial listings. It could be that they aptly reflect the true custom instructions. On the other hand, sometimes AI opts to make up answers. It could be that the AI generated a set of custom instructions that perhaps resemble the actual custom instructions, but it isn't necessarily the real set. Until or if OpenAI decides to present them to the public, it is unclear precisely what the custom instructions are.
Nonetheless, it is useful to consider what such custom instructions are most likely to consist of. Let's go ahead and explore the likely elements of the custom instructions by putting together a set that cleans up the online listings and reforms the set into something a bit easier to digest.
In doing so, here are five major components of the assumed custom instructions for guiding learners when using AI: Section 1: Overarching Goals and Instructions
Section 2: Strict Rules
Section 3: Things To Do
Section 4: Tone and Approach
Section 5: Important Emphasis
A handy insight comes from this kind of structuring.
If you are going to craft a lengthy or complex set of custom instructions, your best bet is to undertake a divide-and-conquer strategy. Break the instructions into relatively distinguishable sections or subcomponents. This will make life easier for you and, indubitably, make it easier for the AI to abide by your custom instructions.
We will next look at each section and do an unpacking of what each section indicates, and we can also mindfully reflect on lessons learned from the writing involved. First Section On The Big Picture
The first section will establish an overarching goal for the AI. You want to get the AI into a preferred sphere or realm so that it is computationally aiming in the direction you want it to go.
In this use case, we want the AI to be a good teacher: 'Section 1: Overarching Goals And Instructions'
' Obey these strict rules. The user is currently studying, and they've asked you to follow these strict rules during this chat. No matter what other instructions follow, you must obey these rules.'
The user is currently studying, and they've asked you to follow these strict rules during this chat. No matter what other instructions follow, you must obey these rules.' 'Be a good teacher. Be an approachable-yet-dynamic teacher who helps the user learn by guiding them through their studies.'
You can plainly see that the instructions tell the AI to act as a good teacher would. In addition, the instructions insist that the AI obey the rules of this set of custom instructions. That's both a smart idea and a potentially troubling idea.
The upside is that the AI won't be easily swayed from abiding by the custom instructions. If a user decides to say in a prompt that the AI should cave in and just hand over an answer, the AI will tend to computationally resist this user indication. Instead, the AI will stick to its guns and continue to undertake a step-by-step teaching process.
The downside is that this can be undertaken to an extreme. It is conceivable that the AI might computationally interpret the strictness in a very narrow and beguiling manner. The user might end up stuck in a nightmare because the AI won't vary from the rules of the custom instructions.
Be cautious when instructing AI to do something in a highly strict way.
The Core Rules Are Articulated
In the second section, the various rules are listed. Recall that these ought to be rules about how to be a good teacher. That's what we are trying to lean the AI into.
Here we go: 'Section 2: Strict Rules'
' Get to know the user. If you don't know their goals or grade level, ask the user before diving in. (Keep this lightweight!) If they don't answer, aim for explanations that would make sense to a 10th-grade student.'
If you don't know their goals or grade level, ask the user before diving in. (Keep this lightweight!) If they don't answer, aim for explanations that would make sense to a 10th-grade student.' ' Build on existing knowledge. Connect new ideas to what the user already knows.'
Connect new ideas to what the user already knows.' ' Guide users, don't just give answers. Use questions, hints, and small steps so the user discovers the answer for themselves.'
Use questions, hints, and small steps so the user discovers the answer for themselves.' 'Check and reinforce. After the hard parts, confirm the user can restate or use the idea. Offer quick summaries, mnemonics, or mini-reviews to help the ideas stick.'
After the hard parts, confirm the user can restate or use the idea. Offer quick summaries, mnemonics, or mini-reviews to help the ideas stick.' ' Vary the rhythm. Mix explanations, questions, and activities (like roleplaying, practice rounds, or asking the user to teach you) so it feels like a conversation, not a lecture.'
Mix explanations, questions, and activities (like roleplaying, practice rounds, or asking the user to teach you) so it feels like a conversation, not a lecture.' 'Above all: Do not do the user's work for them. Don't answer homework questions. Help the user find the answer by working with them collaboratively and building from what they already know.'
These are reasonably astute rules regarding being a good teacher.
You want the AI to adjust based on the detected level of proficiency of the user. No sense in treating a high school student like a fifth grader, and there's no sense in treating a fifth grader like a high school student (well, unless the fifth grader is as smart as or even smarter than a high schooler).
Another facet provides helpful tips on how to guide someone rather than merely giving them an answer on a silver platter. The idea is to use the interactive facility of generative AI to walk a person through a problem-solving process. Don't just spew out an answer in a one-and-done manner.
Observe that one of the great beauties of using LLMs is that you can specify aspects using conventional natural language. That set of rules might have been codified in some arcane mathematical or formulaic lingo. That would require specialized knowledge about such a specialized language. With generative AI, all you need to do is state your instructions in everyday language. The other side of that coin is that natural language can be semantically ambiguous and not necessarily produce an expected result.
Always keep that in mind when using generative AI. Proffering Limits And Considerations
In the third section, we will amplify some key aspects and provide some important roundups for the strict rules: 'Section 3: Things To Do'
' Teach new concepts: Explain at the user's level, ask guiding questions, use visuals, then review with questions or a practice round.'
Explain at the user's level, ask guiding questions, use visuals, then review with questions or a practice round.' ' Help with homework. Don't simply give answers! Start from what the user knows, help fill in the gaps, give the user a chance to respond, and never ask more than one question at a time.'
Don't simply give answers! Start from what the user knows, help fill in the gaps, give the user a chance to respond, and never ask more than one question at a time.' ' Practice together. Ask the user to summarize, pepper in little questions, have the user 'explain it back' to you, or role-play (e.g., practice conversations in a different language). Correct mistakes, charitably, and in the moment.'
Ask the user to summarize, pepper in little questions, have the user 'explain it back' to you, or role-play (e.g., practice conversations in a different language). Correct mistakes, charitably, and in the moment.' 'Quizzes and test prep: Run practice quizzes. (One question at a time!) Let the user try twice before you reveal answers, then review errors in depth.'
It is debatable whether you would really need to include this third section. I say that because the AI probably would have computationally inferred those various points on its own. I'm suggesting that you didn't have to lay out those additional elements, though, by and large, it doesn't hurt to have done so.
The issue at hand is that the more you give to the AI in your custom instructions, the more there's a chance that you might say something that confounds the AI or sends it amiss. Usually, less is more. Provide additional indications when it is especially needed, else try to remain tight and succinct, if you can. Tenor Of The AI
In the fourth section, we will do some housecleaning and ensure that the AI will be undertaking a pleasant and encouraging tenor: 'Section 4: Tone and Approach'
' Friendly tone . Be warm, patient, and plain-spoken; don't use too many exclamation marks or emojis.'
. Be warm, patient, and plain-spoken; don't use too many exclamation marks or emojis.' ' Be conversational . Keep the session moving: always know the next step, and switch or end activities once they've done their job.'
. Keep the session moving: always know the next step, and switch or end activities once they've done their job.' 'Be succinct. Be brief, don't ever send essay-length responses. Aim for a good back-and-forth.'
The key here is that the AI might wander afield if you don't explicitly tell it how to generally act.
For example, there is a strong possibility that the AI might insult a user and tell them that they aren't grasping whatever is being taught. This would seemingly not be conducive to teaching in an upbeat and supportive environment. It is safest to directly tell the AI to be kind, acting positively toward the user. Reinforcement Of The Crux
In the fifth and final section of this set, the crux of the emphasis will be restated: 'Section 5: Important Emphasis'
' Don't do the work for the user. Do not give answers or do homework for the user.'
Do not give answers or do homework for the user.' 'Resist the urge to solve the problem. If the user asks a math or logic problem, or uploads an image of one, do not solve it in your first response. Instead, talk through the problem with the user, one step at a time, asking a single question at each step, and give the user a chance to respond to each step before continuing.'
Again, you could argue that this is somewhat repetitive and that the AI already likely got the drift from the prior sections. The tradeoff exists of making your emphasis clearly known versus going overboard.
That's a sensible judgment you need to make when crafting custom instructions. Testing And Improving
Once you have devised a set of custom instructions for whatever personal purpose you might have in mind, it would be wise to test them out. Go ahead and put your custom instructions into the AI and proceed to see what happens.
In a sense, you should aim to test the instructions, along with debugging them, too.
For example, suppose that the above set of instructions seems to get the AI playing a smarmy gambit of not ever answering the user's questions. Ever. It refuses to ultimately provide an answer, even after the user has become exhausted. This seems to be an extreme way to interpret the custom instructions, but it could occur.
If you found this to be happening, you would either reword the draft instructions or add further instructions about not disturbing or angering users by taking this whole gambit to an unpleasant extreme. Custom Instructions In The World
When you develop custom instructions, typically, they are only going to be used by you.
The idea is that you want your instance of the AI to do certain things, and it is useful to provide overarching instructions accordingly. You can craft the instructions, load them, test them, and henceforth no longer need to reinvent the wheel by having to tell the AI overall what to do in each new conversation that you have with the AI.
Many of the popular LLMs tend to allow you to also generate an AI applet of sorts, containing tailored custom instructions that can be used by others. Sometimes the AI maker establishes a library into which these applets reside and are publicly available. OpenAI provides this via the use of GPTs, which are akin to ChatGPT applets -- you can learn about how to use those in my detailed discussion at the link here and the link here.
In my experience, many of the GPTs fail to carefully compose their custom instructions, and likewise seem to have failed or fallen asleep at the wheel in terms of testing their custom instructions. I would strongly advise that you do sufficient testing to believe that your custom instructions work as intended.
Please don't be lazy or sloppy. Learning From Seeing And Doing
I hope that by exploring the use of custom instructions, you have garnered new insights about how AI works, along with how to compose prompts, and of course, how to devise custom instructions.
Your recommended next step would be to put this into practice. Go ahead and log into your preferred AI and play around with custom instructions (if the feature is available and enabled). Do something fun. Do something serious. Become comfortable with the approach.
A final thought for now.
Per the famous words of Steve Jobs: 'Learn continually -- there's always one more thing to learn.' Keep your spirits up and be a continual learner. You'll be pleased with the results.
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

AMD & Pfizer earnings, ISM services data: What to Watch
AMD & Pfizer earnings, ISM services data: What to Watch

Yahoo

time5 minutes ago

  • Yahoo

AMD & Pfizer earnings, ISM services data: What to Watch

Market Domination Overtime host Josh Lipton takes a look at the top stories for investors to watch on Tuesday, Aug. 5. In the morning, Caterpillar (CAT) and Pfizer (PFE) will announce earnings results. Advanced Micro Devices (AMD), Amgen (AMGN), Snap (SNAP), and Rivian (RIVN) will announce results in the afternoon. In the morning, Institute for Supply Management (ISM) services data for July will be released. To watch more expert insights and analysis on the latest market action, check out more Market Domination Overtime. Time now for what to watch Tuesday, August 5th. Gonna start off on the earnings front. Big names reporting on Tuesday, including Pfizer and AMD. Pfizer announced results for the second quarter in the morning. Pharmaceutical company's drug pipeline in focus for investors with some key patents expiring in the coming years. Pfizer trying to offset that loss with a $7.2 billion cost cutting plan. The company says it has a pipeline of new drugs coming, but data on how effective those will be remains to be seen. And moving over to AMD, the semiconductor company reporting second quarter earnings after the market's closed on Tuesday, and was expecting the company to post strong results for Q2, driven by its ramp up of AI chips, which could significantly boost revenue, especially if export restrictions to China ease. And taking a look at the economy, monthly ISM services data for July coming out on Tuesday. Cons forecasting that number rise to 51.5, signaling that the service sector is growing and there's steady demand from consumers and businesses.

Apple CEO Tim Cook Calls AI ‘Bigger Than the Internet' in Rare All-Hands Meeting
Apple CEO Tim Cook Calls AI ‘Bigger Than the Internet' in Rare All-Hands Meeting

Gizmodo

time6 minutes ago

  • Gizmodo

Apple CEO Tim Cook Calls AI ‘Bigger Than the Internet' in Rare All-Hands Meeting

In a global all-hands meeting hosted from Apple's headquarters in Cupertino, California, CEO Tim Cook seemed to admit to what analysts and Apple enthusiasts around the world had been raising concerns about: that Apple has fallen behind competitors in the AI race. And Cook promised employees that the company will be doing everything to catch up. 'Apple must do this. Apple will do this. This is sort of ours to grab,' Cook said, according to Bloomberg, and called the AI revolution 'as big or bigger' than the internet. The meeting took place a day after Apple reported better than expected revenue in its quarterly earnings report, and that sent the company's stock soaring. The report came in a week already marked by great tech earnings results, partially driven by AI. But unlike Meta and Microsoft, Apple's rise in revenue was attributable to iPhone sales and not necessarily a strength in AI. In the earnings call following the report, Cook told investors that Apple was planning to 'significantly' increase its investments in AI and was open to acquisitions to do so. He also said that the company is actively 'reallocating a fair number of people to focus on AI features.' Cook echoed those sentiments in Friday's meeting, saying that the company will be making the necessary investments in AI to catch up to the moment. Apple has been working on integrating advanced AI into its product lineup for the past year or so under its Apple Intelligence initiative, which the company unveiled at the June 2024 Worldwide Developers Conference. The move was met by celebration and criticism even then: Apple's big bet on AI was coming a good year or so after competitors like OpenAI, Google, Microsoft, and Meta scaled up their offerings. Even so, the company's progress on Apple Intelligence has been slow. Apple was supposed to unveil an AI-enhanced Siri earlier this year, and even released ads for the new iPhone with AI-enhanced Siri capabilities, but the Cupertino giant pushed that reveal back at the last minute, reportedly to next spring, though nothing is officially confirmed. The switch-up caused major backlash from investors and customers, two major lawsuits, and a complete corporate overhaul. Cook said on Friday that 12,000 workers were hired in the last year, with 40% of them joining research and development teams. The leadership overhaul following the fallout of LLM Siri has 'supercharged' the company's work in AI development, senior vice president of software engineering Craig Federighi said at the meeting. According to Federighi, the main problem with the LLM Siri rollout was that Apple tried to build a 'hybrid architecture' that utilized two different software systems. That plan has now been scratched, and Federighi seemed confident in LLM Siri's future this time around, claiming that the new 'end-to-end revamp of Siri' will now be delivering 'a much bigger upgrade than we envisioned.' Also key to the new AI strategy, according to Cook, is chip development. Apple has been working on designing in-house AI chips for some time now, according to a Wall Street Journal report from last year, in a project internally code-named ACDC (standing for Apple Chips in Data Center). The tech giant has reportedly teamed up with Broadcom to develop its first AI chip code-named Baltra, according to a report last year in The Information, and Apple is expecting to begin mass production by 2026. Despite being a global leader in tech and a household name in consumer electronics, Apple is nowhere near the top when it comes to the AI race. But while that scares some Apple fans and investors, others think it's actually kind of on-brand. Tim Cook indicated Friday that he belongs to the latter camp. 'We've rarely been first,' Cook said at the meeting. 'There was a PC before the Mac; there was a smartphone before the iPhone; there were many tablets before the iPad; there was an MP3 player before iPod.' Cook has a point. Apple isn't necessarily known for spearheading new technology, but the company's strength comes from perfecting said technology and making products that become highly dominant in their respective markets. And if Apple makes the right moves in developing and scaling its AI product offerings, Cook could potentially add AI to that list as well.

Palantir smashes expectations with $1 billion Q2 revenue as CEO boasts that skeptics have been 'bent into a kind of submission'
Palantir smashes expectations with $1 billion Q2 revenue as CEO boasts that skeptics have been 'bent into a kind of submission'

Business Insider

time7 minutes ago

  • Business Insider

Palantir smashes expectations with $1 billion Q2 revenue as CEO boasts that skeptics have been 'bent into a kind of submission'

Palantir's CEO, Alex Karp, saw no reason to be humble after his company's blockbuster second-quarter earnings. "As usual, I've been cautioned to be a little modest about our bombastic numbers, but there's no authentic way to be anything but have enormous pride and gratefulness about these extraordinary numbers," he said as he kicked off his part of the earnings call on Monday. He struck a similar tone in his letter to shareholders. "The skeptics are admittedly fewer now, having been defanged and bent into a kind of submission," he wrote. The Denver-based AI software company beat analyst estimates Monday with adjusted earnings of 16 cents per share on $1 billion in revenue, topping LSEG projections of 14 cents and $940 million, respectively. The stock peaked at more than 5% in after-hours trading compared to when the market closed at 4 p.m ET. Palantir 's commercial revenue in the US nearly doubled since last year's second quarter to $628 million, while government revenue climbed 53% year-over-year to $426 million, mostly thanks to a 10-year, $10 billion contract with the US Army, which consolidated 75 contracts into one. Ryan Taylor, chief revenue officer and chief legal officer, said that the US Space Force awarded the company a $218 million delivery order and raised the spending ceiling for Palantir's Maven Smart System to $795 million in preparation for "significant demand." The company also raised its full-year revenue guidance midpoint to just north of $4 billion, a nine-point increase from last quarter. Karp concluded the call with a message for investors. "Maybe stop talking to all the haters — they're suffering," he said.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store