OpenAI brings 'study mode' to ChatGPT: What it is, how it works, and more


US-based artificial intelligence company OpenAI has introduced a new 'study mode' in ChatGPT that helps users work through academic questions step-by-step, instead of simply providing answers. The feature is available starting July 29 for all logged-in users across ChatGPT Free, Plus, Pro, and Team plans, with access for ChatGPT Edu users expected in the coming weeks. The update is part of OpenAI's wider push to make ChatGPT more useful in educational settings.
ChatGPT's study mode: What it is
According to OpenAI, study mode is designed to support learning rather than provide shortcuts for it. Instead of offering direct answers, ChatGPT now responds with a guided experience that includes interactive prompts, hints, and self-reflection questions.
Lessons are broken down into structured sections that connect concepts and reduce cognitive overload, especially when handling dense or complex topics. Users can also take quick quizzes or answer open-ended questions to check their understanding during the session, according to OpenAI's blog.
The feature remains optional and can be turned on or off during any conversation, allowing users to switch between regular and guided learning modes depending on their goals.
ChatGPT study mode: Key features
Interactive guidance: Prompts use Socratic questioning techniques (a discussion that leads to deeper understanding and critical thinking) to engage users in the learning process rather than delivering final answers outright.
Organised explanations: Information is organised into easy-to-follow sections that highlight the key connections between topics, keeping information engaging with just the right amount of context and reducing overwhelm for complex topics.
Personalised difficulty: The system adjusts based on a user's prior chats and self-reported skill level.
Built-in knowledge checks: Short quizzes and feedback are included to help users reflect on what they've learned.
Flexible use: Users can toggle study mode on and off as needed throughout a session.
ChatGPT study mode: How it works
As per OpenAI, study mode runs on custom system instructions crafted with input from teachers, scientists, and education experts. These instructions are intended to reflect established learning principles, such as reducing cognitive load and promoting self-reflection.
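OpenAI has not published study mode's actual instructions, but the mechanism it describes (a system message that steers a chat model toward tutoring behaviour, and that can be toggled per conversation) is standard for chat-style APIs. Here is a minimal sketch of that idea; the instruction text and the `build_messages` helper are hypothetical illustrations, not OpenAI's implementation.

```python
# Hypothetical sketch: how "custom system instructions" could drive a
# study-mode-like experience. The instruction text below is invented for
# illustration; only the mechanism (a system message prepended to the
# conversation, toggled on or off) is being demonstrated.

STUDY_MODE_INSTRUCTIONS = (
    "You are a tutor. Do not give final answers directly. "
    "Guide the student with Socratic questions, hints, and short "
    "knowledge checks. Break explanations into small sections and "
    "adjust difficulty to the student's self-reported skill level."
)

def build_messages(user_question, study_mode=True):
    """Assemble the message list sent to a chat completion endpoint."""
    messages = []
    if study_mode:
        # Toggling study mode on or off simply swaps the system message,
        # which is why it can be switched mid-conversation.
        messages.append({"role": "system", "content": STUDY_MODE_INSTRUCTIONS})
    messages.append({"role": "user", "content": user_question})
    return messages

msgs = build_messages("Solve x^2 - 5x + 6 = 0")
```

With `study_mode=False`, the same helper returns only the user message, which matches the article's point that regular mode still provides direct answers.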
ChatGPT study mode: Limitations
For now, study mode relies on user motivation. It's not mandatory, and there are no tools available to enforce its use—either by schools, parents, or institutions. Students can turn it off at any time and revert to ChatGPT's regular mode, which still provides direct answers.
Looking ahead, OpenAI says it plans to add clearer visuals for complex material, goal-setting options, progress tracking, and deeper personalisation.

Related Articles

Decoding UK's OSA, ChatGPT adds emotional support, and pondering Windows' future

Hindustan Times

25 minutes ago


The UK's newly updated enforcement of the Online Safety Act (OSA) is worth a conversation this week. Much of the online coverage would lead you to believe that a brand-new act has just been passed into law, but it hasn't: the Online Safety Act received Royal Assent on 26 October 2023, some provisions came into force on 10 January 2024, and additional elements took effect on 1 April 2024. We're talking about it now because the critical age verification requirements took effect on 25 July 2025, meaning all online platforms accessible in that part of the world are legally required to implement "highly effective" age assurance measures. The fallout will not be UK-only, either: it could reshape the digital landscape globally, much as the GDPR (General Data Protection Regulation) of 2016 shaped how online platforms and services collect and handle user data, and informed subsequent regulations worldwide.

The mandatory age verification measures that came into force late last month are meant to provide substantial legal assurance of a user's age and consent, the idea being to reduce access to content such as pornography, or anything that encourages self-harm, suicide, or eating disorders, on the World Wide Web.

There are two sides to this coin. Tech companies and content creators are alarmed by the OSA's sweeping new requirements. Any site accessible in the UK (including social media, search engines, music sites, and adult content providers) that does not enforce age checks to prevent children from seeing harmful content now faces potential fines of up to 10% of its revenue for non-compliance. This could well pressure platforms into implementing invasive verification systems. Depending on how a specific platform does it, methods include scanning your face, a credit card, or an identity document before you can access content.
The UK's regulators have been at it for a while; a recent case in point is the Investigatory Powers Act, which we decoded in my Tech Tonic column recently, and which would have forced tech companies to disable active encryption methods, putting user data at significant risk. There are privacy and access implications here, something digital rights advocates warn about: these measures have the potential to create an unprecedented surveillance infrastructure, with massive databases of personal and biometric information inevitably vulnerable to breaches and misuse. Users must now choose between privacy and access, fundamentally altering the internet's traditionally open nature.

'The Act, which is now coming in enforcement stages exemplifies how well-intended laws can cause unintended consequences on other aspects of technologies. The mandatory use of accredited technology is bound to weaken end-to-end encryption which is the hallmark of a free digital society without which commerce or personal communications systems cannot work. Any of the current age verification methods cannot be imposed without addressing biometric surveillance creep, data breaches and misuse, and increased centralization of user data,' explains a spokesperson of Software Freedom Law Centre India, in a conversation with us.

'The OSA's age assurance rules require platforms to use facial scans, upload IDs, or verify age through banking or telecom data. These measures raise serious privacy concerns and discourage online anonymity. Larger platforms are testing third-party software for this, but the risk does not disappear, it spreads. User data could now sit with multiple external vendors, increasing the chances of leaks or misuse,' points out Vikram Jeet Singh, Partner at BTG Advaya, a law firm.

Possible global implications cannot be ignored, considering the OSA's impact extends far beyond British borders, potentially influencing online speech frameworks worldwide.
There is an argument that while the Act may be effective in some form, it erodes the right to privacy and free speech while also compromising cybersecurity. Countries such as India, already grappling with content regulation challenges, are likely to be watching the UK's approach closely, as a potential model or a cautionary tale. The precedent set by Britain's age verification requirements could normalise similar measures globally, leading to a fragmented internet where access to information depends not only on geography but also on a willingness to submit to digital surveillance. The spokesperson details this: 'Such laws generally have global ripple effects like the GDPR. Companies may choose to adopt UK-compliant policies to avoid the costs related to fragmentation. Countries will emulate such provisions to curb dissent and justify surveillance under the guise of child protection or moral regulation by the state.'

What's the way forward? The UK's now complete Online Safety Act represents a watershed moment for internet governance, confronting fundamental and often opposing questions about digital rights and any government's commitment to protecting children. The intent of the UK government is commendable in terms of what it is trying to achieve: the internet as a safe space for children. However, the immediate surge in VPN downloads in the UK on the Apple App Store and Google Play Store suggests citizens aren't likely to play along. Does that potentially undermine the Act's effectiveness?

EMOTIONAL SUPPORT

OpenAI says it is updating ChatGPT (irrespective of which model you use) to give it the ability to detect mental or emotional distress. The AI company wants ChatGPT to work better for users when they want guidance and perhaps a pep talk, rather than pure facts or information.
'I'm feeling stuck—help me untangle my thoughts' is an example OpenAI mentions, among others, to indicate the GPT models will be more capable of engaging with the reasoning behind a user's thoughts, rather than simply tokenising those words into a response. Also new are gentle reminders during long sessions to encourage breaks. OpenAI isn't pulling this out of thin air; it says it worked with over 90 physicians across more than 30 countries (including psychiatrists, paediatricians, and general practitioners) to build custom rubrics for evaluating complex, multi-turn conversations, engaged human-computer interaction (HCI) researchers and clinicians for feedback on how well it identifies concerning behaviours, and convened an advisory group of experts in mental health, youth development, and HCI. The company admits it hasn't always got this right, a case in point being an update earlier this year that made the model respond in a tone that was too agreeable, bordering on saying what sounded nice instead of what was actually helpful.

MAKING NOTES

In what is usually the most difficult quarter for iPhone sales, with the spectre of the traditional September refresh looming large, Apple has reported Q3 earnings higher than expectations: quarterly revenue of $94 billion, up 10 percent year over year, a June quarter revenue record, beating the $89.3 billion expected. Apple CEO Tim Cook again emphasised the importance of India for Apple's growth trajectory.

'The world of mousing and keyboarding around will feel as alien as it does to Gen Z [using] MS-DOS,' says David Weston, Microsoft's Corporate Vice President of Enterprise & Security, in what is apparently the first of Microsoft's 'Windows 2030 Vision' video series. What does this mean?
Since he doesn't elaborate further than this breadcrumb, I'll lay out the possibility for you: another attempt at a Windows overloaded with AI, perhaps even more within the OS itself and the apps you use on the device, with some element of agentic features that will utilise natural language understanding and context from a user's data as well as what's on the screen. Ready for the future?

MEET'S GETTING SMART

Google Meet is getting a rather interesting new feature, and while it may seem like sorcery, it is really just attention to detail. Google says that if you join a Meet call from your own device, such as a laptop, the video meeting platform can detect when you may be doing so in a large, conference room-esque physical space. In an attempt to reduce or entirely eliminate sound echo on such a call, Meet will suggest joining via 'Companion Mode'. Mind you, this presently works only if you are joining from a laptop in the Google Chrome web browser, and it rolls out for all Google Workspace customers with Google Meet hardware devices. Meet uses your laptop's microphone and an ultrasonic signal to detect when you are in such a room. 'This wayfinding feature helps ensure a seamless, echo-free start to your meeting. When you join using the highlighted Companion mode button, you will also be automatically checked into the correct room,' says Google, in an official post. This will require your Google Workspace admins (basically, your organisation's IT folks) to enable 'Proximity Detection' on the Google Meet hardware installed in a conference room, allowing that hardware to detect nearby devices (for which I am sure there will be the typical inertia, reasoned around 'compatibility' and 'security', to mask ineptitude). Based on my experiences with IT folks, easier said than done.

Cryptocurrency Live News & Updates : Ethereum's Position Strengthened by Regulatory Shifts

Time of India

2 hours ago


06 Aug 2025 | 11:55:12 PM IST

Figment's CEO Lorien Gabel stated that recent U.S. regulatory changes, including the market structure bill, clarify Ethereum's role and enhance its use cases, alongside a surge in stablecoin trading and Ethereum transactions. In the latest cryptocurrency news, Gabel emphasized Ethereum's strengthened position amid evolving U.S. regulations, particularly with the market structure bill that clarifies its use cases. Meanwhile, OpenAI has allowed share sales at a staggering $500 billion valuation, showcasing the growing intersection of technology and finance. On the trading front, Pepe's price is consolidating at a crucial support level, indicating potential for a bullish reversal if it can maintain momentum. Additionally, BNB has surpassed 770 USDT, reflecting a 2.67% increase in just 24 hours. However, the crypto landscape is not without its challenges, as users face risks of having their accounts frozen due to compliance issues, even when they have done nothing wrong. The importance of understanding the provenance of funds is highlighted, as interactions with 'tainted' wallets can lead to significant consequences. As the industry matures, compliance and risk management are becoming essential for all crypto participants, underscoring the need for proactive measures to safeguard assets.

WhatsApp Launches New Safety Tool, 6.8 Million Scam Centre Accounts Banned

NDTV

4 hours ago


New Delhi: WhatsApp has launched a new 'safety overview', which will alert a user when someone not in their contact list adds them to a new WhatsApp group that they may not recognise, as Meta's messaging platform intensifies its crackdown on scams and fraud. The safety overview will contain key information about the group in question and tips to stay safe, it said. "From there, you can exit the group without ever having to look at the chat. And if you think you might recognise the group after seeing the safety overview, you can choose to see the chat for more context," according to WhatsApp. Notifications from the group will be silenced until the user indicates they wish to stay. Meta's messaging platform further said it is exploring ways to caution users when they start a chat with someone not in their contacts, by showing more context about the person they are messaging. This, it hopes, will enable informed decisions. WhatsApp said it also takes down accounts created by criminal scam centres, which are often fuelled by forced labour, operated by organised crime, and based primarily in South East Asia. In the first six months of this year, as part of its efforts to protect people from scams, WhatsApp and Meta's security teams detected and banned over 6.8 million accounts linked to scam centres. "Based on our investigative insights into the latest enforcement efforts, we proactively detected and took down accounts before scam centres were able to operationalise them," it said. Citing an instance of such a crackdown, it said that WhatsApp, Meta, and OpenAI recently disrupted scammers' efforts linked to a criminal scam centre in Cambodia. These attempts ranged from offering payments for fake likes to enlisting others into a rent-a-scooter pyramid scheme or luring people to invest in cryptocurrency.
"As OpenAI reported, the scammers used ChatGPT to generate the initial text message containing a link to a WhatsApp chat, and then quickly directed the target to Telegram, where they were assigned a task of liking videos on TikTok. The scammers attempted to build trust in their scheme by sharing how much the target has already 'earned' in theory, before asking them to deposit money into a crypto account as the next task," it said.
