OpenAI rolls out Study Mode to help learners break down problems into steps
The ChatGPT-maker noted that the new Study Mode was built in collaboration with teachers, scientists, and pedagogy experts. Study Mode aims to help students approach learning more critically, even while they use an AI chatbot to do it. The mode is designed to encourage active participation, manage cognitive load, develop metacognition and self-reflection, foster curiosity, and provide useful feedback.
The company provided chat-based examples of how students across disciplines used Study Mode to review old concepts, explore new theories at an appropriate skill level, or get a quick introduction to an entirely new subject.
'When students engage with study mode, they're met with guiding questions that calibrate responses to their objective and skill level to help them build deeper understanding. Study mode is designed to be engaging and interactive, and to help students learn something—not just finish something,' said OpenAI in a blog post on Tuesday (July 29, 2025).
OpenAI noted that Study Mode was powered by custom system instructions, but added that the approach led to 'some inconsistent behavior and mistakes across conversations.'
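Since the article says Study Mode is powered by custom system instructions, here is a minimal sketch of how a system instruction can frame a tutoring-style chat request. The prompt text, helper name, and message structure are illustrative assumptions, not OpenAI's actual instructions.

```python
# Hypothetical sketch: a tutoring-style system instruction, assembled into a
# chat-completion-style payload. The prompt wording is an assumption for
# illustration; it is NOT OpenAI's actual Study Mode instruction.
STUDY_MODE_SYSTEM_PROMPT = (
    "You are a tutor. Do not give final answers directly. "
    "Ask guiding questions, calibrate to the student's stated skill level, "
    "and check understanding before moving on."
)

def build_study_request(student_message: str, skill_level: str = "beginner") -> dict:
    """Assemble a chat request whose behavior is shaped by the system prompt."""
    return {
        "model": "gpt-5",  # model name as mentioned elsewhere in this page
        "messages": [
            {"role": "system", "content": STUDY_MODE_SYSTEM_PROMPT},
            {"role": "user", "content": f"(level: {skill_level}) {student_message}"},
        ],
    }
```

Because the behavior lives entirely in the system prompt rather than in model weights, this approach is easy to ship but, as OpenAI concedes, can produce inconsistent behavior across conversations.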
OpenAI's Study Mode is available to logged-in Free, Plus, Pro, and Team users, with availability in ChatGPT Edu coming in the next few weeks.

Related Articles


News18 · 19 minutes ago
Do You Know Any AI Vegan? What Is It? Is It Even Possible? The Concept Explained
AI veganism means abstaining from using AI systems due to ethical, environmental, or wellness concerns, or avoiding harm to AI systems, especially if they might one day be sentient.

Even as the world goes gaga over artificial intelligence (AI) and how it could change the way the world and jobs function, some people are refraining from using it. They are the AI vegans. Why? What are their reasons? AI veganism, explained.

What is AI veganism?

The term refers to applying the principles of veganism to AI: either abstaining from using AI systems due to ethical, environmental, or personal wellness concerns, or avoiding harm to AI systems, especially if they might one day be sentient. Some view AI use as potentially exploitative, paralleling the harm done to animals through farming.

Is AI so bad that we need to abstain from it? Here's what studies show:

- A 2024 Pew study found that a fourth of K-12 teachers in the US thought AI was doing more harm than good.
- A Harvard study from May found that generative AI, while increasing workers' productivity, diminished their motivation and increased their boredom.
- A Microsoft Research study found that people who were more confident in using generative AI showed diminished critical thinking.
- Time reports growing concern over a phenomenon labelled AI psychosis, in which prolonged interaction with chatbots can trigger or worsen delusions in vulnerable individuals, especially those with preexisting mental health conditions.
- A study by the Center for Countering Digital Hate found that ChatGPT frequently bypassed its safeguards, offering harmful, personalised advice, such as suicide notes or instructions for substance misuse, to simulated 13-year-old users in over half of monitored interactions.
- Research at MIT revealed that students using LLMs like ChatGPT to write essays showed weaker brain connectivity, lower linguistic quality, and poorer retention than peers relying on their own thinking.
- A study from Anthropic and Truthful AI found that AI models can covertly transmit harmful behaviours to other AIs using hidden signals; these actions bypass human detection and challenge conventional safety methods.
- A global report chaired by Yoshua Bengio outlines key threats from general-purpose AI, including job losses, terrorism facilitation, uncontrolled systems, and deepfake misuse, and calls for urgent policy attention.
- AI contributes substantially to global electricity and water use, and could add up to 5 million metric tons of e-waste by 2030, perhaps accounting for 12% of global e-waste volume.
- Studies estimate AI may demand 4.1-6.6 billion cubic metres of water annually by 2027, comparable to the UK's total usage, while exposing deeper inequities in AI's extraction and pollution impacts.
- A BMJ Global Health review argues that AI could inflict harm through increased manipulation and control, weaponisation, and labour obsolescence, and, at the extreme, pose existential risks if self-improving AGI develops unchecked.

What is the basis of the concept?

- Ethical concerns: Many AI models are trained on creative work (art, writing, music) without consent from the original creators. Critics argue this amounts to intellectual theft or unpaid labour.
- Potential future AI sentience: Some fear that sentient AI might eventually emerge, and that using it today could normalise treating it as a tool rather than a being with rights.
- Environmental impact: AI systems, especially large language models, consume massive resources, contributing to carbon emissions and water scarcity.
- Cognitive and psychological health: Some believe overuse of AI weakens our ability to think, write, or create independently. The concern is about mental laziness, or 'outsourcing' thought.
- Digital overwhelm: AI makes everything faster and more accessible, sometimes too fast, leading to burnout, distraction, or dopamine addiction.
- Social and cultural disruption: AI threatens job markets, especially in creative fields, programming, and customer service.

Why staying an AI vegan may be tough

- AI is deeply embedded in many systems, from communication to healthcare, making total abstinence unrealistic for most.
- Current AI lacks consciousness, so overlaying moral concerns meant for animals onto machines may distract from real human and animal rights issues.
- Potential overreach: Prioritising hypothetical sentient-AI ethics could divert attention from pressing societal challenges.

With Agency Inputs. Location: New Delhi, India. First Published: August 10, 2025, 18:08 IST


Time of India · an hour ago
Sam Altman on Elon Musk: All day he does is tweeting, 'how much OpenAI sucks, our model is bad and ...
OpenAI CEO Sam Altman has bluntly responded to recent attacks from Tesla CEO Elon Musk, saying he doesn't spend much time thinking about the xAI founder. Speaking in an interview with CNBC's Squawk Box, Altman dismissed Musk's repeated criticisms of OpenAI and its newly launched GPT-5.

OpenAI recently unveiled its latest AI model, GPT-5. The company claims the model offers advancements in accuracy, speed, reasoning, and math capabilities. After the launch of GPT-5, Microsoft CEO Satya Nadella announced full integration of GPT-5 across the Microsoft ecosystem. Responding to Nadella's post, Musk said, 'OpenAI is going to eat Microsoft alive'.

Sam Altman responds to Elon Musk's comment about GPT-5

During the CNBC interview, Andrew Ross Sorkin asked Altman for his views on Musk's comment that 'OpenAI is going to eat Microsoft alive'. 'You knew I'd asked the question. I think you knew I'd asked the question. You probably saw Elon yesterday. He said, quote, OpenAI will eat Microsoft alive, and then Satya responding to that. What do you think when you read that?' asked Sorkin.

Replying to the question, Altman said, 'You know, I don't think about him that much.' Sorkin then said, 'I'm not sure what he means except to say that he thinks in the grand scheme of the partnership, that, ultimately, you'll have more power and more influence and more leverage over them than they'll have over you.'

To this, Altman said that Musk is someone who was just tweeting all day about how much OpenAI sucks, our model is bad, and the company is not going to be good.
'I thought he was most -- I mean, I -- someone was -- I thought he was just like tweeting all day about how much like OpenAI sucks and our model is bad and, you know, not being a good company and all of that. So, I don't know how you square those two things,' said Altman.

The remarks come amid escalating tensions between the two former collaborators, who co-founded OpenAI in 2015 before parting ways over disagreements about the company's direction. Musk has since launched his own AI venture, xAI, and recently claimed that OpenAI would 'eat Microsoft alive' following the tech giant's integration of GPT-5 across its platforms.

GPT-5 launched, free for all

OpenAI says that GPT-5 is the company's 'best model yet for coding and agentic tasks.' The model comes in three sizes, gpt-5, gpt-5-mini, and gpt-5-nano, letting developers balance performance, cost, and speed. In the API, GPT-5 is the reasoning model that powers ChatGPT's top performance. A separate non-reasoning version, called gpt-5-chat-latest, will also be available.

Sam Altman said GPT-5 is a major leap from GPT-4 and a 'pretty significant step' toward Artificial General Intelligence (AGI). 'GPT-5 is really the first time that I think one of our mainline models has felt like you can ask a legitimate expert, like a PhD-level expert, anything... We wanted to make it available in our free tier for the first time,' he said.
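The three GPT-5 sizes the article names (gpt-5, gpt-5-mini, gpt-5-nano) exist to trade capability against cost and latency. A minimal sketch of how a developer might encode that choice; the priority labels and the ranking behind them are my assumptions, not OpenAI's guidance.

```python
# Hypothetical helper for picking among the three GPT-5 sizes named in the
# article. The quality/cost/speed ranking here is an assumption for
# illustration, not an official OpenAI recommendation.
GPT5_VARIANTS = {
    "quality": "gpt-5",        # full reasoning model: highest capability
    "balanced": "gpt-5-mini",  # middle ground on capability and cost
    "speed": "gpt-5-nano",     # fastest and cheapest of the three
}

def pick_gpt5_model(priority: str) -> str:
    """Return a model name for the given priority; default to the full model."""
    return GPT5_VARIANTS.get(priority, "gpt-5")
```

For example, `pick_gpt5_model("speed")` selects `gpt-5-nano`, and an unrecognised priority falls back to the full `gpt-5` model.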


Mint · 2 hours ago
GPT-5 brings four new personalities to ChatGPT: what they do and how to use them — check our step-by-step guide
OpenAI unveiled its latest large language model powering ChatGPT during a live-streamed event on Thursday. The new GPT-5 model brings enhancements across areas like coding, accuracy, reasoning, writing, health-related questions, and multimodal abilities. However, one feature in the new model that has gone relatively under the radar is the introduction of four new personalities in ChatGPT, which let users customize the chatbot to their requirements. So what exactly are personalities, and how can you turn them on in ChatGPT? Let's find out below.

What is a personality in ChatGPT?

Personality in ChatGPT is the style and tone the chatbot uses while responding to users' questions: a combination of traits, voice, and behaviours that ultimately determines whether the chatbot's answers feel friendly, casual, concise, or professional. Changing the personality lets users choose the style most relevant to them. The new personalities also work alongside the memories saved by users in ChatGPT, allowing them to customize the personalities according to their preferences. OpenAI says a user's saved preferences can adjust or override a personality's behaviour.

Changing the personality will not change the chatbot's inherent capabilities or the safety rules it follows. Nor does it affect the type of content users can ask it to produce: if a user has the 'Listener' personality turned on and asks for Python code for a certain problem, ChatGPT will still provide it in a clear and functional manner rather than in its usual reflective and conversational style.

ChatGPT personalities are only available to OpenAI's paying users, including Plus, Pro, and Team subscribers. The new personalities take effect only in a new conversation; any ongoing conversation will continue in the chatbot's original personality.

Cynic

OpenAI officially describes the Cynic personality as 'Sarcastic and dry, delivers blunt help with wit. Often teases, but provides direct, practical answers when it matters.' This personality gives candid responses that may include sarcastic observations, but it will not be hostile or irrelevant. It is best for users who want candid yet entertaining replies from ChatGPT that are still actionable, and it could also suit creative brainstorming sessions.

Robot

Describing this personality, OpenAI writes, 'Precise, efficient, and emotionless, delivering direct answers without extra words.' With the Robot personality turned on, users can expect direct answers first, followed by concise reasoning. ChatGPT will clearly map problems into inputs, levers, and outputs when applicable, and may occasionally cite references when making factual claims. It is best for times when users want direct, fast, and unambiguous answers, and it is also useful for technical tasks, code walkthroughs, and troubleshooting.

Listener

OpenAI's official description for this personality reads, 'Warm and laid-back, reflecting your thoughts back with calm clarity and light wit.' This personality aims to help users make their own decisions by giving responses that discuss trade-offs and likely outcomes. It acts as a conversational sounding board and lets users reflect on a problem.

Nerd

'Playful and curious, explaining concepts clearly while celebrating knowledge and discovery,' reads OpenAI's official description for this personality. With the Nerd personality turned on, users can expect deep yet accessible explanations, along with possible next steps they could take. It may also encourage users to explore follow-up paths or experiments.

How to turn on personalities in ChatGPT

On the website:
- Make sure you are subscribed to ChatGPT Plus, Pro, or Team
- Select the profile icon at the bottom left corner of the ChatGPT website
- Click on 'Customize ChatGPT' to open the settings page
- Scroll down to find the 'What personality should ChatGPT have?' option
- Enter your desired personality there

If you are using the ChatGPT iOS or Android app:
- Go to settings by clicking on the profile icon
- Tap on 'Personalization', then select 'Custom Instructions'
- Write your chosen personality in the 'What personality should ChatGPT have?' option
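The four personalities and their one-line descriptions quoted above can be collected into a simple lookup table, which is handy for anything that needs to present or validate them. This is an illustrative sketch built only from the article's quotes; the 'Listener' name is taken from the article's earlier coding example, while the other three are named explicitly in the text.

```python
# The four GPT-5 personalities with OpenAI's one-line descriptions as quoted
# in the article. Collecting them in a dict makes lookups and validation easy.
PERSONALITIES = {
    "Cynic": "Sarcastic and dry, delivers blunt help with wit.",
    "Robot": "Precise, efficient, and emotionless, delivering direct answers without extra words.",
    "Listener": "Warm and laid-back, reflecting your thoughts back with calm clarity and light wit.",
    "Nerd": "Playful and curious, explaining concepts clearly while celebrating knowledge and discovery.",
}

def describe_personality(name: str) -> str:
    """Return the quoted description, or raise for an unknown personality."""
    if name not in PERSONALITIES:
        raise ValueError(f"Unknown personality: {name!r}")
    return PERSONALITIES[name]
```

A caller can then validate user input before applying it, e.g. `describe_personality("Robot")` returns the Robot description, while an unrecognised name raises a `ValueError`.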