OpenAI launches Study Mode in ChatGPT: What is it, how it works, and more

Time of India · 4 days ago

OpenAI rolled out Study Mode in ChatGPT, a new feature designed to help students develop critical thinking skills rather than simply spoon-feeding them answers. Available to logged-in users across the Free, Plus, Pro, and Team plans, the feature transforms the AI chatbot into an interactive tutor that guides students through problems step by step instead of delivering instant solutions.
Study Mode represents OpenAI's response to widespread concerns about AI's impact on education. When enabled, ChatGPT asks probing questions to assess skill levels, breaks down complex concepts into digestible sections, and provides personalised feedback based on previous conversations. The system refuses to offer direct answers unless students actively engage with the material, encouraging deeper understanding over quick completion.
No more instant answers: How ChatGPT's new mode forces students to think
The feature employs Socratic questioning techniques developed in collaboration with teachers and pedagogy experts. Students can upload course materials, images, or PDFs for contextualized help, while the system adapts responses based on individual learning goals and past study sessions when memory is enabled.
"It's like a live, 24/7, all-knowing office hours," said Noah Campbell, a college student who tested the feature early. The tool works across all ChatGPT platforms (iOS, Android, web, and desktop) and integrates with voice dictation and advanced voice mode for hands-free learning.
Unlike traditional ChatGPT interactions where users get immediate responses, Study Mode deliberately slows down the process. The AI might ask "What do you think happens next?" or "Can you explain why this step is important?" before revealing solutions.
Students can toggle Study Mode on and off through ChatGPT's tools menu, and it works with uploaded materials like class notes or problem photos. The system remembers past interactions when memory is enabled, creating increasingly personalized learning experiences.
Study Mode is available in 11 Indian languages with comprehensive multimodal support combining voice, image, and text capabilities.
Related Articles

Big Tech continues to hire in India even as local majors downsize

Time of India · 13 minutes ago

Indian IT services majors may be trimming their workforce, but for Big Tech, it is still hiring season in India. The giants under the FAAMNG umbrella – Facebook parent Meta, Amazon, Apple, Microsoft, Netflix, and Google – have grown their India headcount across their own and their affiliate entities by 16% over the past 12 months, data from staffing firm Xpheno showed. The pace of growth is slightly higher than the 15% recorded in the 12 months to August 2024.

FAAMNG saw over 28,000 net employee additions in the past one year. The current estimated collective headcount across their entities in India is over 208,000, according to Xpheno.

The hiring rate 'is relatively healthy, especially with the buzz of AI potentially impacting pace and volume of hiring,' said Kamal Karanth, cofounder of Xpheno. The companies continue to post healthy hiring demand in the country, with active openings at 4,500 currently, despite executing large-scale layoffs in the US, with an estimated 100,000 laid off.

The Covid pandemic-induced hiring surge saw net additions for the FAAMNG cohort grow 35% year-on-year in 2022, followed by a slowdown to 6% growth in 2023. By comparison, the top six Indian IT services firms saw headcount additions soar 22% on year in 2022, followed by declines of 0.2% and 3.1% in 2023 and 2024, respectively. As of June 2025, combined headcount in these firms grew 1.3% year-on-year to about 1.6 million. Meanwhile, the country's IT bellwether Tata Consultancy Services (TCS) on Sunday caused shockwaves as it announced layoffs of 12,000 employees at mid-to-senior levels, citing skills mismatch in project deployments.

Selective hiring

While global giants have also announced major layoffs in recent times, India has seen a lesser impact compared to many other geographies, experts noted. 'While we do see some of this affecting India, the volumes so far are not as high as global numbers,' said Neeti Sharma, CEO of IT staffing firm TeamLease. Hiring in the industry is increasingly selective, with a focus on specialised skills, especially in artificial intelligence (AI) and cloud.

'There is a high demand for skills such as AI, cloud and cybersecurity,' while hiring is down for support and routine roles, especially in conventional technologies, Sharma said. 'A few older roles will gradually become redundant. However, newer roles are being defined,' she said. 'This transition is tough now, but it's needed to stay relevant.' Employees face more pressure to perform and upskill, especially in AI and cloud, as companies focus on keeping top talent and building leaner teams, Sharma said.

As per Quess IT Staffing, hiring by large tech firms in India dipped by 3-6% in the fourth quarter of FY2025 ended in March. However, it was up by about 8-10% in the first quarter of FY2026, it said. 'While global tech firms are making headlines for layoffs abroad, many are increasing hiring in India, especially through their GCCs,' said Kapil Joshi, CEO of Quess IT Staffing. The global capability centres (GCCs) are now doing more high-end work, such as developing AI tools, cloud platforms, and new digital products, he said.

At the same time, companies are seeing the need to balance costs while increasing focus on innovation, experts said. 'They need to train people faster, close talent gaps, and compete for the best candidates in a tight market,' Joshi said.


As young Indians turn to AI ‘therapists’, how confidential is their data?

Scroll.in · 43 minutes ago

This is the second of a two-part series. Read the first here.

Imagine a stranger getting hold of a mental health therapist's private notes – and then selling that information to deliver tailored advertisements to their clients. That is practically what many mental healthcare apps might be doing.

Young Indians are increasingly turning to apps and artificial intelligence-driven tools to address their mental health challenges – but have limited awareness about how these digital tools process user data. In January, the Centre for Internet and Society published a study based on 45 mental health apps – 28 from India and 17 from abroad – and found that 80% gathered user health data that they used for advertising and shared with third-party service providers. An overwhelming 87% of these apps shared the data with law enforcement and regulatory bodies.

The first article in this series reported that some of these apps are especially popular with young Indian users, who rely on them for quick and easy access to therapy and mental healthcare support. Users also told Scroll that they turned to AI-driven technology, such as ChatGPT, to discuss their feelings and get advice, however limited this may be compared to interacting with a human therapist. But they were not especially worried about data misuse. Keshav*, 21, reflected a common sentiment among those Scroll interviewed: 'Who cares? My personal data is already out there.'

The functioning of Large Language Models, such as ChatGPT, is already under scrutiny. LLMs are 'trained' on vast amounts of data, either from the internet or provided by their trainers, to simulate human learning, problem solving and decision making. Sam Altman, CEO of OpenAI, which built ChatGPT, said on a podcast in July that though users talk about personal matters with the chatbot, there are no legal safeguards protecting that information.
'People use it – young people, especially, use it – as a therapist, a life coach; having these relationship problems and [asking] what should I do?' he asked. 'And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT.'

He added: 'So if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up.'

Therapists and experts said the ease of access of AI-driven mental health tools should not sideline privacy concerns. Clinical psychologist Rhea Thimaiah, who works at Kaha Mind, a collective that provides mental health services, emphasised that confidentiality is an essential part of the process of therapy. 'The therapeutic relationship is built on trust and any compromise in data security can very possibly impact a client's sense of safety and willingness to engage,' she said. 'Clients have a right to know how their information is being stored, who has access, and what protections are in place.'

This is more than mere data – it is someone's memories, trauma and identity, Thimaiah said. 'If we're going to bring AI into this space, then privacy shouldn't be optional, it should be fundamental.'

Srishti Srivastava, founder of AI-driven mental health app Infiheal, said that her firm collects user data to train its AI bot, but users can access the app even without signing up and can also ask for their data to be deleted.

Dhruv Garg, a tech policy lawyer at the Indian Governance and Policy Project, said the risk lies not just in apps collecting data but in the potential downstream uses of that information.
'Even if it's not happening now, an AI platform in the future could start using your data to serve targeted ads or generate insights – commercial, political, or otherwise – based on your past queries,' said Garg. 'Current privacy protections, though adequate for now, may not be equipped to deal with each new future scenario.'

India's data protection law

For now, personal data processed by chatbots is governed by the Information Technology Act framework and the Sensitive Personal Data Rules, 2011. Section 5 of the sensitive data rules says that companies must obtain consent in writing before collecting or using sensitive information. According to the rules, information relating to health and mental health conditions is considered sensitive data. There are also specialised sectoral data protection rules that apply to regulated entities like hospitals.

The Digital Personal Data Protection Act, passed by Parliament in 2023, is expected to be notified soon. But it exempts publicly available personal data from its ambit if this information has been voluntarily disclosed by an individual. Given the black market of data intermediaries that publish large volumes of personal information, it is difficult to tell what personal data in the public domain has been made available 'voluntarily'.

The new data protection act does not have different regulatory standards for specific categories of personal data – financial, professional, or health-related, Garg said. This means that health data collected by AI tools in India will not be treated with special sensitivity under this framework. 'For instance, if you search for symptoms on Google or visit WebMD, Google isn't held to a higher standard of liability just because the content relates to health,' said Garg. WebMD provides health and medical information.

It might be different for AI tools explicitly designed for mental healthcare – unlike general-purpose models like ChatGPT.
These, according to Garg, 'could be made subject to more specific sectoral regulations in the future'.

However, the very logic on which AI chatbots function – responding based on user data and inputs – could itself be a privacy risk. Nidhi Singh, a senior research analyst and programme manager at Carnegie India, an American think tank, said she has concerns about how tools like ChatGPT customise responses and remember user history – even though users may appreciate those functions.

Singh said India's new data protection law is quite clear that any data made publicly available by putting it on the internet is no longer considered personal data. 'It is unclear how this will apply to your conversations with ChatGPT,' she said. Without specific legal protections, there is no telling how an AI-driven tool will use the data it has gathered. According to Singh, without a specific rule designating conversations with generative AI as an exception, it is likely that a user's interactions with these AI systems won't be treated as personal data and consequently will not fall under the purview of the act.

Who takes legal responsibility?

Technology firms have tried hard to evade legal liability for harm. In Florida, a mother has alleged in a lawsuit that her 14-year-old son died by suicide after becoming deeply entangled in an 'emotionally and sexually abusive relationship' with a chatbot.

In case of misdiagnosis or harmful advice from an AI tool, legal responsibility is likely to be analysed in court, said Garg. 'The developers may argue that the model is general-purpose, trained on large datasets, and not supervised by a human in real-time,' said Garg. 'Some parallels may be drawn with search engines – if someone acts on bad advice from search results, the responsibility doesn't fall on the search engine, but on the user.'
Highlighting the urgent need for a conversation on sector-specific liability frameworks, Garg said that for now, the legal liability of AI developers will have to be assessed on a case-by-case basis. 'Courts may examine whether proper disclaimers and user agreements were in place,' he said.

In another case, Air Canada was ordered to pay compensation to a customer who was misled by its chatbot regarding bereavement fares. The airline had argued that the chatbot was a 'separate legal entity' and therefore responsible for its own actions.

Singh of Carnegie India said that transparency is important and that user consent should be meaningful. 'You don't need to explain the model's source code, but you do need to explain its limitations and what it aims to do,' she said. 'That way, people can genuinely understand it, even if they don't grasp every technical step.'

AI, meanwhile, is here for the long haul. Until India can expand its capacity to offer mental health services to everyone, Singh said, AI will inevitably fill that void. 'The use of AI will only increase as Indic-language LLMs are being built, further expanding its potential to address the mental health therapy gap,' she said.
