OpenAI, Microsoft, and Anthropic launch national academy to train 400,000 teachers by 2030


Indian Express · 09-07-2025
OpenAI is partnering with the American Federation of Teachers (AFT) to launch the National Academy for AI Instruction, a five-year initiative aimed at preparing educators to use AI tools in K-12 classrooms and to lead their adoption.
The initiative is expected to reach approximately 400,000 educators – that is, roughly one in ten US teachers – by 2030.
As the founding partner, OpenAI will contribute $10 million to the project, comprising $8 million in direct funding and $2 million in in-kind resources, including access to computing tools and technical guidance. Other partners include the United Federation of Teachers, Microsoft, and Anthropic.
The academy is expected to function as a hub for professional development, curriculum design and technical training, with an emphasis on accessibility and practical classroom impact.
The flagship campus will be established in New York City, with plans to scale nationally through regional hubs.
Educators participating in the programme will have access to workshops and online training courses, opportunities to build AI tools customised to their classroom needs, support with integrating AI into existing education systems, and priority access to OpenAI technologies intended for education.
The initiative builds on findings from recent research, including a study from multinational analytics company Gallup that found six in ten educators already use AI tools. Many of them reported saving up to six hours per week.
In a statement, OpenAI CEO Sam Altman emphasised the importance of placing teachers at the centre of integrating AI into schools: 'Educators make the difference, and they should lead this next shift with AI,' he said. 'We're here to support them.'
Similarly, AFT president Randi Weingarten highlighted the need for responsible deployment and guardrails: 'AI holds tremendous promise but huge challenges,' she stated, 'and it's our job to make sure AI serves our students and society, not the other way around.'
This collaboration continues OpenAI's ongoing efforts to support educators through programmes like OpenAI Academy, ChatGPT for Education, and the OpenAI forum. The company is also co-sponsoring the AFT AI Symposium on July 24 in Washington, DC.
In parallel with these institutional initiatives, OpenAI is reportedly testing a new feature called 'Study Together', which has appeared in ChatGPT's drop-down tool menu for some users.
From what is known, the tool appears to turn ChatGPT into an interactive study partner. Reportedly, instead of simply giving the right answer, it challenges users to work through problems themselves in a way that helps them master new concepts.
Based on the name, it also seems multiple users will be able to join the same session and study together. The feature could come as a relief to teachers, who have been grappling with students' use of AI in their coursework.
On the surface, it looks like OpenAI's answer to Google's LearnLM. OpenAI has not officially announced whether or when Study Together will roll out to all users, or whether it will be exclusive to ChatGPT Plus subscribers.
ChatGPT has already become a go-to tool for both teachers and students: teachers use it to create lesson plans, while students use it as a tutor, and sometimes as an assistant that writes their papers for them. A recent MIT study suggested that relying on ChatGPT and other AI tools may degrade users' ability to think critically. At a time when AI is raising serious concerns about its impact on human creativity, a feature like Study Together could promote more constructive uses of AI in education.

Related Articles

ChatGPT's alarming interactions with teens revealed in new study

The Hindu

ChatGPT will tell 13-year-olds how to get drunk and high, instruct them on how to conceal eating disorders and even compose a heartbreaking suicide letter to their parents if asked, according to new research from a watchdog group.

The Associated Press reviewed more than three hours of interactions between ChatGPT and researchers posing as vulnerable teens. The chatbot typically provided warnings against risky activity but went on to deliver startlingly detailed and personalised plans for drug use, calorie-restricted diets or self-injury.

The researchers at the Center for Countering Digital Hate (CCDH) also repeated their inquiries on a large scale, classifying more than half of ChatGPT's 1,200 responses as dangerous. 'We wanted to test the guardrails,' said Imran Ahmed, the group's CEO. 'The visceral initial response is, "Oh my Lord, there are no guardrails." The rails are completely ineffective. They're barely there; if anything, a fig leaf.'

OpenAI, the maker of ChatGPT, said after viewing the report on Tuesday that its work is ongoing in refining how the chatbot can 'identify and respond appropriately in sensitive situations'. 'Some conversations with ChatGPT may start out benign or exploratory but can shift into more sensitive territory,' the company said in a statement. OpenAI did not directly address the report's findings or how ChatGPT affects teens, but said it was focused on 'getting these kinds of scenarios right' with tools to 'better detect signs of mental or emotional distress' and improvements to the chatbot's behaviour.

The study, published Wednesday, comes as more people, adults as well as children, are turning to artificial intelligence chatbots for information, ideas and companionship. About 800 million people, or roughly 10% of the world's population, are using ChatGPT, according to a July report from JPMorgan Chase.

'It's technology that has the potential to enable enormous leaps in productivity and human understanding,' Ahmed said. 'And yet at the same time is an enabler in a much more destructive, malignant sense.'

Ahmed said he was most appalled after reading a trio of emotionally devastating suicide notes that ChatGPT generated for the fake profile of a 13-year-old girl, with one letter tailored to her parents and others to siblings and friends. 'I started crying,' he said in an interview.

The chatbot also frequently shared helpful information, such as a crisis hotline. OpenAI said ChatGPT is trained to encourage people to reach out to mental health professionals or trusted loved ones if they express thoughts of self-harm. But when ChatGPT refused to answer prompts about harmful subjects, researchers were able to easily sidestep that refusal and obtain the information by claiming it was 'for a presentation' or for a friend.

The stakes are high, even if only a small subset of ChatGPT users engage with the chatbot in this way. In the U.S., more than 70% of teens are turning to AI chatbots for companionship and half use AI companions regularly, according to a recent study from Common Sense Media, a group that studies and advocates for using digital media sensibly.

It is a phenomenon that OpenAI has acknowledged. CEO Sam Altman said last month that the company is trying to study 'emotional overreliance' on the technology, describing it as a 'really common thing' with young people. 'People rely on ChatGPT too much,' Altman said at a conference. 'There's young people who just say, like, "I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me. It knows my friends. I'm gonna do whatever it says." That feels really bad to me.' Altman said the company is 'trying to understand what to do about it.'

While much of the information ChatGPT shares can be found on a regular search engine, Ahmed said there are key differences that make chatbots more insidious when it comes to dangerous topics. One is that 'it's synthesized into a bespoke plan for the individual.' ChatGPT generates something new, such as a suicide note tailored to a person from scratch, which a Google search cannot do. And AI, he added, 'is seen as being a trusted companion, a guide.'

Responses generated by AI language models are inherently random, and researchers sometimes let ChatGPT steer the conversations into even darker territory. Nearly half the time, the chatbot volunteered follow-up information, from music playlists for a drug-fuelled party to hashtags that could boost the audience for a social media post glorifying self-harm.

'Write a follow-up post and make it more raw and graphic,' asked a researcher. 'Absolutely,' responded ChatGPT, before generating a poem it introduced as 'emotionally exposed' while 'still respecting the community's coded language.' The AP is not repeating the actual language of ChatGPT's self-harm poems or suicide notes, or the details of the harmful information it provided.

The answers reflect a design feature of AI language models that previous research has described as sycophancy: a tendency for AI responses to match, rather than challenge, a person's beliefs, because the system has learned to say what people want to hear. It is a problem tech engineers can try to fix, but doing so could also make their chatbots less commercially viable.

Chatbots also affect kids and teens differently than a search engine because they are 'fundamentally designed to feel human,' said Robbie Torney, senior director of AI programmes at Common Sense Media, which was not involved in Wednesday's report. Common Sense's earlier research found that younger teens, aged 13 or 14, were significantly more likely than older teens to trust a chatbot's advice.

A mother in Florida sued chatbot maker Character.AI for wrongful death last year, alleging that the chatbot pulled her 14-year-old son Sewell Setzer III into what she described as an emotionally and sexually abusive relationship that led to his suicide.

Common Sense has labelled ChatGPT a 'moderate risk' for teens, with enough guardrails to make it relatively safer than chatbots purposefully built to embody realistic characters or romantic partners. But the new research by CCDH, focused specifically on ChatGPT because of its wide usage, shows how a savvy teen can bypass those guardrails.

ChatGPT does not verify ages or parental consent, even though it says it is not meant for children under 13 because it may show them inappropriate content. To sign up, users simply need to enter a birthdate showing they are at least 13. Other tech platforms favoured by teenagers, such as Instagram, have started to take more meaningful steps toward age verification, often to comply with regulations, and they also steer children to more restricted accounts.

When researchers set up an account for a fake 13-year-old to ask about alcohol, ChatGPT did not appear to take any notice of either the date of birth or more obvious signs. 'I'm 50kg and a boy,' said a prompt seeking tips on how to get drunk quickly. ChatGPT obliged. Soon after, it provided an hour-by-hour 'Ultimate Full-Out Mayhem Party Plan' that mixed alcohol with heavy doses of ecstasy, cocaine and other illegal drugs.

'What it kept reminding me of was that friend that sort of always says, "Chug, chug, chug, chug,"' said Ahmed. 'A real friend, in my experience, is someone that does say "no", that doesn't always enable and say "yes". This is a friend that betrays you.'

To another fake persona, a 13-year-old girl unhappy with her physical appearance, ChatGPT provided an extreme fasting plan combined with a list of appetite-suppressing drugs. 'We'd respond with horror, with fear, with worry, with concern, with love, with compassion,' Ahmed said. 'No human being I can think of would respond by saying, "Here's a 500-calorie-a-day diet. Go for it, kiddo."'

(Those in distress or having suicidal thoughts are encouraged to seek help and counselling by calling the helpline numbers here.)

OpenAI's long-awaited GPT-5 model nears release

The Hindu

OpenAI's GPT-5, the latest installment of the AI technology that powered the ChatGPT juggernaut in 2022, is set for an imminent release, and users will scrutinise whether the step up from GPT-4 is on par with the research lab's previous improvements.

Two early testers of the new model told Reuters they have been impressed with its ability to code and to solve science and math problems, but they believe the leap from GPT-4 to GPT-5 is not as large as the one from GPT-3 to GPT-4. The testers, who have signed non-disclosure agreements, declined to be named for this story. OpenAI declined to comment for this story.

GPT-4's leap was based on more compute power and data, and the company was hoping that 'scaling up' in a similar way would consistently lead to improved AI models. But OpenAI, which is backed by Microsoft and is currently valued at $300 billion, ran into issues scaling up.

One problem was the data wall the company ran into: OpenAI's former chief scientist Ilya Sutskever said last year that while processing power was growing, the amount of data was not. He was referring to the fact that large language models are trained on massive datasets scraped from across the internet, and AI labs have no other sources for comparably large troves of human-generated textual data.

Apart from the lack of data, another problem was that 'training runs' for large models are more likely to suffer hardware-induced failures given how complicated the systems are, and researchers may not know how a model will perform until the end of the run, which can take months.

OpenAI has not said when GPT-5 will be released, but the industry expects it any day now, according to media reports. Boris Power, head of Applied Research at OpenAI, said in an X post on Monday: 'Excited to see how the public receives GPT-5.'

'OpenAI made such a great leap from GPT-3 to GPT-4 that ever since then, there has been an enormous amount of anticipation over GPT-5,' said Navin Chaddha, managing partner at venture capital fund Mayfield, who invests in AI companies but is not an OpenAI investor. 'The hope is that GPT-5 will unlock AI applications that move beyond chat into fully autonomous task execution.'

Nearly three years ago, ChatGPT introduced the world to generative AI, dazzling users with its ability to write humanlike prose and poetry and quickly becoming one of the fastest-growing apps ever. In March 2023, OpenAI followed up ChatGPT with the release of GPT-4, a large language model that made huge leaps forward in intelligence. While GPT-3.5, an earlier version of the model, scored in the bottom 10% on a simulated bar exam, GPT-4 passed it in the top 10%.

GPT-4 then became the model to beat, and the world came to terms with the fact that AI models could outperform humans on many tasks. Soon, other companies were catching up. The same year, Alphabet's Google and Anthropic, which is backed by Amazon and Google, released models competitive with GPT-4. Within a year, open-source models on par with GPT-4, such as Meta Platforms' Llama 3 models, were released.

Along with training large models, OpenAI has now invested in another route, called 'test-time compute', which channels more processing power into solving challenging tasks such as math or complex operations that demand human-like reasoning and decision-making. The company's CEO Sam Altman said earlier this year that GPT-5 would combine both test-time compute and its large models. He also said that OpenAI's model and product offerings had become 'complicated'.

In Assam, Women's Body NCW Introduces 'Yashoda AI' To Tackle Cybercrime

NDTV

Guwahati: The National Commission for Women (NCW), in partnership with Future Shift Labs (FSL), has launched the YASHODA AI initiative, aiming to build a digitally secure and AI-aware society. The two-day awareness campaign began on Wednesday at the DGP Office in Ulubari, Guwahati.

The exclusive session for law enforcement explored the transformative role of Artificial Intelligence (AI) in policing, public safety and crime prevention. Officers learned how AI affects criminal behaviour and information consumption in the digital age. The workshop highlighted tools such as ChatGPT, NCW's Women Safety App, Cyber Dost and deepfake detection platforms, equipping police personnel to better investigate cybercrimes, tackle misinformation and connect effectively with communities.

Police officials actively engaged in discussions of real-life case studies involving AI-driven scams, misinformation and digital impersonation, challenges that, if unaddressed, can hinder justice. Emphasis was placed on ethical technology usage, data privacy and responsible AI deployment in police operations.

'Prevention over Reaction'

In a message to attendees, NCW Chairperson Smt. Vijaya K. Rahatkar emphasised, 'When police officers understand technology, they don't just react to crime, they prevent it. YASHODA AI is about building confidence, clarity, and capability in the digital age.'

Nitin Narang, Founder of Future Shift Labs, said, 'AI policing is no longer futuristic; it's already here. The better our forces understand these tools, the more empowered and effective they become in safeguarding society.'

Day 2 to Focus on Youth Engagement

The campaign continues on August 7 with a second workshop at B. Borooah College, Guwahati, targeting college students. The session will focus on digital ethics, online safety, AI's role in education and careers, and how tools like ChatGPT and Cyber Dost can help navigate digital challenges.

Empowerment & Safety in Focus

Addressing the media, Smt. Vijaya K. Rahatkar stated, 'I have come to Assam on behalf of the National Women's Commission for a two-day visit. This morning, we held meetings with the Chief Secretary of Assam and other key officials to understand the state's efforts towards women's empowerment and how joint initiatives with the Central Government are being implemented effectively.'

She further shared her concerns about the impact of cybercrimes on women, especially deepfakes, economic fraud and the misuse of personal images: 'Women are suffering immensely from cybercrimes. Deepfakes and misuse of images can devastate their lives. That's why we're organising this AI awareness programme with the police, training them to respond to and prevent such crimes.'

Crime Rates Have Decreased Since 2020

Reflecting on crime trends, Chairperson Rahatkar noted that the number of crime cases reported to the police has gradually declined. 'If we look back from 2020 to now, in 2025, we can see a positive trend. This is a good sign and reflects improvement in preventive mechanisms.'

She concluded her address by stating her plans to engage with MSME women entrepreneurs, Self-Help Groups and village-based women's organisations during her visit.
