Latest news with #RobbieTorney


India Today
30-07-2025
- India Today
OpenAI's new study mode in ChatGPT is designed to help you learn, not cheat and get quick answers
ChatGPT is becoming a go-to tool for many students to complete their projects. But rather than learning, many students are simply using the AI to gather information and copy-paste it into their assignments. Now, in an attempt to help students study and learn as they work, OpenAI has introduced a new Study Mode in ChatGPT. The feature is now live and available for logged-in users on the Free, Plus, Pro, and Team plans, and it is expected to roll out to ChatGPT Edu users in the coming weeks. According to OpenAI, Study Mode is designed to help students "build deeper understanding, not just finish something." Instead of offering direct solutions, the tool guides users step by step through interactive prompts, questions, and knowledge checks. 'Instead of doing the work for them, Study Mode encourages students to think critically about their learning,' said Robbie Torney, Senior Director of AI Programmes at Common Sense Media, in a statement shared by OpenAI.
How Study Mode works in ChatGPT
OpenAI says that its new Study Mode has been developed in collaboration with teachers, scientists, and pedagogy experts to help the AI model reflect teaching practices grounded in learning science. The feature uses custom system instructions that encourage active participation by users, manage their cognitive load, promote self-reflection, foster curiosity, and deliver actionable feedback. Once activated, Study Mode responds to the user's queries according to their objectives and skill level. The tool also divides lessons into easy-to-follow sections, using Socratic-style questioning, hints, and self-reflection prompts to encourage user engagement. OpenAI highlights that the new tool also uses scaffolded responses, a teaching method that organises information in a structured way, helping learners see how different concepts connect without becoming overwhelmed. To make it more personalised, Study Mode also adjusts lessons based on the user's prior interactions and understanding of the subject matter. It also includes built-in knowledge checks, including quizzes and open-ended questions, to offer personalised feedback and help students measure their progress over time.
ChatGPT Study Mode key features
Some of the key highlights of Study Mode are:
- Interactive prompts: the tool uses questions and hints to promote active learning rather than delivering answers directly.
- Scaffolded learning: it breaks down complex topics into easy-to-digest sections.
- Personalised support: it adjusts responses to each student's needs and learning style.
- Knowledge checks: it incorporates quizzes and open-ended questions to track progress.
- Flexibility: students can switch Study Mode on and off at any time in a conversation.
OpenAI believes that these new features will not only make learning more engaging for students but also reduce the temptation to rely on ChatGPT purely for quick answers.
ChatGPT Study Mode limitations
OpenAI acknowledges that Study Mode is still in its early stages. Since it currently relies on custom system instructions, students may experience inconsistent behaviour and occasional mistakes. The company plans to integrate these behaviours directly into its core AI models once it has gathered enough feedback.
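The mechanism the article describes, steering a general-purpose model with custom system instructions rather than with new model weights, can be illustrated with a minimal sketch using OpenAI's Python client. This is a hypothetical example: the instruction wording and the model name are assumptions for illustration, not OpenAI's actual Study Mode prompt.

```python
# Minimal sketch: steering a chat model with Socratic "study mode"
# system instructions. The prompt text and model name are assumed;
# this is NOT OpenAI's actual Study Mode implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STUDY_MODE_INSTRUCTIONS = (
    "You are a patient tutor. Never state the final answer outright. "
    "Break each problem into small steps, ask one guiding question at "
    "a time, offer hints on request, and finish each step with a short "
    "knowledge check before moving on."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": STUDY_MODE_INSTRUCTIONS},
        {"role": "user", "content": "Solve 3x + 5 = 20 for x."},
    ],
)
print(response.choices[0].message.content)
```

Because the behaviour in such a setup lives entirely in the instruction text rather than in the model itself, it can drift or be overridden over a long conversation, which is consistent with the inconsistency OpenAI acknowledges above.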


Daily Mail
29-07-2025
- Daily Mail
Parents rejoice! ChatGPT has a new 'Study Mode' that will force students to work through questions step-by-step instead of just getting an answer
Homework used to be something that could take hours. But the recent introduction of ChatGPT means tricky questions or complicated essays can be completed in the click of a button. It can undoubtedly be helpful in certain situations – like if a student is really stuck on a topic. But frustrated family members and teachers have complained that youngsters are using the AI chatbot as a 'second brain' and to answer simple questions like 'how many hours are there in one day?' However, a new update could help reach a happy middle ground. OpenAI, the firm behind ChatGPT, has just announced 'study mode' – an option which will help users work through questions step by step instead of just providing an answer. Students who tested the mode have described it as a 'tutor who doesn't get tired of my questions' and said it is helpful for breaking down dense material. The tool contains interactive prompts to guide understanding, 'scaffolded responses' to reduce overwhelm and personalised support that is tailored to the right level for the user. Experts say it is 'especially useful' for homework help, test prep and learning new topics. It also features knowledge checks in the form of quizzes and open-ended questions, along with personalised feedback. The mode can also easily be toggled on and off during a conversation. Those wanting to use it should select 'Study and learn' from tools in ChatGPT. 'Instead of doing the work for them, study mode encourages students to think critically about their learning', Robbie Torney, senior director of AI Programs at Common Sense Media said. 'Features like these are a positive step toward effective AI use for learning. Even in the AI era, the best learning still happens when students are excited about and actively engaging with the lesson material.' It's hoped the latest update will prevent students from over-reliance on AI when it comes to learning. One person previously described their sister's use of ChatGPT for homework as 'heartbreaking'. Writing on Reddit, the user, called 'Alone_Yam_36', said: 'She is 11 years old and in the last year of elementary school. She uses ChatGPT on my account so I can go back to previous conversations and see what she wrote. So she starts and asks ChatGPT to convert three minutes into seconds. But the thing is she proceeds to ask ChatGPT to answer every single question in her two-page math homework. These include very simple questions like how many hours are there in one day and seven hours. She asked ChatGPT without even trying to solve it. If there's a lot of elementary school kids doing this then the future is dark.' Another person said: 'For many of us it's a tool. For a lot of people – and the younger generation – it looks like it'll be their second brain (if not their main), which is very concerning.' While one pointed out: 'She'll learn her lesson when she starts failing exams.' Schools in the UK have previously said they are looking to move away from homework essays due to the power of online artificial intelligence. Staff at Alleyn's School in southeast London are rethinking their practices after a test English essay produced by ChatGPT was awarded an A* grade. Meanwhile, around three-quarters of students admit to using AI to help with homework.
A recent survey commissioned by Berkshire-based independent girls' school Downe House saw 1,044 teenagers aged 15 to 18, attending both state and private schools across England, Scotland and Wales, polled about their own use and views of artificial intelligence earlier this year. More than three quarters (77 per cent) of those who answered admitted to using AI tools to help complete their homework. One in five admitted regular use, with some saying they felt disadvantaged if they did not use it. In a recent blog post on AI in schools, the Department for Education wrote: 'AI tools can speed up marking and help teachers understand each pupil's progress better, so they can tailor their teaching to what each child needs. This won't replace the important relationship between pupils and teachers – it will strengthen it by giving teachers back valuable time to focus on the human side of teaching that makes all the difference to how well pupils learn.' What is OpenAI's chatbot ChatGPT and what is it used for? OpenAI states that its ChatGPT model, trained using a machine learning technique called Reinforcement Learning from Human Feedback (RLHF), can simulate dialogue, answer follow-up questions, admit mistakes, challenge incorrect premises and reject inappropriate requests. Initial development involved human AI trainers providing the model with conversations in which they played both sides - the user and an AI assistant. The version of the bot available for public testing attempts to understand questions posed by users and responds with in-depth answers resembling human-written text in a conversational format. A tool like ChatGPT could be used in real-world applications such as digital marketing, online content creation, answering customer service queries or, as some users have found, even to help debug code. The bot can respond to a large range of questions while imitating human speaking styles. As with many AI-driven innovations, ChatGPT does not come without misgivings. OpenAI has acknowledged the tool's tendency to respond with 'plausible-sounding but incorrect or nonsensical answers', an issue it considers challenging to fix. AI technology can also perpetuate societal biases like those around race, gender and culture. Tech giants including Alphabet Inc's Google have previously acknowledged that some of their projects that experimented with AI were 'ethically dicey' and had limitations. At several companies, humans had to step in and fix AI havoc. Despite these concerns, AI research remains attractive. Venture capital investment in AI development and operations companies rose last year to nearly $13 billion, and $6 billion had poured in through October this year, according to data from PitchBook, a Seattle company tracking financings.
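The RLHF process the explainer describes can be made concrete at the data level: trainers first write both sides of example dialogues, then rank alternative model replies so a reward model can learn human preferences. Below is a purely illustrative sketch of what the two kinds of training records might look like; the field names and contents are hypothetical, not OpenAI's actual format.

```python
# Illustrative sketch of the two data stages of RLHF, as described in
# the article. All field names and example text are hypothetical.

# Stage 1 (supervised fine-tuning): a human trainer plays both sides,
# user and assistant, producing demonstration dialogues.
demonstration = [
    {"role": "user", "content": "Why is the sky blue?"},
    {
        "role": "assistant",
        "content": "Shorter blue wavelengths of sunlight scatter more "
                   "in the atmosphere (Rayleigh scattering), so the "
                   "sky appears blue.",
    },
]

# Stage 2 (reward modelling): trainers rank alternative replies to the
# same prompt; a reward model trained on these rankings then guides
# reinforcement learning of the chatbot's responses.
preference_record = {
    "prompt": "Why is the sky blue?",
    "chosen": "Blue light scatters more than red in air, so the sky "
              "looks blue.",
    "rejected": "Because it reflects the colour of the ocean.",
}
```

On this reading, behaviours such as admitting mistakes or declining inappropriate requests are learned because trainers consistently preferred replies that exhibit them, rather than being hard-coded rules.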


West Australian
24-05-2025
- Entertainment
- West Australian
The Washington Post: Teens are sexting with AI. Here's what parents should know.
Parents have another online activity to worry about. In a new tech-driven twist on 'sexting', teenagers are having romantic and sexual conversations with artificially intelligent chatbots. The chats can range from romance- and innuendo-filled to sexually graphic and violent, according to interviews with parents, conversations posted on social media, and experts. They are largely taking place on 'AI companion' tools, but general-purpose AI apps like ChatGPT can also create sexual content with a few clever prompts. Experts warn the chats with AI can lead to unrealistic expectations of sex and relationship dynamics. Parents worry about the dangers to their children's mental health, or about exposing them to sexually explicit scenarios too young. Some think the tools might have some value, with limits. We tested 10 chatbots ourselves to identify the most popular AI characters, the types of conversations they have, what filters are in place and how easy they are to circumvent. AI chatbots are open-ended chat interfaces that generate answers to complex questions, or banter in a conversational way about any topic. There is no shortage of places minors can find these tools, and that makes blocking them difficult. AI bots are websites, stand-alone apps and features built into existing services like Instagram or video games. There are different kinds of chatbots. The mainstream options are OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini, and Meta AI, which recently launched as a stand-alone app. These have stronger filters, and their main products aren't designed for role-play. But they can partake in suggestive or romantic conversations and create sexual content with the right prompts. They can switch over to voice-based chat, reading the replies aloud in realistic — even sultry — sounding voices. Companion AI tools are far more popular for suggestive chats, including Replika, Talkie, Talk AI, SpicyChat and PolyBuzz. ChatGPT and Meta AI have also launched companion-chat options. These types of tools have libraries of characters and preprogrammed personalities, many designed with titillation in mind, like those from romance novels or the many 'step-sibling' options on Meta's AI Studio. We tested a Meta AI Studio chat, and the flirtatious direction of the 'Step sis' character was immediately clear. The smaller apps tend to have fewer limits or filters. Look for anything that has 'AI girlfriend,' 'AI boyfriend,' or 'AI companion' in the name or description. More are being added to app stores daily. It's not just sex, according to parents and experts. Teens are having a range of conversations with character bots, including friendly, therapeutic, funny and romantic ones. 'We're seeing teens experiment with different types of relationships - being someone's wife, being someone's father, being someone's kid. There's game and anime-related content that people are working through. There's advice,' Robbie Torney, senior director of AI programs at US family advocacy group Common Sense Media, said. 'The sex is part of it but it's not the only part of it.' Some confide in AI chats, seeing them as a nonjudgmental space during a difficult developmental time. Others use them to explore their gender or sexuality. When they partake in sexual chats, they vary between innuendo and graphic descriptions. The chats can involve power dynamics and consent issues in ways that don't mimic the real world.
'Where some of the harm or risk comes in is the bots aren't programmed to respond in the same way they would in a real relationship,' Mr Torney said. The default settings on most AI companion tools allow, and sometimes encourage, risqué role-play situations, based on our tests. Some stop before actual descriptions of sex appear, while others describe it but avoid certain words, like the names of body parts. There are work-arounds and paid options that can lead to more graphic exchanges. Prompts to get past filters — sometimes called jailbreaks — are shared in group chats, on Reddit and on GitHub. Sometimes all it requires is patience and ignoring warnings. A common technique is pretending you need help writing a book. Many apps have a built-in filter based on the age of the user. Meta said it prevents accounts registered as minors from searching for 'romance' AI characters, and that sexually explicit chats are prohibited for users under 18. Its parental controls can show parents what AI characters their children have used in the past week. Google says Gemini has different content restrictions for people it knows are under 18, while some AI apps have teen modes that need to be turned on. In a recent risk assessment of companion AI apps, Common Sense Media found that safety measures like content restrictions and age limits were often easily circumvented. In our own tests, we were able to easily work around filters to generate detailed sexual content while logged in as an adult. One filter we ran into while testing one of the most popular companion AI apps displayed its warning only after the conversation had already described sex. Experts agreed that for children and young teens, it never makes sense to have access to unmonitored chatbots because of the risk that they can encounter inappropriate content. For older teens, the choice is more nuanced, the experts said, depending on how much exposure to sexual or intimate themes they already have, the types of content they're accessing and what their parents consider appropriate. Potential harms from AI bots extend beyond sexual content, experts said. Researchers have been warning AI chatbots could become addictive or worsen mental health issues. There have been multiple lawsuits and investigations after teens died by suicide following conversations with chatbots. Common Sense Media also flagged harmful advice, including around self-harm, as an issue with companion bots. Similar to too much pornography, bots can exacerbate loneliness, depression or withdrawal from real-world relationships, Megan Maas, an associate professor of human development and family studies at Michigan State University, said. They can also give a misleading picture of what it's like to date. 'They can create unrealistic expectations of what interpersonal romantic communication is, and how available somebody is to you,' Associate Professor Maas said. 'How are we going to learn about sexual and romantic need-exchange in a relationship with something that has no needs?' Some experts said there can be advantages to teens exploring in a somewhat safe space, without the unpredictable factor of a human being on the other side. It's a chance to practice some limited interpersonal skills, or ask questions someplace other than Google. 'If you have a kid who has an AI chatbot and they're mostly asking this bot questions they're too embarrassed to ask you or a nurse or a therapist, then that chatbot is doing good things for that kid,' Associate Professor Maas said.
However, the bots could replace much-needed human experiences, like rejection. Monitor what apps your children and teens are using, and if they require logins, make sure they are using accounts set up with their accurate age. Most built-in parental controls on tablets and smartphones will let you require permission when a child downloads a new app. Many of the companion apps are labeled 'Teen' on the Google Play store and rated for ages 17 and up on iOS. Set up your child's devices with their correct age and add limits on app ratings to prevent them from being downloaded. Using their proper age on individual chatbot or social media accounts should trigger any built-in parental controls. However, most chatbots can easily be accessed online, where accounts require only a self-reported age. Some internet-level filters can block access or flag specific language. Beyond regularly finding ways around parental controls, tweens and teens can access the internet at their school or on friends' devices. Parents may want to prepare them for a world where they will need to know how to navigate these tools. Experts suggest creating an open and honest relationship with your child. Teach them about what AI is and isn't, and how tech companies collect and use personal information. Check in regularly with your kids and make sure they feel safe coming to you with questions or issues. Have age-appropriate conversations about sex, and don't shy away from embarrassing topics. © 2025 The Washington Post


CNN
29-01-2025
- CNN
What this teenager wants you to know about the damaging effects of AI
My friend Sammy turned around at his desk and grinned at me. He held out his phone and waved it in front of my face, the universal sign for 'Look at this!' It was a video of a capybara in a swimming pool, seemingly treading water like a human does, feet down and arms circling. Last year, I would have believed the post I saw on that screen and accepted it as part of what I know to be true about the world: capybaras tread water like people. Now, I wasn't sure. Even though teens spend so much of our lives online, a new study by Common Sense Media found that teens between the ages of 13 and 18 increasingly do not trust the content they consume online. With the emergence of generative AI — a type of artificial intelligence that produces content such as images, text and videos — it has become easy to quickly produce fake visual content. My friends and I have noticed these AI-generated images flooding social media platforms.
What's real and what's not?
Many teens struggle to figure out what online content is real and what is fake, according to the study. Some 46% said they either know that they have been misled by content or suspect that they have been, while 54% have seen visual content that 'was real, but misleading.' Collected by Ipsos Public Affairs on behalf of Common Sense Media from March to May 2024, the survey data is from a nationally representative survey that includes responses from 1,045 American adults (age 18 or older) who are parents or guardians of one or more teens age 13 to 18, and responses from one of the teenage children in each of those parents' families. Maybe the teens who reported that they haven't been misled are just absolute aces at spotting fake images, but I wonder if they just haven't realized that they have, in fact, been swindled by AI images. 'Many kids don't see flaws that they aren't trained to detect,' said Robbie Torney, Common Sense Media's senior director of AI programs, in an email. 'Knowing that the media we consume can be retouched, modified, or even fake, it's crucial that we all learn to think critically about information we encounter.'
Most teens have spotted deceptive content
I think that I'm generally pretty good at spotting AI images, but there was once a time when I believed that I couldn't be fooled. That was mostly in the early days of generative AI, when people in AI-generated photos typically had 15 fingers. But I no longer think I can't be fooled. More than 70% of the surveyed teens who have experienced deceptive visual content reported it has changed how they view the accuracy of content. I'm part of that group now. After a few too many cool posts of alleged historical pictures that turned out to be fake, I generally distrust most pictures I see if they are not posted by someone I know. I examine the fingers of any human picture first, since I know AI can still get the number of human fingers wrong. I have begun checking the comments of nearly every post I come across to see whether other users believe it. Doubt in the back of my mind now colors my response to everything I look at, and what I think. Since third grade, I have been warned about checking sources and not believing everything you read. I have been generally distrustful of information from the internet for years, so AI has not suddenly damaged my trust in online information. What has changed is my trust in images. Seeing is not believing anymore. I used to accept photos as fact.
Before AI, there were programs people used to photoshop images, but those images were still real images of real people, just tweaked. Colors could be changed, and things could be cut out — but the original photo was an image of real life. As generative AI improves, however, completely fake content is getting harder for me to spot.
Many teens speak through images
One of the primary ways that my friends and I communicate is with images and videos. Memes are their own form of communication. Sharing an Instagram post about some weird scientific fact or interesting historical photo shows you know what a friend would find interesting. Sending someone funny video clips is a form of affection. It tells someone you remember them and lets you share an inside joke. While I'm no longer a heavy social media user, I still probably see hundreds of images a day online. Other teens who spend more time on Instagram than I do or who use TikTok might see thousands. If those 'photos' are just images cobbled together by a machine, wholly detached from anything real, what can I believe? I already doubt everything I read online. This mistrust is spreading into teens' offline lives. Studies show that teens already have low trust in institutions such as the government and news media. Doubt in the world around us is rampant among my friends. It's common for me to hear dismissive statements thrown around, even about the textbooks we read in class. I've heard refrains like 'Eh, maybe they did that,' 'Might be true,' and 'I don't believe that' thrown out about anything and everything. My classmates and I often doubt the news, history and authority figures. If teens can no longer trust what they read, hear or see online, why should they trust anything? In a generation in which mistrust could very well become the default approach to life, what can a person rely on? Or hope for? What's the point of caring about anything, if it could all be false? One of my teachers thinks that members of Generation Z (born between 1997 and 2012) are nihilists, whether they know it or not. My friends and I struggle with these questions, and I think my teacher may be right.
Generative AI needs a label
How does this get fixed? The Common Sense study found that 74% of teens agree that generative AI 'should have visible warnings that its outputs could be harmful, biased, or wrong.' And 73% of teens want content generated by AI 'to be labeled or watermarked' to show its source. I believe this demand is because we're losing ground in what we can believe. I wish that I could trust what I see, and AI-generated content being required to be marked as fake is a reassuring idea. 'The growing mistrust in AI mirrors historical challenges with media literacy,' Torney said via email. 'Just as we learned to evaluate traditional media by asking 'who created this?' and 'why was it made?', we must now apply these same critical thinking skills to AI-generated content.' It's a good solution, no doubt. But it still means that we cannot initially trust what we see. Instead, we must examine it, question it, and have faith in ourselves that we've come to the right conclusion. People say that social media and increased time online are making us lonely because of the absence of physical connection. And maybe that's part of the deep loneliness, and the increasing anxiety and depression we know teenagers today report. But if this loss of trust continues, I think we'll become even more individualistic and form shallower connections with each other.
Mistrust should not be our default setting
If mistrust becomes the default approach to life, what would be the point of doing anything for other people that you don't already know? Will we be able to get to know each other deeply, or successfully communicate? Can you really build a life when you don't know what is real and what is fake, when you can never trust what you see, what you learn, or how the world works? Artificial intelligence is just that: artificial. When some of the main tools you use as a teenager to connect with real friends — memes and social media — are corrupted with artificiality, how do you make real connections? When the main way teenagers learn about the bigger world outside of their schools and towns is the internet, can we ever understand the world?