
Latest news with #Futre

Podcast: The Premortem on AI in Education

Yahoo

30-04-2025



Class Disrupted is an education podcast featuring author Michael Horn and Futre's Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on Apple Podcasts, Google Play or Stitcher.

In this episode of Class Disrupted, hosts Michael Horn and Diane Tavenner chat with Rebecca Winthrop, a senior fellow and director at the Brookings Institution, about the impact of AI on education. The conversation kicks off by highlighting Rebecca's idea of a premortem approach, which involves anticipating the negative impacts of AI before they occur and strategizing how to mitigate these risks. They identify key concerns such as offloading critical thinking, manipulation, and the effects on socialization — and consider how this technology might catalyze a rethinking of the purpose of education. Listen to the episode below. A full transcript follows.

Michael Horn: Hi everyone, this is Michael Horn. And what you're about to listen to on Class Disrupted is the conversation Diane and I had with Rebecca Winthrop. Rebecca is the coauthor of a terrific new book, The Disengaged Teen. She is the head of the Center for Universal Education at the Brookings Institution, and she has helped stand up a global task force there on AI and education, which forms the basis for our conversation today. Rebecca brings forward a couple interesting perspectives that I want to highlight here. Number one, the importance of doing a premortem on the impact of AI in education. And as she said, a premortem doesn't focus on the optimistic case for AI. It fast-forwards the story to say, knowing what we know now, let's get ahead of this and imagine the negative impacts from AI and then guard against that. Second, in her mind, the big premortem risks to worry about are three things. Number one, we can offload cognitive tasks to AI, but as she said, the child development people don't know what kids have to do on their own and what actually can be offloaded to AI without harmful consequences. Second, she worries about manipulation. And third, she worries about the impact on socialization from AI. One thing I'm leaving this conversation with is… Rebecca hopes, I guess I would say, that AI can be this thing that spurs us to have this national dialogue around the purpose of education so that we can really rethink what schooling looks like. Is that the way that this happens? Is it such a big shock that we'll all come together and have these conversations? Or is it more likely that the real action around system reinvention or system transformation will occur from the grassroots? That is, as in individual communities, education entrepreneurs create new forms or systems of schooling that gain traction over time as more and more people migrate to them, and we are left with a series of different systems that have a series of different purposes to them. That's the question that I'll leave thinking more about from this episode that you're about to hear. I hope you enjoy.

Michael Horn: Hey Diane, it is good to see you in a school as well. That is probably pretty energizing. And I will say on this show, the hits keep on rolling.
I'm loving all that our guests, who have such different perspectives and vantage points on the question around AI and education, are bringing, and I am very certain today will be no different.

Diane Tavenner: I couldn't agree more, Michael. And as those interviews start to become public, we are now hearing from our listeners, which we love, and honestly, it's one of the best parts of doing this podcast, besides getting to have really fun conversations with you and geeking out.

Michael Horn: I'm okay taking a backseat to the listeners.

Diane Tavenner: But I hope we keep hearing more questions and suggestions, especially at this time in the season when we start to think about what's next. But before I get too far ahead of myself, we have a real treat here today. I think we do.

Michael Horn: Indeed. We have my friend Rebecca Winthrop on the show, and Rebecca is a senior fellow and director of the Center for Universal Education at the Brookings Institution. Her research focuses on education globally. That's how I got to know her most deeply. She pays a lot of attention to the skills that young people need to thrive in work, life and as constructive citizens. So really big, weighty questions. She's also the co-author, with Jenny Anderson, of a very highly acclaimed new book, The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better. Definitely check it out.

Michael Horn: It's obviously sort of a zeitgeist at this moment, sadly. And the book does a great job, I think, tackling it, helping people put it in perspective and sort of think about, where do I want my kid on these different journeys as they're learning? And it's not necessarily what you think the answer might be, for those listening. So definitely check it out. For our purposes in this conversation, I will say not only does the book talk a lot about the themes that we talk a lot about on this podcast, but Rebecca is also spearheading the Brookings Global Task Force on AI and Education, and we will link to that and the book in the show notes. But suffice to say, she's been thinking a lot about the questions we're most interested in, Diane. And I feel lucky we get to record with her, because Rebecca has been, like, getting to hang out with people like Drew Barrymore. And I think Hoda was at one of your book events, Rebecca, so you are rolling. The book has definitely hit a nerve. Thank you so much for joining us. It's great to see you.

Rebecca Winthrop: Oh, it's a total pleasure to be here. It's a treat for me, too.

Michael Horn: You can lie if you say that, given all the folks you're getting to hang out with. But before we get into the approach of your thinking around AI and education and some of the questions that you're asking, I would love to hear how and why you got interested in this topic in the first place and how you've gone about learning about, you know, AI in general and AI in education specifically.

Rebecca Winthrop: Maybe in reverse order, how I've gone about learning about it. I mean, I think all of us, I would assume all of us, maybe I shouldn't make this assumption, are out trying stuff in our own lives. So I've gone about it, you know, when something new hits, I just want to check it out. So, you know, I'm now a steady user of GPT-4, paying my little, you know, subscription. And it is so much better. And I've tried, you know, the DALL-Es and this and that, like PowerPoints. Make an illustration. Do this. What can it do? Like, what can it do?
Just, just because, it's experiential learning, right? Like, you get a little bit more of a sense of its power and its limitations. Well, maybe that's just how I learn, rather than just reading the text. So in terms of going about learning about it, the first thing I've done is just been playing around with it. And I'm no expert by any means, but it certainly has helped me wrap my head around the massive seismic shift that generative AI is. I think the thing, and this gets to the first part of your question, that I was most, you know, almost emotionally struck by was how crazy it is to be able to interact with a machine in my own words. Before, we had to learn a different language. We had to learn code to interact and make machines do things. And now it's in our own language. And that right there, to me, is a huge fundamental shift that we need to take incredibly seriously. And so then from there I started getting really interested in it, because who can not be interested, if you're in education and everyone's talking about it. But also I started being really worried. I was initially very worried about it because I had just come out of all this book research Jenny and I had been doing for The Disengaged Teen. And the big highlight message there is kids are so deeply disengaged in school. And Diane, this has been your life's work, to find a new way of doing school where they're not disengaged. So this is not news. And Michael, you have been on the forefront of how to use tech well for a long, long time. So I've been learning from you for years. So it's not news to both of you. But this book is a sort of broad-audience book. And we found there's four modes of engagement that kids show up in. They show up in passenger mode. Most kids, we partnered with Transcend, 50% of kids, that's kind of their experience in middle school and high school. Achiever mode. They're trying to be perfect at everything that's put in front of them and end up actually being very fragile learners. Resistor mode. These are the quote-unquote, you know, problem kids. That's who we, broadly, society, think is disengaged, and they're avoiding and disrupting, but they have a lot of agency, a lot of gumption. And if you can switch their context, they can get into explorer mode. And the thing that I thought about: GPT-3 launched sort of right when we were towards the end of writing the book, and I was so worried that it would massively scale how many kids were in passenger mode if we didn't do it right, if we didn't figure it out. And, you know, lots and lots of people are doing incredibly good work in different pockets around the globe. And anyways, that's why we launched our Brookings Global Task Force on AI, to try to bring those questions together and bring a slightly different methodology.

Diane Tavenner: Rebecca, that sort of leads into the first place I'd love for us to go, which is, you know, one of the ways that you approach this work is through premortems. And for, you know, people who don't know what a premortem is: oftentimes we do postmortems after something to, you know, dissect what went wrong and what went right and whatnot. But the premortem is when you try to think about that before you're even in it, to really, you know, visualize and imagine the potential negative impacts that could materialize so we can do something about it before we get there. It's conceptually a more empowering way of thinking about things.
And so, you know, I'd love to unpack your sort of premortem thinking about this. And we're going to start with the positive. So talk us through, if you will, the positive case for AI in education, as you've done this sort of premortem forward thinking. What are you excited about? What's the possibility? Right.

Rebecca Winthrop: Yeah, well, Diane, I'll get there on the positives, but I want to talk a little bit about the premortem piece, because what you just did is exactly what everyone in education has done when we started this premortem exercise. Because in a premortem, you do not start with the positive, which actually has been a problem. The people in education, our people, all of us in our community, are sunny optimists. We believe in the potential of human development. And every time we did the proper premortem, we finally had to switch it up. There's a whole science behind premortem thinking and starting with the risks. And people, like, rebelled. They didn't like it, they felt uncomfortable. So anyways, that's an interesting observation. But the idea of the premortem came out of sort of discussions we've been having internally. It actually came out almost a year ago, last February, when we had a great meeting with our leadership council. We have a leadership council at our center, and HP hosted us. We were in the Hewlett garage and it was amazing. And then we did a broader conference, and we were just around the table trying to figure out how to wrap our hands around how different Gen AI is and what it means for education, knowing that there's incredible conversations happening in a range of other pockets. And one of the things that I believe strongly in is that we should always look broadly, because a solution set can come from anywhere, even outside of our sector: from the health sector, in this case from cybersecurity. So this is a typical thing done in other sectors, cybersecurity being one. And we can't, your listeners might know of one, but we can't find a single instance where it's been done in education. And I actually think we should do it for every tech product before we roll it out. And it basically is: let's figure out how it could all go wrong, then put that all on paper, and then figure out how to mitigate those risks so it doesn't all go wrong. And we should have done this with social media 10 years ago. If we'd had child development folks, educators, teachers, therapists, counselors sitting around the table designing social media with developers, I am sure we could have avoided at least 70% of the harms. Now, would companies have gone along with it? That's a different, you know, question. Let's parenthesize that. These are things you can get at if you go through a very systematic thought process, and we have an incredible colleague, Mary Burns, working with us, leading this. It's a very sort of systematic process to think about the risks. Yeah, you want to speed up and go straight to the benefits.

Diane Tavenner: Flip it. We don't have to follow that. Like, let's flip it. And so let's start with that. I mean, the worst-case scenario of a premortem is the patient dies.

Rebecca Winthrop: Right.

Diane Tavenner: And so, like, what's the kind of patient-dying scenario of AI in education? Make that case for us, and yeah, let's do it in that order.
Rebecca Winthrop: Yeah, the premortem is like moving the autopsy forward, right? How could they die? So I want to caveat this, and you guys have thought about this deeply, so please chime in with your own versions, that we are in the midst of the premortem research on the risks side, which includes lots of focus groups with educators, you know, with kids, with ed leaders, our steering group members, etc. So a few of the things. This is going to be the Rebecca version; this is not the entire task force. A few of the things on the risks that give me pause come from talking to, and we have, you know, a number of colleagues on our team who are learning scientists, neuroscientists, and then talking to other colleagues outside of Brookings who know sort of child development, know brain science, know brain development. And as far as I can tell, we do not know. We, the royal we, the people in child development, do not know: what are the things that kids have to do on their own to develop critical thinking, you know, agency, key skills, and what could you offload to AI? And to me, just saying that, I'm like, oh my God, I'm so nervous. Like, I'm really nervous. I'm nervous for my kids, I'm nervous for the students of the world, because, you know, obviously Gen AI can do so much for us. So pretend one of the main ways that kids develop critical thinking through education at the moment is learning to write an essay with a thesis statement, picking evidence that supports their argument, putting it in logical order. And let's be honest, what seventh graders produce as essays is not a great contribution to humanity. The value is not the product of the essay.

Rebecca Winthrop: It's the process that they have to go through, that logical thinking process, understanding how you parse truth from fiction. It's as basic as that. Like, where is data? What is evidence? How do you analyze it for arguments? So there may be another way to develop that critical thinking skill, but at the moment that's sort of one of the main ways, and until we come up with another way that all kids can do, it makes me very nervous that kids will sort of, kind of, basically offload critical thinking development to Gen AI. That's the thing I'm most worried about. And the second I'm most worried about is, I mean, we are at the tip of the iceberg with what this technology can do. And I am sure we're going to have all sorts of incredible things in the next seven years that we couldn't even imagine, that are, like, straight-up Star Trek, right? With neural links, you know, being able to talk to technology. We can already do that. And, you know, robotic, R2D2-type scenarios. And so I do worry about manipulation, and I do worry about socialization, interpersonal socialization, because we see what just a phone, flat-screen, text-message interaction does to kids' sort of ability to interact face to face. So those, to me, are the three things that I'm most worried about. But the first one is what makes me really worried. Are you guys worried about that? Like, how are you thinking about this?

Michael Horn: Oh, I love when you turn it back on us. We're asking all you folks so we could develop a point of view on this. The quick answer for me is yes, I am nervous about it, given that, the current way schooling is designed, we have not thought about how to mitigate it.
Which maybe is my chance to turn it back to a question to you, which is, part of the premortem is identifying risks. And all three of these risks, I think, are big. Manipulation is big. Socialization, we had an entire episode on that question and what relationships look like in the future, forget about schooling for a moment, right, with AI bots. And so I guess, having identified those as three big ones, what should we do? What's the mitigation piece, structurally, assignment-wise? How do we think about this so that we don't live right into those?

Rebecca Winthrop: Yeah, we haven't gotten there yet in the task force. So this again.

Michael Horn: Yeah, just speculation. Well, but let me sharpen the question, actually, Rebecca, because you just wrote this big book, right, or I should say important book, The Disengaged Teen, where you thought a lot about the negative implications, right, of being in passenger mode and sort of the listlessness, which I think could be a byproduct of maybe all three of these. Certainly two of the three. And so how have you thought about that?

Rebecca Winthrop: Yeah, well, for the mitigation piece, I'm going to take your question broadly, Michael. For me, I have, like, a sort of sequence of levels of things we have to think about. So, like, for me, the biggest thing, and you guys have talked about this on your podcast, is really thinking through and being very clear when we're talking about adult-mediated use of particularly Gen AI, less so predictive AI, versus student-mediated or child-mediated use. And I mean that for right now. Like, we're in a massive point of transition. We will eventually come to some new normal. But in our current sort of transition, the discourse around AI and education is so fuzzy and flimsy and unrigorous. You guys are great because you're surfacing that. And so often we hear, you know, AI can transform education, it'll be great, and people reference things. And I think, you know, it depends. Certainly in technologists' discourse, it's true that AI can transform many, many things. It's unbelievable. Like, protein folding, incredible. Spotting viruses in wastewater, amazing. Just rapid breakthroughs that are incredible. And all of those are run by adults who have deep critical thinking and subject-matter knowledge and are using the AI as a tool. And then the discourse goes: and then we'll just give it to schools and it'll be great and kids can blah, blah, blah. And it's like, no, well, give it to whom in schools? So, like, let's be very clear. Is it helping teachers massively teach better, or is it helping them do the same more efficiently? Diane, you've made this point; you know, those are two different things. And it's very different from just sort of blanketing Gen AI across pedagogy for students to use. You know, take the example of the essay, right? First of all, kids don't have the content knowledge to assess it. So I've spent my whole, you know, 20 years talking about the sort of academic skills plus. And now I'm like, oh my God, let's not forget about the content knowledge. Like, how will kids know how to assess, the sniff test, does this seem right?

Michael Horn: Actually, can we put a pin in that just for one sec? Because that's interesting.
Like, you've been pushing us to be like, okay, not knowledge for just its own sake, but to do these skills, and now you're worried we might all sort of blow past it and forget that the knowledge actually is an important base. Am I hearing you right?

Rebecca Winthrop: 100%. Like, I've been absolutely pushing, which, you know, you both have too, the bringing together of knowledge acquisition with knowledge application. And I do think, if we do it right, that's to me the sunny possibility with Gen AI: maybe it could bring those two things closer together in a more scalable, systematic, education-system-wide effort. But I am very worried that people will just forget about the knowledge acquisition piece, and that is very scary.

Diane Tavenner: Can we stay here for a minute? Because I keep asking people to think about the system, and no one seems to want to go there with me. You're the first person. So sorry, I can't help myself. I'm so excited that someone wants to actually talk about a system, and especially in this space, because, you know, I love this space. So you're thinking that there's this process of acquiring knowledge, and I think we're aligned on this: knowledge for knowledge's sake is not super useful if you don't have skills. What are you doing with that knowledge? Are you analyzing? Are you, you know, making an argument? So paint me a picture of how AI might help bring those closer together in a learning system, if you will. Can you imagine that?

Rebecca Winthrop: I'm not sure I have a clear vision at a classroom level, but I have a clearer vision at the system transformation level.

Diane Tavenner: Okay, okay, that's great.

Michael Horn: Let's go there.

Rebecca Winthrop: So, in system transformation theory, there's the real sort of shifting of the purpose of a system, which is the hardest. This is straight-up Donella Meadows systems transformation theory, which argues, maybe some of the listeners aren't familiar, that there's different levers to shift systems sustainably. Some of them are shifting how we measure things, shifting how we allocate resources, and those are all important and good, but we, broadly people who shift systems, but certainly in education, tend to get stuck there. Which means: let's shift our assessment, which is important, we need to do it; let's shift how we put money in. But it's much harder to really shift a system that way than if you shift the shared vision and purpose of what an education is for. And so that's a cultural shift. It's a mindset shift. And underneath that, it includes shifts in power dynamics. So to me, Gen AI provides an opportunity to be a lever to shift sort of the purpose of ed. Because if ChatGPT and any other Gen AI tool can pass all the exams that we're gatekeeping our systems with, can do most of the assignments, and if it can't do it now, it will, do you know what I mean? It's going so fast. Exactly. So then it will force us, it is forcing us, which is part of the big discussion in why we did this Brookings task force, to think deeply about what is the purpose of education. I mean, it's a massive freaking logistical enterprise getting all kids in a jurisdiction to a place at the same time of day.
It's just incredible what schools do logistically. It might be hard to break that up until we have a different world of work, because, you know, schools are also doubling as childcare in every single country in the world. It's the largest nationalized, you know, government-supported childcare system. So I'm not sure we're gonna just have kids roving around the world.

Rebecca Winthrop: But if we have something we're doing with kids at certain hours a day, what is the purpose of it? Is it to identify a problem in their community and then start working backwards on what needs to be fixed, and they try to fix it and learn the stuff, here's content knowledge that they may need that would inform them on how to fix it, and teachers are scaffolding and, you know, curating problem-solving expeditions, and that's the core thing of what we do? You sort of learn knowledge, and you're using Gen AI as a dialogue agent. I mean, I think Khanmigo is really interesting, and I think it's a useful use case of how student interfacing could be helpful, but more: does it free up teachers' ability to teach differently? Because I don't think we will get away from teachers, nor do I think we should get away from teachers, because the human connection piece is so crucial. So to me, it's really the deep thought about what's the purpose of education now. We can't just keep going along, assigning the same tests and trying to ban cheating, which is a short-term, totally understandable emergency response, because we don't know what we're doing and we haven't got our hands around this. And boy, I wish, you know, tech companies would have given school districts a heads-up.

Diane Tavenner: Yeah, maybe. I'm not sure that would have mattered. I must say, I do love what you're saying. You know, years ago we created this whole experience for educators to go through. That was: how do you create an aligned school model, sort of an elegant model? And literally, step one is to determine the purpose of education. So you're speaking my language here. And it's an interesting thought that this could be the lever that sort of forces us to rethink, because the purposes it's serving right now are so obviously met in some other way that we don't have a choice. We have to revisit that. It's a fascinating way to think about how it could drive system change.

Rebecca Winthrop: Just on that, Diane: Jenny and I, in The Disengaged Teen, our meta argument is around why engagement matters. And really we're focused on, you know, explorer mode. We all need more time in explorer mode, which is agentic engagement, the marriage of agency and engagement. And our sort of big argument is it's really time to move from an age of achievement to an age of agency in education. And we are seeing the age of achievement fraying. We're seeing it in mastery and competency-based learning, you know, the College Board shifting up its ways of assessing, new AP test versions. We're seeing it fraying, and Gen AI, I think, just accelerates the fraying of the age of achievement, which is all about sort of, you know, content acquisition and synthesis and skills within that, and sort of repetition back out, but really following instructions.

Diane Tavenner: Yeah. Talk for a moment about the benefit of an age of agency. What does that look like?
Why is that a direction we would want to go? And how does maybe AI support it?

Rebecca Winthrop: Right. I think AI could support it, though I'm not sure it will. I think it could go either way at the moment. It really depends on how we use it. But when we talk about an age of agency, the piece that we are really leaning into is all the evidence around basically agentic engagement, which, you know, Diane, Summit, you designed for agentic engagement. So this idea that kids have agency over their learning and they have an opportunity to influence the flow of instruction in little or big ways. Summit is on the extreme; that's a total redesign. But you can do it in schools. Educators can do it in their classrooms by giving choice, by asking for feedback, by, before starting a lecture, asking kids: where do you want to start? Do you have any questions about this topic? Like, we're doing the solar system, where do you want to start? Just that shifts the entire mindset of a learner, right? Much more engaged. So A, they're more engaged. B, they're developing skills to really be able to independently chart their learning journey, which is what they're absolutely going to need when they leave school. No one will be, you know, spoon-feeding them. And we see that in the kids who knock it out of the park in the age of achievement. We found so many kids in our research who were excellent achievers in school and fell apart in college, because no one is there, you know, spoon-feeding them. And the other piece is: they're more engaged, they're getting sort of agency over their learning, they're learning much better skills, and they're much happier. It's so much more fun to have some autonomy and ownership over your life and to try to be the author of your own life. And those are all the reasons why we think it is really imperative, and Gen AI has accelerated this need, because, you know, more than ever now, kids are going to have to navigate this world where you've got Gen AI, you're going to have advanced robotics, you're going to have neural links; soon we're going to be, I'm sure, interacting with, you know, new robotic people. It's a wild world that's coming down the pike, and our kids need to lead it rather than be led by it.

Diane Tavenner: Yeah. Michael, I feel like I'm hogging all the time. Do you have a question?

Michael Horn: Well, maybe a last question before we wrap up, Rebecca, which is: let's say we have the purpose conversation. If not nationally, at least in strong pockets of communities, we commit to an age of agency and we start to think about what that is. Where does AI fit in? You've been impressed by it in certain cases. So what's the positive case to be made for it in this rethought purpose of schooling with a coherent design?

Rebecca Winthrop: I mean, I think the thing that I am most potentially optimistic about, and I know, Diane, I think you disagree with me, but in the age of agency, if we're rethinking the purpose, a huge barrier to that is teacher expertise, practice, prep. And we've got a ton of teachers who've been trained in the age of achievement, and it is not their fault. They're teaching their heart out and they're doing their job. And you know, we're very clear in the book that this is not a problem with teachers.
They're squished from above by the system and squished from below, frankly, by parents sort of pressuring them. And so could Gen AI really unlock teachers' ability to be experts in a new model? Let's pretend the school is organized around solving problems; I think a huge piece of that solving problems should be around citizenship and civics, in sort of personal, collective and community-wide problems. But I feel like, if done well, it could really be a massive boost for educators, so it isn't so scary, so they're not thrown into a whole new purpose of ed, an entirely new system with different, you know, ways of succeeding, without some serious support.

Michael Horn: No, that's super helpful. I like the vision in general. I'm taking from this conversation that whereas it's kind of hard to have these national dialogues, or dialogues even in communities, around purposes, maybe AI is such an abrupt, big shift that it actually brings us to the table to say, what the heck are we doing here? Because every single one of the stakeholders is like, this ain't working. And so let's talk about what we are actually trying to accomplish here. So maybe we'll leave it there, Diane, and shift to the last part. Rebecca, we have this tradition that our listeners enjoy, for better or worse. They keep lists, apparently, of what Diane and I have read or watched. But we want to hear yours. What have you, or what are you, reading, watching, listening to, often outside your day job, but it's okay if it intersects with it.

Rebecca Winthrop: Well, I don't watch much, I must say, except for Shrinking, which I rushed through. Loved it, loved it, loved it. That was the best.

Michael Horn: Incredible.

Rebecca Winthrop: I can't wait for, like, the next season. But I actually don't watch a lot of stuff. But I do love to read. So I have two things here. One is Unwired: Gaining Control over Addictive Technologies by Gaia Bernstein. It's awesome. She's a lawyer at Seton Hall, and it's a really good book; I'm not all the way done. And then the other one is a novel called Dust by Josh Classy that just came out. It's, like, a sci-fi. It's like a new Lord of the Rings.

Michael Horn: Oh, cool.

Rebecca Winthrop: Wow. Wow.

Michael Horn: All right. I like that.

Diane Tavenner: Yeah, I like that too. That's fun. Well, I have one this week. I was telling Michael, you know, he's not the only sort of author fanboy, fan girl. This week I met a woman named Samara Bay, and she has authored a book called Permission to Speak: How to Change What Power Sounds Like, Starting with You. She's fascinating. And I got to have coffee with her last week, and we did, like, a joint book club. We switched books and then got to sit down and talk about them. I know, super, super fun. She's got this incredible journey. She wanted to be an actor. She became a dialect coach. She worked with tons of famous people like Gal Gadot, et cetera, et cetera, and has now turned her passion for helping people toward people who are really trying to drive impact in the world, helping them find their voices in public speaking. Which, you know, here's the inside secret: figuring out how to get out of your own way is really the secret to it. And so it's a beautifully written book. It's also a super practical guide in many ways, and so I highly recommend it. Really enjoying it.

Michael Horn: Awesome. Awesome. Diane.
I realized, like, the podcast recordings are starting to outpace my ability to keep up with the reading and so forth. And like Rebecca, I'm not a huge TV person outside of sports and Shrinking. So yes, there we go. But I'm almost done with a book, Task versus Skills: Squaring the Circle of Work with Artificial Intelligence by Mark Stephen Ramos. He was the Chief Learning Officer at Cornerstone, is no longer there, but has been starting to do some writing and thinking about how AI changes our learning organizations, or organizations where people need to be upskilling and reskilling. So far it has been interesting, deeply technical, and I've kind of enjoyed it. And I'm not at all getting outside of work, so apologies on that, but no apologies for having Rebecca here. This has been fantastic.

Diane Tavenner: Thank you.

Michael Horn: Yeah, thank you so much for joining us. And a thank you again to all of you, our listeners. A reminder to check out Rebecca's book with Jenny Anderson, The Disengaged Teen: Helping Kids Learn Better, Feel Better and Live Better. Check it out, read it, digest it. We'll have more conversations about it, I suspect. And let's all stay curious together. We'll see you next time on Class Disrupted.

OpenAI's Education Leader on AI's ‘Massive Productivity Boost' for Schools, Teachers

Yahoo

12-03-2025



Class Disrupted is an education podcast featuring author Michael Horn and Futre's Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on Apple Podcasts, Google Play or Stitcher.

In this episode of Class Disrupted, Michael and Diane chat with Siya Raj Purohit, who works on education initiatives at OpenAI, about the transformative potential of AI in education. Siya shares her career journey and how it led her to focus on bridging the gap between education and workforce development. Highlighting the immense value of AI tools like ChatGPT, particularly in university settings, she underscores its potential to personalize learning, reduce teacher burnout and enhance classroom interactions. Siya also addresses concerns around AI by emphasizing that, while AI can elevate thinking and productivity, the irreplaceable human element in teaching — such as mentorship and personal inspiration — remains vital. Listen to the episode below. A full transcript follows.

Michael Horn: Hi there, Michael Horn here. What you are about to hear is a conversation that Diane and I recorded with Siya Raj Purohit from OpenAI as part of our series exploring the potential impact of AI on education, from the good to the bad. Now, here are two things that grabbed me about this episode. First, I was struck by how much Siya uses ChatGPT in her daily workflow already. Yes, she works at OpenAI, but it has seemingly revolutionized her life. As she said, it's a massive productivity tool. From using it as a tutor to helping her figure out what projects to prioritize and what to learn, this is just part of how she works now. Second, I was struck by how much she's really on the ground level with universities, particularly professors, helping them figure out how to make it part of their workflow as well for teaching and learning, how deep she is in specific use cases as a result, and how she sees this, frankly, as an important tool to free up teacher time, elevate student thinking, and the like. As the conversation wrapped up, I was also reflecting on a couple things. First, what would it take for ChatGPT to be a massive productivity tool for me personally? And if that's the framing, what does it mean this technology can and can't be used for in education? I was also struck by how OpenAI has decided to go deep on supporting those in college and beyond with their tool, but they haven't yet created their own products or services for students who are under 18. Candidly, that's not something I had really realized or reflected on before this conversation. So I'm excited to reflect a lot more with Diane after we talk to a number of people about this topic. But for now, we'd love to hear your thoughts about this conversation. Please share them with us over social media or through my website. And with that as prelude, I hope you enjoy this conversation on Class Disrupted.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. Good to see you.

Diane Tavenner: I confess I am really excited about today's conversation, because the first two we've had about AI have been super interesting and have been raising some big questions for me around the assumptions that I had coming into these conversations and AI and schools, and in particular how we organize schools themselves around new technologies.
But it's made me even more curious to talk to other people and get other perspectives. So I'm really, really looking forward to talking today.

Michael Horn: As am I, Diane. And I agree that the first two episodes have piqued my attention on different things, and I'm looking forward to digging in on more at some point. But whereas our last episode featured someone who is, I think it's fair to say, largely skeptical about AI, I suspect we will get a very different take today, given our guest actually works on education at OpenAI, the company that of course developed and operates ChatGPT. Her name is Siya Raj Purohit, and she has been focused on supporting ed tech and workforce development in the startup community and at AWS over the past decade, before she more recently joined OpenAI to work specifically on education. We're going to get to hear about all that up front. But first, Siya, welcome.

Michael Horn: It is so good to have you.

Siya Raj Purohit: Thank you so much for having me.

Michael Horn: Yeah, you bet. So before we get into a series of questions starting to dissect AI and its impact, or not, on education, I would just love you to share with the audience a little bit about how you got so deep into AI around the question of education specifically. And maybe you'll also humor me as you do so, because I'm curious about OpenAI's interest in all this. Other than maybe the iPad, I can't think of any other consumer tech product or service launch that has made education such a cornerstone of all of its announcements and sort of promise and potential for the new technology. So maybe you can tell us a little bit both about your journey, but also how OpenAI sees education.

Siya Raj Purohit: Absolutely. So I've spent my career at the intersection of education, technology and workforce development. This all started when I was 18. During college, I published a book about America's job skills gap, talking about how American universities weren't teaching the skills that students needed to land jobs in industry. This stemmed from my own experiences and the fear that I may not be able to land the jobs that I aspired to. And that's something that I think a lot of young adults relate to. But I spent the next 10 years just trying to help bridge that gap. I worked at early-stage startups, venture capital funds, and most recently Amazon, trying to bridge that gap between learning and opportunity, helping make economic mobility more possible for different types of learners.

Siya Raj Purohit: I joined OpenAI about 8 months ago to help build up our education vertical. As you all might remember, in November 2022 ChatGPT launched and suddenly became, like, such a used product around the world. And what was interesting for OpenAI is that learning and teaching was one of the most common use cases for why people were engaging with ChatGPT. So this year we launched a product called ChatGPT Edu, which is designed for universities and school districts to be able to use an enterprise-grade version of ChatGPT. With that, it brings all sorts of different types of benefits. There are all sorts of network effects that can exist on a campus once all students, faculty and staff have licenses.

Siya Raj Purohit: I will share a couple of examples of what that looks like. But a big part of my job is to help education leaders, educators and students start using AI more effectively on different types of campuses.

Michael Horn: Perfect. Perfect.
Go ahead, Diane.

Diane Tavenner: Yeah, I mean, I think, rightfully so, Michael and I are both operating under the assumption that you're probably biased towards seeing AI as something that offers real opportunity to improve and transform education. And clearly your personal pathway and journey is leading you to that impact. And so one of the things we're interested in is having you sort of make the best case for how AI will impact education in a positive way. And we have a lot of things in our minds that we've thought about, but we're really curious to be expanded in our thinking and have you make that very best case for us.

Siya Raj Purohit: So I believe for education as a sector, personalized learning was always the holy grail. We always said that if we achieve that, we have made it, like we have accomplished a lot of education goals with that. And I think that with ChatGPT, it exists. I have a personalized tutor that I talk to every day. It knows my projects, the skills I'm developing, like, my aspirations. And it helps me become a better knowledge worker every day. And I think that in education, it's making high-quality tutoring available to anyone with an Internet connection and supporting educators by automating a lot of the time-consuming jobs that they do, to let them focus on what matters a lot for them, which is, like, mentoring and inspiring students.

Diane Tavenner: That's interesting. Let's stick on that one for a moment, because, and we'll get to this a little bit later, but I wonder, does that mean that the schools don't actually end up changing very much, because the tutor and the sort of automated assistant just allow students and teachers to do things the way that they have been doing them, just better and more efficiently? I'm curious what you think about that.

Siya Raj Purohit: So right now the most interesting examples we're seeing are educators crediting ChatGPT with reducing teacher burnout, which, as you both know, is a big problem in America. Teachers who used to spend so much time doing lesson planning, quiz grading, like, all the preparation for classroom activities, are able to outsource a lot of that work, or kind of use ChatGPT to do a lot of that work. And so then they can focus on those classroom interactions and the engagement between different peers in the classroom, which I think is much more valuable. As far as the classroom dynamics go, I think that it is a big complement in the way that it brings personalized support and tutoring to individuals. But at the same time, I do think that there's still value in students being grouped with others that are of the same age as them, because then you develop a lot of social skills and you learn how to interact more. So I'm not of the mind that people should just do online school and have ChatGPT, because I think that social component is becoming increasingly more important.

Diane Tavenner: Got it. I'm thinking back to your 18-year-old self who wrote a book, which we could spend a lot of time just talking about, but having both written books, we know what it takes. We weren't writing them at age 18, I don't think. And your whole premise there was that, like, I'm not learning the skills that I'm going to need to be successful in the jobs that I want to have or the careers I want to have. How do you see AI and what you're doing with ChatGPT contributing to, you know, making that not true, or improving that? What is the intersection there with your personal sort of passion?
Siya Raj Purohit: The reason I wrote that book, and I felt so passionately about that, and I guess that passion is still so deep in me, is because at first I thought it was a Siya problem. Like, Siya was not able to learn the engineering skills to be able to land the job that she wanted. And then I did enough research, by speaking with some really accomplished individuals, to realize this was actually a system problem. And the book was, like, my attempt to capture the scale of this problem and also prove to myself that this is not just, like, the thing that I'm struggling with. And then I think the next part of that was, like, how can I free other people from the struggle? And that's when this journey to try to make economic mobility more accessible became, like, my life passion. So with ChatGPT, one thing that it does really phenomenally, which I hope students will take advantage of, is it helps elevate our thinking. A lot of times I share my thoughts on a project and I'm like, how can I elevate my thinking? How would a COO of a rocket ship company approach this? And it helps kind of expand my thought process much more.

Siya Raj Purohit: And I think while doing that, it helps us feel less alone in a lot of these things, a lot of the problems we encounter, because we can find the right examples, we can think bigger about this, we can find our own gaps. And I think these things are very powerful.

Diane Tavenner: Yeah. One of the things that's interesting about talking to you that I'm observing is, when we ask other people to make the best-case scenario for AI, it's a little bit detached from them. But what I hear in you is, literally, this is what you're doing. This is how you're working every day. It sounds like you are a true believer. Am I missing anything, or am I hearing that right?

Siya Raj Purohit: I used to work really hard at AWS, but I accomplish about three times more every day at OpenAI just because I have AI now. I use it a lot to uplevel myself, but also to uplevel the project outcomes I provide.

Diane Tavenner: Interesting. Awesome. Well, this next question might be more challenging for you.

Michael Horn: It's a massive productivity tool for you. And I'm interested in your book. There's this common theme, right? You used 'me search', as we would say, not just research, around your book. And then you were doing the same thing with this tool, because you're living it in terms of your massive productivity boost. But I guess I'm curious about the flip side of some of these things, because, you know, there's a lot of skeptics who say, oh, AI might not just fail to have these transformational impacts, but might also undermine certain things. And so I'm sort of curious where you come out on some of this stuff. And I'll just name two, and then you can go wherever you want on it.

Michael Horn: Which is, one, you said in some ways it actually makes you feel like you have a companion alongside of you to elevate your thinking. Some people said that actually could be dangerous, because maybe you'll be in isolation, right, and not feel like you have to connect with others. And then you talked about elevating thinking. And I think that's the other big worry that people have, is that it'll actually do the thinking for you, right, and we won't do the difficult, effortful work to learn how to construct an argument and, you know, critical thinking, and build knowledge so that we can analyze it, and so forth and so on.
Michael Horn: And I'm just sort of curious, like, I kind of want you to steel-man the argument and make the best skeptic's take, but I almost more want you to just start to dig into these different use cases, you've heard the ones that I just named and others, and sort of talk us through how you think about them.

Siya Raj Purohit: Yeah. So let's first talk about the human connection piece. It's really interesting, because a lot of educators come talk to me about their own doubts and concerns about the future of their profession. They're like, will I still, like, be a teacher or educator, given that ChatGPT exists and it's getting so good? And this question honestly surprises me a lot, because the reason that I remember educators that have influenced my journey is because of who they were and how they made me feel and who they told me I could become. Right. These are things that ChatGPT doesn't do, because ChatGPT and AI know about me only what I tell them, right? But great mentors can see things about me that I don't even know about myself. And I think that's a really important distinction. And I think that educators have this really unique opportunity in this era to double down on those things. They got into teaching to mentor and inspire and find these connections.

Siya Raj Purohit: And now they have the opportunity to do more of that, because if they can help increase the potential or vision for more people, that's the true power of education. I'm really excited about that. And I don't think that ChatGPT will replace human relationships. I think it's just gonna become, like, a support system. So, like, how I use ChatGPT on my personal career front is that I tell it the things that I might want to become, like, this is my 5-year goal, this is my 10-year goal. Can you create a really robust roadmap on how I can get there? And it gives me really, like, precise instructions: join these types of organizations, publish this type of content, think about taking on these types of projects at work. It's really detailed.

Siya Raj Purohit: But what it misses out on is, like, when my manager comes in and goes, like, hey, this is your superpower. You should double down on this. You know, like, forget these types of strategic projects. They just hone in on what makes Siya, Siya. Right. And that's what we need more people to do for other people.

Michael Horn: Super interesting. Talk about the other part of this. You mentioned elevating thinking, giving you a personal roadmap. It's amazing. Again, the other fear that I hear a lot of is, people say, well, it's actually going to cause people to not do the effortful work to actually learn, or even get to the questions that you're able to ask of it. How do you think about that concern?

Siya Raj Purohit: I think educators need to show more about what an extraordinary outcome looks like. And we need to just be able to showcase what amazing end products look like in different verticals and different domains. And the reason for that is that if you give a generic input to ChatGPT, you'll get a very generic output, which a lot of students are realizing, because they're just like, okay, I'm going to plug in my homework, get a very generic output, submit that. And that's not what professors are looking for. So I think one of the most creative use cases I've seen is a professor at the Wharton School. He always had an essay as a final submission for his MBA class. And he says, he's like, what is the value of an essay?
The value of an essay is not necessarily in its output, but in the conversational skills and critical thinking skills that go into getting to that output. So now he requires that the students use ChatGPT.

Siya Raj Purohit: He's like, they are going to use it anyway, might as well make it a requirement. And now he measures the number of prompts they use to get to an essay that they're really satisfied with. Some students are so good at prompt engineering that they take, like, two or three prompts and they have a really good essay. And some students go back, like, 18 or 19 times to get to a good essay. And he uses that as a measure of their ability to clearly articulate what they're looking for, which he thinks is a really important skill. So if he can teach students how to communicate those skills, in terms of communicating that output that they want to see, and also be able to visualize some really extraordinary output, then they're going to be able to use AI as just a tool to get there.

Michael Horn: So maybe this is the last question in this section that I have, because, building off that, I think there's almost an implied set of knowledge and awareness, right, that students need to have as a baseline to be able to have those expectations or hopes for outcomes and things of that nature. I'm sort of curious, you also mentioned what the purpose of an essay is. Implicit in all of that is that some of the artifacts that we have used historically to gauge, you know, thinking processes and argumentation, et cetera, might change in the future. Right. The example we've used a few times at this point is one Bror Saxberg, one of our friends, likes to cite: Aristotle worried deeply that the written word would mean people didn't memorize Homeric epic-length poems anymore. And he was absolutely right.

Michael Horn: And I don't think any of us regret that. And so I'm sort of curious, your take on, you know, sort of how we do work, or the artifacts of what we think of as representing learning. How might those change in the future? And maybe some of these concerns won't all be that relevant, because we will show our knowledge and skill development through other means.

Siya Raj Purohit: So I think a lot of, like, basic calculations, basic strategic work, all of that is going to become much less important. I think a lot of listeners can probably relate: their teachers told them they wouldn't always have a calculator around, so they needed to learn basic math early. And now we always do. So the basic elements of strategic thinking, I think, are gonna be less important than they used to be. But the things that are going to be more important are, like, critical thinking, but also emotional reasoning and emotional intelligence, to be able to review these outputs and make sure that they match the type of persona that you're serving. So right now in my current role, I do a lot of, I guess, partnerships and BD work and those kinds of things. And, like, yes, I use AI to create the different types of documents and slides and those kinds of assets that we share. But the way that I communicate them to the end user, to kind of inspire confidence or interest, is, like, the unique ingredient here.

Siya Raj Purohit: And we need to be able to teach that. So as our reasoning models get smarter and do more of that strategic work, that human element helps people distinguish their work and stand out.
Diane Tavenner: Interesting. I'm so curious, because I think you, maybe more than other people, have started to personally see some changes happening in schools because of AI, and how it looks different and how it feels different, and/or I bet you can imagine them a little bit better than a lot of people. And one of the things that I think we suffer from is just imagination in this space, right? Like, we all know what school looks like, and we have a really hard time breaking out and imagining something different. So can you just take us there? Like, what could possibly look different, feel different for a teacher, for a student in a school? What are you seeing? What are you predicting? Siya Raj Purohit: For this one, I'm going to actually focus more on the university setting, because that's where we're seeing the fastest changes happen. Our current thinking around what an AI native university looks like is that every campus will have multiple AI touch points across it that help enhance the student, faculty, and staff experience on campus. So basically the idea is that we're going to take the knowledge of the campus and make it conversational and more accessible to these users. So when students come on campus, they're going to have these orientation GPTs where they can ask questions like, where's the best pizza place in town? Or how do I change my roommate? Or any of these kinds of pre-term questions that they have. Then they're going to come into classrooms where professors will have designed these custom GPTs that have learned from the professor's material and help answer questions. So a professor at HBS, Jeffrey Bussgang, was telling me that most of his class uses custom GPTs between 12am and 3am, when a human tutor is not available. And they can ask questions like, which CEOs handled layoffs well, and get the exact examples to help understand these kinds of concepts. So classroom conversations will become much more in depth because of this. Siya Raj Purohit: But also students will be able to do things like, I have a statistics exam coming up, can you give me some practice quiz questions at the same level as my professor provides, and just be able to go back and forth with classroom content that way. They'll go to career services, where they'll be able to use the university's proprietary data to practice interviewing with a McKinsey partner and McKinsey recruiter, all with AI. So all of these experiences will happen, student clubs, career services, classrooms, and it's going to happen seamlessly for students. So they'll be able to navigate between these very easily as they try to grow as students and professionals. Diane Tavenner: Super helpful. I want to dig a little bit more, and this might be surprising to you, but I actually think a number of people in education, maybe fewer of those who listen to our podcast, have literally never even used ChatGPT yet. They haven't logged into it. So let's spend just a moment helping them picture what it means to have a GPT. Is it on their phone? Is it on a computer? Is it on a kiosk? What does it literally look like if I'm a student when I'm engaging? And what makes it seamless? Siya Raj Purohit: I saw a meme recently which I thought was really funny: in Harry Potter and the Chamber of Secrets, Harry starts writing in this diary and it's Tom Riddle responding at the other side. But I really liked that example, because your first experience of ChatGPT feels similar to that.
You just start writing. It's a blank screen, and you have a conversation, and it converses back with you. And it's actually a very magical feeling, because you're able to have conversations with this super intelligence that exists outside of our brains, which is very powerful. So I think it's really important to be able to first start having this conversation. You can use your mobile app, you can start actually on WhatsApp now, or even call in. Siya Raj Purohit: There's a 1-800-ChatGPT number. So any of these mediums that make sense for you, you can start, and you can ask basic questions. What we see most people do is start with very basic questions and kind of build up as they gain more confidence in the back and forth interactions, and then they're able to do more and more complicated jobs. So how we think about transformation for organizations is that the very first step is at an individual level. So when individuals start writing emails better, they start doing better project planning or activity building. Then it shifts up to the department level. That's when people start collaborating together on different projects. One of the best examples I saw of this is that a school district told me it takes 40 people several weeks to assign which class goes into which room on campus. Siya Raj Purohit: And now ChatGPT can do that in a few minutes. So it's hugely empowering at the department level. And then finally you get to that organization wide level, which is when you'll have so many different AI touch points and make that experience much easier as you navigate different levels of knowledge on campuses. Diane Tavenner: I think the other thing that you're saying that I'm not sure everyone will pick up unless we call it out, so I'm going to ask you to call it out, is that this is not going to be a generic GPT. The intersection with the campus is that you're actually taking the data and the information and the expertise of the campus and, well, you'll tell me the right words, but mixing it with the power of GPT to make it sort of a customized experience. Did I get that right? What does that look like? What's going on there? Siya Raj Purohit: So basically there's ChatGPT, which is accessible to everyone. Everyone will have slightly different experiences as they go through it, but it's basically a knowledge base and a conversational platform. Custom GPTs are specific instances of ChatGPT which are basically trained to do very specific tasks. So a professor can be like, this is my six months of curriculum, these are all the case studies I provide; just reference these when answering all student questions. So now that super intelligence is focused. So it doesn't look at the web, it doesn't research answers, it focuses on the six months of curriculum, goes very deep, and helps students be able to learn from that more effectively. Siya Raj Purohit: And you can use these custom GPT instances for any type of knowledge base. One of my favorite examples of this is that a professor at the University of Maryland told me that they created a custom GPT of themselves. They uploaded about 24, 25 pieces of research work that they've done, and their different pieces of writing, and now they talk to what they call Virtual Dave and get good ideas on what their next research project should be. So it's like having a thought partner which is limited to a finite amount of information that you share, but is super intelligent itself.
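Custom GPTs themselves are configured through the ChatGPT interface rather than in code, but a rough programmatic analogue of what Siya describes, pinning a model to fixed instructions plus course material, can be sketched with the OpenAI Python SDK. In this sketch the file name, instructions, and model are hypothetical placeholders, not OpenAI's actual custom GPT mechanism.

```python
# A rough programmatic analogue of a course custom GPT: fixed instructions
# plus uploaded course material, with answers grounded in that material.
# Assumes the OpenAI Python SDK; file name, instructions, and model are
# hypothetical placeholders rather than OpenAI's custom GPT internals.
from openai import OpenAI

client = OpenAI()

INSTRUCTIONS = (
    "You are a teaching assistant for this course. Answer student questions "
    "using only the curriculum and case studies provided below. If the answer "
    "is not in the material, say so instead of drawing on general knowledge."
)

# e.g., six months of curriculum and case studies exported to plain text
with open("course_material.txt") as f:
    course_material = f.read()

def ask(question: str, model: str = "gpt-4o") -> str:
    """Answer one student question, grounded in the course material."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": INSTRUCTIONS + "\n\n" + course_material},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Which CEOs in our case studies handled layoffs well?"))
```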
Diane Tavenner: Interesting. And let's just stay here for one more quick beat, because you're leading us into what the work maybe looks like for the teacher or the professor. But just get a little bit more concrete. So that professor literally copied and pasted his stuff into GPT? Tell us a little bit about what that looked like. What's his work now? What's he doing? Siya Raj Purohit: Yeah, so it takes about 15 minutes to build a custom GPT. You upload PDFs or documents, so you don't need to copy and paste, and you give it instructions. Again, this is where the assistant piece comes in. You explain to the custom GPT what its job is. So in this case, this professor is like, you are going to be my virtual thought partner as I think about my next research papers, as I think about my next book or my LinkedIn posts. I need you to sound the same as I have in my career so far. So maintain the same tone and professionalism, but help me ideate on what the next iterations of these projects can look like, and give me very honest feedback. Siya Raj Purohit: So those are the instructions he gave, and then the professor just has conversations with it. It's like, could I go in this direction? And the custom GPT is like, no, it's a little bit overdone, why don't we look at this path? And it just becomes a good research assistant for you. Diane Tavenner: Awesome. Michael, here are the jobs to be done at the moment, I think. Michael Horn: Seriously, right. We're going to flag that for coming back to, Diane? Diane Tavenner: For sure. So let's now bring in, I promise we will stop really soon, we're getting to the end here, but I know that at OpenAI you think a lot about, you talk a lot about, you focus a lot on policy, and you're engaging with the policy field and whatnot. What are you learning about the intersection of education policy and policy around AI? Like, what should we be looking at, looking for, watching out for, paying attention to, from your perspective, as educators, as people who are leading schools and school systems and universities? What do you see coming? What's important? What should we be thinking about? Siya Raj Purohit: So right now universities are in a couple of different groups when they're thinking about AI policy. Some have very established guidelines and clarity in terms of where AI plays a role in their student journey. So I think some of the most forward thinking education leaders that I'm working with are like, okay, AI is accessible, the cat is out of the bag, it's going to happen, and now I need to think about how I change my curriculum at the university to be able to use AI and help students prepare for the future. One of the best examples of this is at Harvard Business School; there's a professor named Jake Cook who teaches a digital marketing course, and he's mapped out what a digital marketer's journey looks like now in the profession, and the seven different jobs that a digital marketer does, and where AI enables each of those jobs.
And he's turned all of his projects around that. Siya Raj Purohit: So now you use AI to do competitive research, AI to create marketing assets and images, AI to help you with the copy and website, and all of these kinds of elements of what he thinks the students will graduate into the workforce and need to know. And policies that enable this kind of forward thinking nature are really helpful for students, because then they go into the enterprise and have ChatGPT Enterprise and actually are able to use it effectively. And then there are other institutions that I think are still trying to figure it out. They're concerned about how it might change their former assignments, how they can't use the same kind of syllabus they might have used in past years. And a big part of our job right now is to showcase these examples of the forward thinking institutions and help these other universities kind of grow their own thought process. At the end of the day, universities are the ones best suited to make these decisions for their students, because they understand them the best. And it's so interesting, because when you speak with a state school, you realize they care a lot about navigation of tools and being able to help students find the right information on a campus of 50-60,000 students, whereas small liberal arts schools are just like, how can I help the student be able to voice their opinion more effectively? And all of these things have AI solutions. But it's universities that need to figure out what they want to become and how AI can help with that. Diane Tavenner: Interesting. I could ask 27 more questions, but I'm going to ask Michael to rein me in and either wrap up with something or. Michael Horn: No, I think this is super helpful, Siya. I guess my last question is, you're clearly spending a lot of time with colleges and universities. Are there others on the OpenAI team? Are you spending similar amounts of time with K12 institutions, or how do you think that's going to evolve over time? Because clearly it seems like the colleges and universities, not all, as you just said, but many of them, are wrestling with this yesterday. Are you seeing similar movement among K12 schools and districts or not? In which case, that also tells us something. Siya Raj Purohit: We have a growing number of K12 customers. But the big caveat is we don't have an under 18 product right now. So it's not for students; it's for teachers and staff members in K12. Michael Horn: Gotcha. Okay, super helpful. All right, well, let's maybe wrap up there. Something we love to do, Siya, before we let our guests go, is to wonder what else you're reading or watching or listening to outside of your day jobs. And so maybe ChatGPT has recommended you reading lists or watching lists. But I'm just sort of curious about one thing outside work that maybe you could point us to. Siya Raj Purohit: It's funny you say that, because I've actually been asking ChatGPT a lot for book recommendations, because I think it's very magical when you find the right book at the right stage of your life. And I want to see if ChatGPT can help make that happen more often. It's mixed results so far. Michael Horn: Okay. Siya Raj Purohit: One book that I'm reading right now which is super fascinating is called Say It Well. It's written by one of President Obama's former speechwriters, and he intertwines how to be a good public speaker with stories from President Obama.
And it's just super fascinating to read about the things that President Obama slipped on in different talks, which make him much more human and accessible, but also the ways that he thought about giving great speeches and connecting with audiences around the world. So I'm finding the book really interesting so far. Michael Horn: Very cool. What about you, Diane? Diane Tavenner: Awesome, thanks for sharing. Okay. Well, I am going to turn to TV, because we've been talking so often, I've exhausted all the books I'm reading right now. And I'm a little slow on this one, about a year behind. But we just watched the series on FX, Shogun, and I must say, I was a little skeptical going in. I was a young kid when the book came out and then the miniseries on TV, and I was like, there's no possible way this could be done well or without some real issues. Diane Tavenner: And you all may know it's won 18 Emmy awards, the most ever for a single season. It's truly extraordinary and really thought provoking. Yeah. Highly recommend. Michael Horn: So I was gonna say, you could imagine it winning awards, but someone who'd read the books being like, it still didn't quite deliver. But it delivered for you, it sounds like. Diane Tavenner: Well, I never read the books or watched the original series. Michael Horn: Okay. Okay. So. Diane Tavenner: But I just had this image in my head. And as I understand it, the current version is very different from the old ones, but it's great. Michael Horn: Very cool. It's been teasing me for a while, so that is a good endorsement. For mine, I guess I want to say the NFL football playoffs or the Australian Open, but I feel like that gives away when we're recording. But too late, I've given it away. So I'll give you one other. I've actually really been enjoying, or I enjoyed, because I finished it in a day, a book recommendation that one of my daughters gave me, or she actually ordered me to read it. Michael Horn: She had finished it. It's called The Girl with the Secret Name by Yael Zoldon. And I'll apologize if I've mispronounced her name. But it's historical fiction, takes place during the Spanish Inquisition, and it was fascinating. It was a history that I knew at a high level, but not with any depth at all, I will say, like literally zero. And so my daughter was teaching me quite a bit. It was fun. So that's mine. Diane Tavenner: I love when that happens. Michael Horn: Yeah, I know you've had that experience with Rhett giving you many recommendations. So now maybe this is the first of many for me. But let's wrap up there. Siya, a huge thank you for joining us, for shedding light on this topic, and for sharing frankly how you are using it in your daily life, both on your learning journey and in your work itself on a day to day basis. So really appreciate it, and we hope you'll keep staying in touch so we can stay ahead of the curve as well alongside you. But huge thank you. And for all of you tuning in, we will see you next time on Class Disrupted.

Class Disrupted Podcast: Ben Riley on Why AI Doesn't Think Like Us

is an education podcast featuring author Michael Horn and Futre's Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on Apple Podcasts, Google Play or Stitcher. Techno-optimists have high hopes for how AI will improve learning. But what's the merit of the 'bull case', and what are the technology's risks? To think through those questions, Michael and Diane sit down with Ben Riley of Cognitive Resonance, a 'think and do' tank dedicated to improving decisions using cognitive science. They evaluate the cases made for AI, unpack its potential hazards, and discuss how schools can prepare for it. Listen to the episode below. A full transcript follows. Diane Tavenner: Hi there, I'm Diane, and what you're about to hear is a conversation Michael and I recorded with our guest, Ben Riley. It's part of our series exploring the potential impact of AI in education, where we're interviewing optimists and skeptics. Here are two things from the episode that I keep thinking about: First, our conversations are starting to make me wonder if AI is going to disrupt the model of education we've had for so long, as I think Ben perhaps fears, or if it's actually going to strengthen and reinforce our existing models of the schoolhouse, with classrooms filled with a teacher and students. The second thing that struck me was that Ben's one case for what could be beneficial about AI is something that's directly related to his work and interest in understanding the brain, and kind of how learning occurs. To be fair, there's a theme emerging across all the conversations we're having, where people see value in the thing that they value themselves. And perhaps that's an artifact of the early stages, and who knows, but it's making me curious. And speaking of curious, a reflection I'm having after talking with Ben is about the process of change. Ben is a really well reasoned, thoughtful skeptic of AI's utility in education. He comes to his views at least partially from using AI. I would consider myself much more of an optimist, and yet I'm finding myself a little bit annoyed right now that every time I want to write an email or join a meeting or send a text or make a phone call, I've got AI pretty intrusively jumping in to try to help me. And it's really got me thinking about the very human process of change, which is one of the many reasons why I'm really looking forward to sense making conversations with Michael after all of these thought provoking interviews. In the interim, we'd both love to hear your thoughts and reflections. So please do share. But for now, I hope you enjoy this conversation on Class Disrupted. Michael Horn: Hey, Diane. It is good to see you again. Diane Tavenner: You too. And I'm really excited to be back. Coming off of our last conversation around AI and education, it's making me even more excited about what we're going to be learning in this series. And I think today will be no exception in really stretching our minds and our thinking about the possibilities, the limitations, the potential harms of AI and its intersection with education. Michael Horn: Yeah, I think that's right, Diane.
And to help us think through these questions today, we're bringing someone on the show that I think both of us have known for quite a long time. His name is Ben Riley. He previously founded Deans for Impact in, I believe, 2014. And Deans for Impact is a nonprofit that connects cognitive science to teacher training. Ben stepped aside a couple years ago and has most recently founded Cognitive Resonance, which is a think and do tank, in its words, and a consultancy organization whose focus is on this topic of AI and learning, which is perfect and makes Ben the perfect guest for us today. So, Ben, welcome. Ben Riley: Thanks so much for having me. We'll see if you still think I'm the perfect guest by the end of it, but I appreciate being invited to speak to both of you. Michael Horn: Absolutely. Well, before we get into a series of questions that we've been asking our guests, we'd love you to share with the audience how you got into AI so deep, specifically because, I will confess, and I'll give folks background, I've actually been an editor on a couple of the things that you've submitted to Education Next on AI, and I found them super intriguing. And then somehow I had no idea that you created this entire life for yourself around AI and education. And you have some language on this that I think is really interesting on the site, where you say the purpose is to influence how people think about Gen AI systems by actually using the lens of cognitive science. And you believe that will help make AI more intelligible, less mysterious, which will actually help influence what people do with it in the years to come. And then you write that you see it as a useful tool, but one with strengths and limitations that are predictable. And so we really have to understand those if we want to harness them, in essence. So how and why did you make this your focus? Ben Riley: Yeah. Well, thank you for clearly having read the website, or the Substack, Build Cognitive Resonance. In many ways, the organization reflects my own personal journey, because several years ago I started to become aware that something was happening in the world of AI, and at the time it was called deep learning; that was the phrase that was starting to emerge. And to be completely candid, my focus has always been, and in some ways still very much is, on how human cognition works. And AI, artificial intelligence, is considered kind of one of the disciplines within cognitive science, along with psychology and neuroscience and linguistics and philosophy. It's an interdisciplinary field. And for me, quite honestly, AI was sort of like this thing happening somewhere over there that I had maybe a loose eye on. And I got in touch with someone named Gary Marcus at the time, and we'll come back to Gary in a second, and just said, hey, Gary, can you explain deep learning to me and what it is and what's going on? And that, you know, sort of began that conversation. And then quite frankly, I just kind of squirreled away and didn't think much about it. And then, like it did for all of us, ChatGPT came into our lives. And I was stunned. I was completely stunned when I first sat down with it and started using it. And what really irked me was that I didn't understand it. You know, I was like, I don't get how this is doing what it's doing. So I am now going to try to figure out how it's doing what it's doing. And that is not easy. At least it wasn't easy for me.
I don't think it's easy even now, even for those who might have spent their entire lives on this, much less those of us who are coming in late in the game or just trying to make sense of this new technology in our lives. And what I was able to draw upon was both sort of the things that I do know and have learned over the last decade plus around human cognition, and, frankly, a lot of relationships I have with people who are in cognitive science broadly, and just start having a bunch of conversations, doing a bunch of reading, and really trying to, you know, build a mental model of what's taking place with these tools and with large language models specifically. And when I finished all that, I thought, well, geez, that took a lot of work. Maybe it would be helpful to sort of try to pass this along and bring others into the conversation. So that's really the thesis of Cognitive Resonance. Diane Tavenner: Ben, everything you just described is just so consistent with my experience with you over the years and the conversations that we've had and what my perception is of what you care about. And I'm so glad you brought it together in that way, because I'll be honest, when I was like, wait, Ben is doing AI? Like, that didn't totally land with me. And so what I'm hearing from you is, well, I'm super curious for this conversation, because I'm not getting the vibe that you're a total AI skeptic. I'm not getting the vibe that you're a total cheerleader. I'm guessing we're gonna have a really nuanced conversation here about this right now. So let's start there. Like, let's start with kind of those poles and then see where we go. Can you make the argument for us of how AI is going to positively impact education? And I'm not saying it has to be your argument, but can you just stand up an argument for us based on what you've learned about how it could? Like, what's the best case to be made for AI positively impacting it? Ben Riley: Yeah. So this is what people are now calling steel manning, right? Like, can you steel man the argument that you may not agree with? I had a law school professor who taught me that the best way to write a good legal brief is to take the other side's best argument, make it even better than they can make it, and then defeat it. And you all gave me this question in advance, and I've been thinking about it since you did, and I don't know if I can make one best case. What I want to do is make three cases, which I think are the positive bull cases. So number one, one that I think should be familiar to both of you because we've been having this debate for nearly a decade, is sort of personalized learning, a dream deferred, but now it can be real. When we said we were going to use big data analytics and use that to figure out how to teach kids exactly what they want to know, when they need to know it, what we meant was we needed large language models that could do that. And now, lo and behold, we have that tool. And as Dan Meyer likes to joke, it can harness the power of a thousand suns. It's got all of the knowledge that's ever been put into some sort of data form that can be scraped from the Internet or from other sources, and it's not always disclosed what those sources are, but nonetheless, there's a lot of data going into them, using these somewhat mysterious processes that they have of autoregression and back propagation.
And we can go as deep as you want in the weeds on some of those terms, but doing that, we can actually finally give kids an incredibly intelligent, incredibly patient, incredibly, some would even say loving, some have said that, tutor. And we can do that at scale, we can probably do it cheaply, and boom, Benjamin Bloom's dream, two sigma gains. It's happening finally. There we go. All right, so that's argument number one. Call that the personalized maximization argument. Argument number two, I think, is the sort of AI as a fundamental utility argument. And the argument here is something along the lines of, look, this is a big deal technologically, in the same way the Internet or a computer is a big deal technologically. And it's one of those technologies that's going to become ubiquitous in our society, the same way the computer or the Internet has become ubiquitous in our society. And we don't even know all the many ways in which it's going to be woven into the fabric of our existence. But that includes our education system. And so some benefits will accrue as a result of its many powers. Okay, so that's the utility argument. The third argument would say something like this. It would say the process of education fundamentally is the process of trying to change mental states in kids. And I mean, frankly, it doesn't have to be kids, but we'll just talk about it from teachers to students. Michael Horn: Sure. Ben Riley: And there's some really big challenges with that, when you just distill it down to the act of trying to make a kid think about something. One of the challenges is that we cannot see inside their head. So the process of what's taking place, cognition or not, is opaque to us, number one. And number two, experiments are really, really hard. They're not impossible. But you can't really do the sort of experiments that you can do in other realms of life. That's partly for ethical reasons, but also, frankly, for scientific, technical reasons. Because again, we can't see what's happening in the head. So even when you run an experiment, you're getting approximations of what's happening inside the head. Some would then say, well, now we have something that is kind of like a mind, and we can kind of, emphasis on kind of, see inside it. And we definitely can run experiments on it in a way that doesn't implicate the same ethical concerns and others. That argument, and I'll call that the cognitive argument, human and artificial, would say we can use this tool to better help us understand ourselves. In some ways it might help us by being similar to what's happening with us, but in other ways it might help us by being different and showing those differences. So those are the three arguments that I see. Diane Tavenner: Yeah. Super interesting. Thank you for making those cases. Which, if any, of them do you actually believe? Now I'm curious about your opinion and why. Ben Riley: Yeah. So I have bad news for you. The first one, the personalized maximization dream, is going to fail for the same reason that, I would like to say, I predicted that personalization using big data analytics would fail. We could spend the entire podcast with me unpacking why that is. I'm not going to do that. So I'm going to limit it to just two arguments. Okay. The first would be that these tools fundamentally lack a theory of mind. Okay. So that's a term that cognitive scientists will use for the capacity that we humans have to imagine the mental states of another.
And these tools can't do that. There's some dispute in the literature, and researchers will say, well, if you run these sorts of tests, maybe they're kind of capable of it. I'm not buying it. I don't think it's true. And there's plenty of evidence on the other side as well saying that they just don't have that capacity. Fundamentally, what they're doing is making predictions about what text to produce. They're not imagining a mental state of the user who's inputting things into it. Number two, I would say, is that it obviously misses out on a huge part of the cultural aspect of why we have education institutions and the relationships that we form. And I think that the claim that students are going to want to engage with and learn from digitized tutors, the likes of which Khan Academy and others are putting out, is woefully misguided and runs counter to literally thousands, if not hundreds of thousands, of years of human history. Okay, so number one, doomed. Number two is to me kind of a, so what? Right? So I use the example of computers and the Internet as ubiquitous technologies that AI might join. So, like, let's say that's true. Let's say that comes to pass. So what? Like, we have the Internet now, we have computers now. We've had both of these things for decades. They have not, I would argue, radically transformed education outcomes. The ways in which technologies like this become sort of utilities in our lives transform our day to day existence. But just because a technology is useful or relevant in some way or form does not mean, emphasis, does not mean, that it is somehow useful for education purposes and for improving cognitive ability. So absent a theory as to in what ways these tools are going to do that, whether or not they become, you know, ubiquitous background technologies is kind of a so what for me. Number three, the cognitive argument, that this tool could be a useful example and non example of human cognition, I have a great deal of sympathy for. I am very curious about it. There's a lot that has changed just within linguistics, I would say, in the last several years in terms of how we conceptualize what it is these tools are doing and what that says about how we think and deploy language for our own purposes. We may have just scratched the surface with that. The new models that are getting released that are now quote unquote reasoning models have a lot of similarities in their functionality to things in cognitive science like worked examples and why those are useful in helping people learn. A worked example being something that sort of lays the steps out for a student as to, here, think about this, then think about this, then think about this. Well, it turns out if you tell a large language model, do this, then do this, then do this, or just sort of program it to do that, their capabilities improve. So, you know, without sounding too much like I'm high on my own supply, this is the Cognitive Resonance enterprise. It's sort of to say, okay, let's put this in front of us, and instead of focusing so much on using it as a means to an end, let's study it as an end unto itself, as an artificial mind, quote unquote, and see what we can learn from that. Michael Horn: Super interesting, Ben, on that one. And I'm just thinking about an article I read literally this morning about where it falls short of mimicking, you know, the true neural networks, if you will, in our brain. So I'm pondering on that one now.
I guess, before we go to the outright skeptic take, if you will, I'm sort of curious about other things that you think AI won't help with, in your view, beyond what you just listed in terms of, you know, this broad notion of personalizing learning, or AI as utility, if you will, and the so what question. Like, are there other things that people are making claims around where they think AI is really going to advance the ball, and you're like, I just don't see that as a useful application for it? Ben Riley: Well, you know, we launched into this conversation and we didn't define what we're talking about when we talk about AI. Right? Michael Horn: Sure. There's different streams of it. Yep. Ben Riley: Yeah. And I think that when I'm talking about AI, and at least have been talking about it in this context thus far, I'm talking about generative AI, mostly large language models, but it includes any sort of version of generative AI that is, in essence, pulling a large amount of data together and then trying to make predictions based on that, using sort of an autoregressive process, or diffusion in the case of imagery, but trying to essentially aggregate what's out there and, as a result of that aggregation, produce something that relates to it. If you're talking about beyond that, like, who knows? I mean, there's just so many different varied use cases. I was mentioning this off air, but I'll say it now on air: there's a great book, AI Snake Oil, written by a couple of academics at Princeton, which talks about predictive AI, which they put in a separate category from generative AI, and they're very skeptical about any of those uses. My fundamental thing concerns the big claim, right? And unbelievably, Sam Altman, the CEO of OpenAI, just a few days ago declared that, like, we've already figured out how to create artificial general intelligence. In fact, that's like a solved problem. Now we're on to superintelligence. I think people should be very, very skeptical of that claim. And there's a lot of reasons why I would say that, which again, could eat up the entire podcast. But I'll just give you one. What we now know is true, I think, from a scientific perspective about human thought is that it exists, and it does not depend on language. Language is a tool that we use to communicate our thoughts. So if that's true, and I would argue in humans it is almost unassailably true, and I can give you the evidence for why we know that, then it would be very strange if we could recreate all of the intelligence that humans possess simply by creating something like a large language model and using all of the power of all the Nvidia chips to harness what's in that knowledge. Now, what people will say, and frankly, this is what all the billions and the leading thinkers on this are trying to do, is, okay, well, we can only go so far with language. How about we try to do it for other cognitive capacities? Can we do that? Can we create neurosymbolic, as it's called, AI that is as powerful as generative AI with large language models, and sort of start to piece this together in the same way that we may piece together various cognitive capacities in our own brain, and then loop that together and call it intelligence? To which I say, well, good luck. I mean, honestly, good luck.
But there's no reason to think that just because we've done it with large language models, we're going to have the same sort of breakthroughs in the other spaces. So I don't know if this fundamentally answers your question, Michael, but I would say that you can have progress in this one dimension, and it can actually be quite fascinating and interesting. But I would urge people to slow down in thinking that it just means that, you know, all of science and humanity and these huge questions around whether we will ever be able to fully emulate the human mind have suddenly been solved. Diane Tavenner: Yeah. Wow. So fascinating. I have so many things coming to me right now, including my long journey and experience with people who make extraordinary, you know, claims and then kind of make the work a little bit challenging for the rest of us who are actually doing it behind them. But let's turn now, we're kind of steering in that direction, but let's go all the way in on the skeptical take. And so I feel confident you've got some good material here for us. Like, what is AI going to hurt, specifically in education? Let's start there. And how's it going to do harm? Ben Riley: Yeah, well, I don't think we should use the hypothetical or the future. Let's talk about what it's harming right now. So, I mean, the big danger right now is that it's a tool of cognitive automation. Right? So what it does is fundamentally offer you an off ramp to doing the sort of effortful thinking that we typically want students doing in order to build the knowledge that they will have in their head that they can then use in the rest of their life. And this is so fundamentally misunderstood. It was misunderstood when Google was starting to become a thing and the Internet was becoming a thing. You would hear well meaning people in education say, well, why do we need to teach it if you can Google it? Right? That was a thing that many people said, put up on slides. I used to stop and listen. And look, it makes sense if you don't spend any time with cognitive science and you don't spend any time thinking about how we think. And so I don't want to throw those people too far under the bus, but just a little, because now we know. We know this. Like, this is scientific, as established as anything else is established. Our ability to understand new ideas in the world comes from the existing knowledge that we have in our head. That is the bedrock principle of cognitive science, as I like to describe it. So suddenly we have this tool that says, to the extent you need to express whether or not you have done this thinking, let me do that for you. You know, like, this exists in order to solve for that problem. And guess what? It is very much solving for that problem. Like, I think the most stunning fact that I have heard in the last year is that OpenAI says that the majority of its users are students. Okay, the majority. Now, I don't know what the numerator and denominator is for that, and I'm talking to some folks trying to figure that out, but they have said that. At the OpenAI education conference, Lea Crusey, who some of you may know, who was over at Coursera, got up and said, and I think they meant this as something they were happy about, that their usage in the Philippines jumped 90% when the school year started. What are those kids using it for? Yeah, you know, what are those kids using it for?
Like, I think we need to stop pretending that this isn't a real issue. And for me, people sort of go, well, it's plagiarism, you could always plagiarize. And it's like, not exactly. And I think it actually sort of both overstates and understates the case to talk about it in the context of plagiarism. Because again, the real issue here is that we will lose sight of what the education process is really about. And we already have, I think, too many students and too much of the system oriented around get the right answer, produce the output. And I think teachers make this mistake, unfortunately, too often. I think a lot of folks in the system make this mistake of, we just want to see the outcome, and we are not thinking about the process, because that's really what matters, building that knowledge over time. And you've got now, I mean, I literally sometimes lose sleep over this, you've got a generation of students whose first experience of school was profoundly messed up because of the pandemic. And then right on top of that, we have now introduced this tool that can be used as a way of offloading effortful thinking. And I don't think we have any idea what the consequences are going to be for that cohort of students and the potentially dramatic deficiencies in the quality of education that they will have been provided. That's one big harm. There's another. I mean, there's many others, but there's another that I'll highlight here, too. I don't know if either of you watched, I imagine you did, the introduction of ChatGPT's multimodal system last year, which included the Khan family; Sal Khan and his son Imran were on there. I thought it was fascinating, and it speaks again to the amount of users who are students, that OpenAI chose Sal and his son to debut that major product. If you watch that video closely, and you should, you'll see something, I think, that is worth paying attention to, which is that at multiple points, they interrupt the multimodal tutor that they're talking to. And why not, right? It's not a life form. It doesn't have feelings. And we know that; it's a robot. You know, to a degree. I don't think we've really grappled with the implications of introducing something human-like into an education system, with students who are still learning how to interact with other humans, that's another part of education, and saying, you know what, it's okay to behave basically however you want with this tool, right? Like, the norms and the ways in which schools inculcate values and inculcate how it is we relate to one another could be profoundly affected in ways that we haven't even begun to imagine, except in the realm of science fiction. And I think it's worth looking at science fiction and pointing to how we tell these stories. I don't know if either of you watched HBO's Westworld, particularly the first season before the show went off the rails. Diane Tavenner: Season one was a little intense, too. Ben Riley: Season one was intense, but it was good. I thought it was good. But it was haunting. And one of the things that was haunting about it, for those who haven't watched the show, is that it's filled with cyborgs who are quasi sentient, but people come to this amusement park, and it's like the old west, and what can you do? You can kill them. You can kill them, and people do that, or worse.
Diane Tavenner: Right, yeah. Well, talk about the other bad thing. Ben Riley: Right, right. I mean, you know, it's sort of the fact that we can now imagine that sort of future, where you could have things that are like humans, but not. The philosopher Daniel Dennett, who passed away, talked about the profound dangers of counterfeiting humanity. And I think that's the sort of concern that is almost not even being discussed at any real level as we start to see this tool infect the education system. Michael Horn: I suspect that's going to be something we visit a few times in this series. But you've just done a couple things there. One, you've, I think, more articulately explained a lot of the bad behavior we've seen on social media, and how that actually could get exacerbated, not through deep fakes per se, but in terms of actually how we relate to one another. But you also answered another one of my questions that I've had, which is, I can't remember a consumer technology where education has been the featured use case in almost every single demo repeatedly. And you may have just answered that as well. I'm curious about a different question, because I know you and Bror Saxberg have had sort of a back and forth about whether certain things that maybe it's harming are going to be less relevant in the future. And he loves to cite the Aristotle story. Right. About how we're not going to be memorizing Homeric length poems anymore, and maybe that's okay, because it freed up working memory for other things. I'm sort of curious to get your reflection on that conversation at the moment, because I think Diane and I would strongly agree that replacing effortful thinking, thinking that you can just, you know, have people not grapple with knowledge and build mental models and things like that, is going to have a clearly detrimental impact. Are there things where you say, actually, it's going to hurt this, but that may be less relevant because of how we accomplish work or something like that in the future? I don't know your take on that. Ben Riley: Yeah, I don't think you'll like my answer, but I'm going to give you my honest answer. Michael Horn: I don't know that I have an opinion. Like, I'm just curious. Ben Riley: Yeah, I mean, I'm not a futurist, and I've made very few predictions ever in my life, at least professionally. One of the few that I did was that I thought personalized learning was a bad idea in education. And I'd be curious, whether in this conversation or another, whether you two, reflecting back on that, would go, actually, you know, knowing what we know now, there were reasons to be skeptical of it. And, I'm annoyed at the turn he seems to have taken, because I used to like to quote Jeff Bezos. So with all the caveats around, you know, Jeff Bezos and anybody right now from big tech, he has said something that I think is relevant. He's asked all the time, you know, what's going to change in the future and how to prepare for that. And he says that's the wrong question. He says, you know, the thing that you should plan around is what's not going to change. He's like, when I started Amazon, I knew that people wanted stuff, they wanted variety, they wanted it cheap and they wanted it fast. And he's like, that, as far as I could tell, wasn't going to change. Like, people weren't going to say, I want to spend more or have it take longer to get to me.
And he said, once you have the things that won't change, build around those. So I said it earlier, I'll say it again. The thing that's not going to change is fundamentally our cognitive architecture, which is the product of certainly hundreds of thousands, if not millions, of years of biological evolutionary processes. It is further, I think, the product of thousands of years, tens of thousands of years, of cultural evolution. We now have digital technologies that can affect that culture. So it does not mean, and I am not contending, that our cognitive architecture is some sort of immutable thing, far from it. But on the other hand, it would suggest that what we should do is, A, not plan around changes that we can't possibly imagine, but B, maybe more importantly, and I would say this to both of you, not try to push for that future. You know, that we should fundamentally be small c, very small c, conservative about these things, because we don't know. You know, I don't know how much time the cognitive transitions back in Socrates and Aristotle's time actually took, but they took place. My strong hunch is they were not so much the product of any deliberate choice as of a sort of social conversation about the ways in which we should talk to one another. And it was clearly the case that writing things down proved to be valuable in many dimensions. It may prove to be the case that having this tool proves very valuable in many dimensions. But let time and experience sort that out, rather than trying to predict it. Diane Tavenner: Super helpful. I love where you're taking us, which is into actual schools. So I appreciate that you're like, let's talk about what's actually happening right now. And, you know, that is where my heart and work always is, in real schools. And so given what you're articulating about what's actually happening right now in schools, what do schools need to do to mitigate those challenges, to recognize this as a reality that is coming our way that maybe can't be put back in the box? Now, I'm going to say that with a caveat, because I'm reading in the last day or two that people are declaring, you know, that they've won the cell phone war and cell phones are going to be out of schools here pretty soon. So maybe you actually believe it's possible to kind of put it back in the box in schools. But, like, what's the impact on schools, and what do they do literally right now, given what you're saying is actually happening already? Ben Riley: Yeah. Great questions, all of them. So, I mean, thank you for bringing up the cell phone example, because I cite that often, even before there was this sort of wave now, at the international level, national level, state by state, district by district, to suddenly go, these tools of distraction aren't great for the experience of going to school and having you concentrate on, hopefully, what the teacher is trying to impart through the act of teaching. So we can, it's not easy, but we can take control of this. Nothing is inevitable. So, you know, people always say, well, you can't put it back in the box. You know, AI will exist, but how do we behave towards it? What ethics and norms do we try to impart around it? These are all choices we get to make. I like a phrase, and I'm borrowing this from someone named Josh Brake, who's a professor at Harvey Mudd.
He has a wonderful Substack called, I think it's just, The Absent-Minded Professor. But he writes a lot about AI in education. And his phrase is just: you have to engage with it, but that doesn't mean integrate. Right? So, Diane, you kept saying schools. I just think it's teachers, educators, who need to engage with it. That can still mean that the answer after you engage with it is no, not for me, and also no, not for my students. I think that's a perfectly acceptable thing to say. And look, maybe the students won't follow it, but then you've done what you can, right? And that is all you can do. There's a teacher out there who I'm desperately trying to get in touch with, but she made waves. Her name is Chanea Bond. She teaches here in Texas. She made waves on Twitter a while back by saying, look, I've just banned it for my kids, because it's not good for their thinking. People were like, what? And she was like, yeah, no, it's not good. Like, it's interfering with their thinking. So I've banned it. So that's a perfectly reasonable answer. I also think that once you start to understand it at a basic level, and I'm not talking about getting a PhD in back propagation and artificial neural networks, but just starting to understand it, you'll start to understand why it's actually quite untrustworthy and fallible, and that, you know, if you just think that everything it's telling you is going to be accurate, you have another think coming. And one of the things in the workshops that I've led that I've been very satisfied by is when people come out on the other side of them, they're like, yeah, okay, so this thing isn't reasoning, and it's not this all knowing oracle. And once you have that knowledge, once you've demystified it a bit, I think it gets a lot easier to grapple with it and make your own choices and your own decisions about how you want to do it. I will say that right now, in the education discourse, things are way out of balance between sort of the hype and enthusiasm versus the sort of, hey, pump the brakes, or at least, have you thought about this? If you'll forgive me, it's a free resource, but we've put out a document called the Education Hazards of Generative AI, which literally just tries to, in very bite size and hopefully accessible form, say, here are all the things you really need to think about, and here are some cautionary notes, across a number of dimensions, whether you're using it for tutoring or material creation or for feedback on student work. Like, there's a lot of things that you need to be thinking about and aware of. One of the things that frustrates me is that I see a lot of enthusiasts, and this ranges from nonprofits to the companies that make these tools, sort of saying, well, teachers, fundamentally, it all falls to you. Like, if this thing is not factual or it hallucinates, it's your job to fact check it. And it's like, well, come on. A, that's never going to happen, and B, that's not fair, you know, not fair to put that on educators and just kind of wipe your hands clean. So I do think that's something that we're still going to have to sort through at a societal level, as well as within schools, as well as at the individual teacher level, and ultimately students are going to have to bear some agency themselves about what choices they make around whether and how to use it at all.
Diane Tavenner: I'm so appreciative of this idea of agency here. And I do think that that's certainly a place that I've always been, and it is core to my values and beliefs as an educator: the importance of agency, not only for educators, but for young people themselves. And so, I love that this is such a rich conversation. We could go on and on and on, but I feel like maybe we leave it there. Like, really: real people, real teachers, real students, real agency. So grateful for everything that you brought up; so much to think about. And we're gonna pester you for one last thought, which is, Michael and I have this ritual of, at the end of every episode, sharing what we've been reading, watching, listening to. We try to push ourselves to do it outside of our day jobs, and sometimes we seep back into the work because it's so compelling. And so we want to invite you, if you have thoughts for us, to share them. Ben Riley: So I told you I had a weird one for you here. So I was just in New Orleans, and when I was in high school, for reasons that I won't go into in detail here, my family got really into the Kennedy assassination, and the movie JFK by Oliver Stone came out. And I don't know whether either of you have watched that film in a long time. It's an incredible movie. It's also filled with lies and untruths, much like a large language model. Michael Horn: I think we watched it in high school, but keep talking. Ben Riley: Yeah. Yeah. Well, the reason I bring it up is because Lee Harvey Oswald lived in New Orleans in the summer of 1963. And that movie is based on the case that was brought by the New Orleans District Attorney, a guy named Jim Garrison. But there's a bunch of real life people who are in that movie or portrayed in that movie. And I just started to think about accidents of history, where all of a sudden you could be, you know, just a person of relative obscurity as far as anyone broadly paying attention to your life, and all of a sudden something happens, and now you become sort of this focus of study. And trust me when I tell you that every single person who had any connection with Lee Harvey Oswald in his life has become this object of study to people, and books have been written. And so, this is very bizarre, I know, but what I'm trying to do is think about and understand what it is like for people in that situation. Like, what it is like to suddenly have your story told and not have control of it anymore. And, you know, this isn't supposed to be work related, but in a way I think it does connect back up, because it goes back to the fact that these tools are taking a lot of human created knowledge and sort of reappropriating it for their own ends, right? And we haven't touched on that. I don't think we need to now. But there are a lot of artists who feel a profound sense of loss because of what's happening in our society today. That's another thing I think worth thinking about. Diane Tavenner: Wow, you're right. I didn't see that one coming. But it's fascinating. Thank you for sharing it. I am unfortunately not going to stray from work today. I can't help myself. Three of my very good friends have recently released a book called Extraordinary Learning for All. That's Aylon Samouha, Jeff Wetzler, and Jenee Henry Wood. And it's really the story of how they work closely with communities on the design of their schools, in a really profound and inclusive way.
And so I'm deep in that. I've been involved in that work for a long time, and I think it's just a really powerful kind of inspiration slash how-to guide for how communities can really take agency over their schools and own them and figure out what they want and what matters and what they need, and how they design accordingly. Michael Horn: So I was gonna say, now Jeff has appeared twice in a row in our book recs, I think, on episodes or something like that. So love that. Diane, I'll wrap up with saying I'm gonna go completely outside of, I think, the conversation today, but, Ben, you may say it actually relates as well, because I've been binging on season two of Shrinking. I loved season one, and season two, with the exception of a couple episodes in the middle, has been no exception, I think. So I'm really, really enjoying that so far. And I suppose you could connect that back to. Ben Riley: What is Shrinking? I don't know what it is. Michael Horn: Okay, it's basically about three therapists in a practice, and one who's grappling with a deep personal tragedy. And Harrison Ford is outrageously hilarious. Yeah. Diane Tavenner: So good. It's so good. Okay, well, I'm gonna tag on to your, you know, out of work one and say yes, we love Shrinking as well. Michael Horn: Perfect. Perfect. All right, well, we'll leave it there. Ben, huge thanks for joining us. For all of you tuning in, huge thanks for listening. We look forward to your thoughts and comments off this conversation and continuing to learn together. Thank you so much, as always, for joining us on Class Disrupted.
