Artificial Intelligence in Education: Risks, Opportunities and What's Next

Class Disrupted is an education podcast featuring author Michael Horn and Futre's Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on Apple Podcasts, Google Play or Stitcher.
In the last episode of the season, Michael Horn and Diane Tavenner come together, in person, to reflect on the arc of their artificial intelligence-focused series. They discuss key themes and takeaways, including the enduring importance of foundational knowledge, skepticism around the speed and impact of AI-driven change within traditional schools, and how transformative innovation is more likely to emerge from new educational models. Their conversation explores the challenges and opportunities AI brings — particularly in developing curiosity as a critical habit for learners — and revisits how their own perspectives shifted throughout the season.
Listen to the episode below. A full transcript follows.
Michael Horn: Hey, Diane, it is good to be with you in person.
Diane Tavenner: It's really good to be in person. It's a little funny where we are in person, but it's kind of the perfect setting to end our A.I., you know, miniseries, season six. We are at the AIR Show. I think that's what it's called, the AI show in San Diego.
Michael Horn: I'm gonna take a selfie, as we say.
Diane Tavenner: We're gonna send you a picture of this. We're recording here from the floor, which is filled with educators and edtech companies and AI, AI, AI!
Michael Horn: Because AI is the thing, which is perfect, because our season this year has almost exclusively focused on the question of what the impact of AI will be in education. How do we shape that? What do we want it to be? All these questions, frankly, in ways that neither of us had imagined fully, I think, when we started this and did a first sort of rapid reaction.
Diane Tavenner: We did. We were starting with our kind of baseline assessment of what we thought, our knowledge, and what we were curious about.
Michael Horn: Yep. And we've gone through this journey, and now today, we sort of get to tidy it up with our very sharp, insightful takes. No pressure on us.
Diane Tavenner: No pressure for those key headlines. But, you know, along the way, we interviewed a bunch of really interesting people, some skeptics, some really positive folks. And we benefited a lot from it.
Michael Horn: I learned a ton. My understanding of the space, I don't know if I conveyed it on our prior episode, but I think it's a lot deeper than it was when we started.
Diane Tavenner: For me, too. I really appreciate them. And then, you know, in true fashion, we just publicly processed out loud last episode.
Michael Horn: We do.
Diane Tavenner: And now we're going to try to actually pull it together with some key takeaways. So that's how we're going to wrap it today. And so we kind of outlined, you know, three big categories here. And the first one is, I want to ask you what belief was confirmed for you as we made our way through this season?
Michael Horn: Yeah. So people obviously heard where we started, but I will confess, I've been struggling. I knew you were going to ask this question, and for days I've been wondering, what did it confirm for me? I think I will say two things, if that's allowed. And maybe that's cheating, but it's our podcast, right? So, number one, I think it confirmed for me that foundational knowledge will still be important.
Diane Tavenner: Yes.
Michael Horn: And I think developing it into skills will still be important. Just as Google did not change that reality, despite what a lot of educators and maybe more schools of education sadly were telling their students who became teachers, I don't think AI will change that either. We had a long conversation in the last episode around the nature of expertise and who AI is useful for. I think the second thing that maybe hit harder for me, but confirmed something we talked about in the first episode, was that the most transformational use cases of AI in education will be in areas outside of the traditional schools, with new models that leverage AI and wrap around it to do things very differently from business as usual, frankly. Like why you started your public schools outside of the traditional system, right? I think the other piece of that is I'm somewhat skeptical that venture capital will be the thing that funds a lot of these new models that emerge.
Diane Tavenner: Say more about that. Why?
Michael Horn: Well, I could be very wrong on the latter. We're at this conference, and I'm just coming from a conversation where a few people said, no, we are funding these things. So I could be completely wrong. I guess my thought is that the time frames for explosive growth for VC are short: five to seven years.
Diane Tavenner: Yeah.
Michael Horn: The micro schools, the new emerging schooling models. I don't even know if micro schools will be the word we use five years from now. I'm not convinced those are, like, zero-to-a-hundred-thousand-student businesses.
Diane Tavenner: Yeah.
Michael Horn: And so I don't know, can you make a venture style business out of them? Venture might be funding the AI software that sort of makes those things go round and certainly the infrastructure that we've talked about.
Diane Tavenner: Right, right.
Michael Horn: But I guess I think that's going to be the really interesting hotbed of activity to look at. And we had this dichotomy on the first show: teacher-facing versus student-facing. I think that's less present in my mind at the moment. But the student-facing stuff, I think, will be in these new models, not the traditional ones.
Diane Tavenner: Fascinating.
Michael Horn: What about you?
Diane Tavenner: Well, I think that, you know, when...
Michael Horn: And feel free to disagree with me also.
Diane Tavenner: Well, I think my confirmed belief is sort of a dimension of what you're talking about, maybe the flip side of what you're talking about or connected to it, and I can't decide if it's in conflict with what you're saying or not. So let me just put it out there and we'll see. I think of myself as almost always an optimist, but I am a skeptic in one area, and I believed coming into this that we weren't going to hear that schools were being redesigned, or even that they had been. And so it sort of confirmed my belief that I don't know what is going to bring about this kind of change. And so you are saying it's going to happen outside of the... Yes, because that's the only place that...
Michael Horn: It's the only place for transformational use cases.
Diane Tavenner: And it may be yet.
Michael Horn: And it may be yet. And I think about the confirmed belief for me at the moment. It's great when you're wrong and you learn something new, I will say. But at the moment, it confirmed my sense that, if you look at our field, it tends to be consumed with the hardest, most intractable problems at the center of the field. And this is gonna be the periphery. It's not gonna be the bulk of it. So there's a little bit of a cognitive dissonance, if you will.
Diane Tavenner: I think you're right. And it's so interesting. The story in America is truancy and absenteeism, and the data tells a story along those lines. But if you're processing that as the biggest problem, then you're using AI to create a solution structure around it, when what is happening in the school day is the problem. Families are voting with their feet.
Michael Horn: So it's so interesting you say that. I'm rereading Bob Moesta's book, Five Skills of Innovators. I almost mailed you a copy over the weekend. They're solving a problem rather than asking, what is the system supposed to do and how do you tighten the variance around that? And as he says, you can solve the problem but create five others. Or you say, what is the system supposed to do now? Yeah. And so that's why I think we've got to bust out. So let me ask you the next question: where did it change your mind or beliefs? Anything that we learned?
Diane Tavenner: Well, I do. I do think it changed my mind. And I'll point to our episode with John Bailey; that's how we kicked off this series. I've talked to so many people who love that episode, and they're like, oh my gosh, I had no idea all the different ways that I could use ChatGPT or Claude or whatever AI I'm using. And it's true. I mean, John talked about how we now have an expert in our pocket on every possible topic. And so it really pushed me to think about how I was using it in my life, both in my personal life, in my professional life, and in our product. Now, there are some challenges with this expert idea that I think came up for both of us.
Michael Horn: Yeah. And maybe that's where it changed my beliefs. I think I had a sense, and you can read my quotes in newspapers and stuff like that, or wherever newspapers exist, Ed Week, stuff like that. But I think this series really gave me a much deeper set of questions around what kinds of students will actually be able to take advantage of these types of tools. I won't go into it again; we did it in the last episode, around this novice-expert, unknowing-knowing sort of two-by-two.
Michael Horn: And so I think that's something that I'm really wrestling with and revising in my head coming out of this. Along those lines, it gave me a much deeper concern over a lot of the things that could go wrong if we're not super intentional and thoughtful about how we lean into it. And I will say, I don't know if this is a revision for me. You may tell me I'm leaving my principles behind, but I sort of scoffed a couple years ago when districts would say, we need an AI strategy. And I was like, no, that's focusing on the inputs, not the outcomes you want. But I think I've revised my stance in that I do think there needs to be more thoughtfulness around what our beliefs and values and so forth are in an era of AI, and what that means for what we think about teaching and learning. And maybe that's your AI strategy.
Diane Tavenner: Well, and this harkens back to the episode with Rebecca Winthrop. Will AI provoke schools to go back and have the real conversations about what the purpose of education is? What are we trying to do? What matters now? How are we using this new, very powerful tool to further our purpose?
Michael Horn: Look, I would hope that they would, but, I mean, I think this is the answer, you know, see number one, where I think it's more likely that these conversations happen in embryonic education communities than in the traditional ones, despite how broken this could look in five years if we go down this road. But that's... I left with a lot of concerns.
Diane Tavenner: Yeah. And I'm curious, in my own use of AI, if I'm missing out on or losing anything, because I'm not, like, processing some of my thinking and work in the way that I used to. I'm no doubt more efficient, but I'm skipping certain brain work during that process.
Michael Horn: So is it creating cognitive laziness?
Diane Tavenner: I have no evidence that that's true. But I do wonder.
Michael Horn: And on my other podcast, Future U, Jeff Selingo talked about how one of his daughters asked what you did when you didn't have phones. Her visual image wasn't like, oh, you memorized stuff and had to learn a lot. Her visual image was literally like, we have a phone in front of us navigating us, so we must have had a large fold-out map. She couldn't imagine that we would write down the directions and so forth, and occasionally pull over and have to recalibrate. And so he was like, oh, so this is an example of cognitive laziness. And I was like, I actually think that's an example of freeing up the brain to do other things.
Diane Tavenner: Well, and in a whole other part of our lives, we both care a lot about longevity and the science and whatnot. And there's certainly some evidence over there that we are not helping our brains when we're taking all those tasks out of our life. So I want to switch gears and name something else that it changed for me, and that's curiosity. I think we both came to this, and for me, here was the big aha. For years, like, I built the Summit model with the habits of success, and curiosity was one of the parts of that. But curiosity has always gotten sort of shortchanged, if you will, because everyone's like, well, that's great, but how do you teach it and how do you assess it? And it's sort of sitting up there. And to me, curiosity comes roaring back in.
It is having its shining moment.
Michael Horn: Like the habit.
Diane Tavenner: Yes.
Michael Horn: That you will need to be a thriving adult in this world. So you don't take things at face value, so you are inquisitive, so you ask. You're always needing to use this, I think, to figure out what is true, if you will. That's perhaps a real skill that we will need to be better at developing.
Diane Tavenner: You know, I would probably call it more of a habit, but it is a skill. It's one of those weird ones, because I feel like we're born naturally curious, and not just feel like, there's a lot of evidence of that. I sadly believe that our education system actually wrings that curiosity out of us.
Michael Horn: It doesn't reward it. Right?
Diane Tavenner: It doesn't reward it. And you know what's interesting? In my current work, you ask employers, you know, who would you provide job shadow opportunities for, who would you have as an intern, those sorts of things. And when you talk to them, curiosity rises to the top. What do they want? A young person who comes in who's curious. It's a signal that you do have a growth mindset and you are interested in growing and you do want to learn. Yeah, it's just such an important quality, I think.
Michael Horn: Yeah, I think that's right. And it connects all these things. My own worry is that if people don't have enough foundational knowledge, they'll actually be far less creative in this world of AI, where they're just doing what is sort of told to them and unable to ask big questions, let alone learn how to ask really big questions that break out of status quo systems and things of that nature.
Diane Tavenner: Exactly to that point. I think the other thing that I've been thinking differently about is throughout this series, as you know, my biological son is a history guy.
Michael Horn: Someone after my heart.
Diane Tavenner: I know. The other one is obsessed with AI, so it's an interesting combo.
Michael Horn: But yeah, the other one I have no chance of understanding.
Diane Tavenner: But yes. Yeah, he said to me, you know, Mom, we're talking about the speed of the development of the innovation, but the human part is still really real. And so one of the things he said to me is, you know, do you know how long it took for America to fully adopt electricity after it was invented?
Michael Horn: It was like the rebuilding of models around it, ones that are native to it at the center.
Diane Tavenner: Yes. And I just think it's so interesting. I had a conversation with ChatGPT about why it took so long. And so some of the things I learned, and my kiddo pointed out: there's infrastructure. In the case of electricity, there was a cost. I would argue there are, like, hidden costs to it.
Michael Horn: I think there are huge costs. This is not the zero-marginal-cost world of Silicon Valley anymore.
Diane Tavenner: Right, right.
Michael Horn: It's different.
Diane Tavenner: Right. There was a lack of immediate need or use. Why are you getting on AI? Like, even the two of us saying, you know, we now almost never go on Google and search anymore because we've transformed our behavior over. But it took a minute even for us to sort of figure that out and change our behavior.
Michael Horn: Interesting. So this guy Horace Dediu, and I was not going to go here until you just brought this up, runs the Asymco sort of community and podcast and speaks a lot about Apple. He was with the Christensen Institute for a hot minute.
Diane Tavenner: OK.
Michael Horn: And he was doing his research around the adoption of refrigerators and dryers. Adoption of refrigerators was relatively fast, but adoption of dryers was really, really slow, because you had to change the component into which it fit in the house. Right.
Diane Tavenner: And so it requires a different plug.
Michael Horn: Infrastructure tells you how fast it will go.
Diane Tavenner: Yeah.
Michael Horn: And we don't ever have that conversation, right, around thinking about, you know, how much you have to redesign huge parts of the system to make it really useful.
Diane Tavenner: And I would assume that was the case with dryers in households across the country. And I think that when people look back on this moment in history, they'll probably blur the time period it took. But we're going to live through, I think, a much longer time period.
Michael Horn: It's interesting. A lot of my early funders at the Christensen Institute, people like Gisèle Huff, who I adore, would get annoyed with me when I said patience is going to be required because we have an installed base, we have a system.
Diane Tavenner: Right.
Michael Horn: I, on the last one, expressed my belief that some of these dynamics could change, with disruptive innovation actually now being welcomed for the first time.
Diane Tavenner: So I'm laughing at us a little...
Michael Horn: ...bit because of our naivete.
Diane Tavenner: Back in 2020... well, back in 2020, and then we thought we were going to do a little AI miniseries and then we'd figure it all out. But I think that as we wrap this season, season six, we actually have even more questions and curiosity ourselves.
Michael Horn: Well, and we'd love to hear from folks who are tuning in. This is a welcome invitation to just pester us less with your pitches and more with, like, what are you curious about?
Diane Tavenner: Yes.
Michael Horn: Who would you like to hear from? Not in your orbit, but, you know, people who would further both your understanding and ours.
Diane Tavenner: Yes. And what are you doing and what are you seeing and how can we sort of come along on this journey together?
Michael Horn: So let me end with this one question. Will AI have an impact on young people? If so, when and how?
Diane Tavenner: Yes.
Michael Horn: My answer to that question, despite what at least one of our guests said, is that I can't imagine it will not have a big impact on individuals. I think AI is going to be much more pervasive, in fact. And look, I'm not one of those people who says that just because it's in the working world, they need to use it now because we're preparing them for that world.
Diane Tavenner: It's already impacting them. It is having an influence on the work that's available to them, the way employers think about work, and where it's going to have an impact.
Michael Horn: Particularly in high school. I think the old world of, like, here's the curriculum, go learn it, is massively thrown out the window. Right. Maybe K through 8 is a little bit more constant because it is foundational; I don't think it should change as much. But high school, I think, is different. It already should be much more experiential and exploratory, in my view. And I think it should be extremely so now.
All right, let's wrap. What are you reading, watching, or listening to that I should be clued into?
Diane Tavenner: Well, I'm still on all of the ancient Greek fun. But I have gotten a lot of very polarized reactions to this next one, so hear me out. Gavin Newsom has a new podcast.
Michael Horn: He does.
Diane Tavenner: I've been reading about it and lots of people have been reading about it. I live in California, as you know.
Michael Horn: So he's your Governor.
Diane Tavenner: He is my governor. You have to listen to this. The first episode was the one where he interviewed Charlie Kirk. And for those who don't know, the premise is he's talking to people who he really disagrees with. Here's why I'm going to promote it: I love it. They're getting into the nuance of policy and how things work. And I am learning a lot, and I want to be able to make my own decisions.
Diane Tavenner: So I want to hear the full scope of things, and I feel like I don't usually get that. So this is the kind of conversation I want to exist out there.
Michael Horn: Well, so you're learning from that, and I'm learning from you. I'm not just reading nonfiction. I've also been embracing some fiction books. I'll name one, which is Paradise. And I'm gonna mess up the author's name.
Michael Horn: I'm gonna apologize, but it's Abdulrazak Gurnah. And I'm reading this book, Paradise, because I'm learning from you that it's nice to read fiction from the country where you're about to travel. And as you know, I'm headed to Tanzania with Imagine Worldwide; I'm on the board there.
Diane Tavenner: Are you enjoying it?
Michael Horn: I'm still trying to make sense of it.
Diane Tavenner: Yeah.
Michael Horn: It's less so. The fiction that I read around Sierra Leone in particular was very much of the civil war moment, and I could really figure out where that was. But in Paradise, there are a lot of currents going on in this book that I'm trying to make sense of. And it's really interesting.
Diane Tavenner: How beautiful.
Michael Horn: And thank you to all of our listeners once again. And thank you, of course, to The 74 for distributing this; it's how so many of our listeners connect with us. And so to all of you, we will see you next season on Class Disrupted.
