As AI Advances, Is Teaching Kids To Be High-Agency Generalists The Answer?

Forbes | 12-05-2025
It can be argued that most schools still prepare kids for a world that no longer exists. Add the rapid advancement of artificial intelligence to the mix, and the gap between education and reality may widen into a gulf.
This is the premise of a recent conversation hosted by Steven Bartlett on The Diary of a CEO podcast. As part of a wider discussion about the future of AI, Bartlett was joined by Daniel Priestley, Bret Weinstein and Amjad Masad to consider what today's children need to learn.
The Australian entrepreneur Daniel Priestley noted how schools treat learners. He compared classrooms to language models such as ChatGPT or Google Gemini. "We're essentially treating them like learning LLMs: prompt them, expect the right answer and then say, 'Off you go into the world.'" In a world of advanced language models, are we teaching our children the same skills as an AI?
According to Priestley, many young people reach adulthood unsure about how money functions, how relationships grow and how systems interact. He described the issue as a relevance problem. Careers pivot, knowledge evolves and innovation rewards agility.
Bret Weinstein, known for his work in evolutionary biology, extended that thought. He described traditional classrooms as relics. They served a different economy. They produced efficiency and compliance. But today's challenges ask for problem-solving, emotional balance and healthy living.
Weinstein observed that those best placed to teach many of the skills young people need often work outside education.
Replit founder Amjad Masad turned to what he believes works. He explained that educational reform is crowded with ideas, but that most have limited effect. One approach he advocated is one-on-one tutoring, and the research backs him up: Benjamin Bloom's well-known "two sigma" studies found that students tutored one-on-one performed about two standard deviations better than students taught in conventional classrooms. "There's one intervention that shows two sigma improvement: one-on-one tutoring. It puts you ahead of 99% of others. So the real question is: How can we provide every child in the world with one-on-one tutoring? The answer: AI."
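A note on the statistics jargon, added here for context rather than taken from the podcast: "two sigma" means two standard deviations. Assuming test scores follow a roughly normal distribution, a two-standard-deviation improvement places a student near the 98th percentile, which is where rounded claims such as "ahead of 99%" come from:
\[
P(Z \le 2) = \Phi(2) \approx 0.977 \;\Rightarrow\; \text{roughly the top 2\% of a conventional classroom}
\]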
The cost of one-to-one support has kept it rare. But AI shifts that. Masad believes digital tools could scale the tutoring model. Children would no longer need to wait for school to be open, or for the attention of a teacher in a busy classroom.
Masad also described moments of creative play. He and his children use AI to explore stories. Their ideas evolve through dialogue with one another and with the AI. A story might start with a cat on the moon. Then come the questions: what would it eat? How would it survive? What if the moon changes? These prompts teach generative thinking. The learning comes not through rote memorization, but through imagination.
Masad and Weinstein stress an important limit. AI works best when paired with reality. Recalling a practice from his own teaching, Weinstein explained, "If an engine won't start, you can't debate your way to a solution, you have to figure out what's wrong and fix it. I say as little as possible and let the physical system provide the feedback."
Abstract accuracy has value. But it differs from applied success. Many systems today, from supply chains to ecosystems, resist easy prediction. They demand responses, not scripts.
Weinstein advised preparing children for these systems. The advice wasn't complicated. Prototype. Watch. Adjust. Repeat. Think like a navigator. Not a builder following blueprints.
Priestley summarized the goal. Raise high-agency generalists. "I want my kids to be motivated, self-starting, and equipped with a wide-ranging toolkit. I imagine them instructing robots and AI agents, generating ideas, writing books, organizing festivals, running podcasts, starting businesses... all at once."
He explained that, in his home, learning doesn't stay within one subject. His kids play chess. Practice jiu-jitsu. Perform. Code. Experiment. They build things. They sell things. They make decisions. And they make mistakes.
This lifestyle teaches through action. And it draws a clear line between making and scrolling. One path leads to growth. The other often doesn't. AI can either amplify a child's creativity or consume their time. The difference lies in intention.
Weinstein spoke sharply about attention. He warned that content designed to trigger dopamine responses changes behavior. Children may not notice the shift. But it shows in what they pursue. And what they avoid.
Masad added that rapid idea generation is an advantage. Creativity isn't just nice to have. It's essential. AI can aid this. But it shouldn't guide it fully. The spark must begin within the learner.
Taken together, the conversation suggested a shift already underway. A new kind of learning. It asks children to do more than remember. It asks them to respond. To adapt.
Many parents already see this. They don't focus on grades alone. They ask what challenges their children might solve. What systems they'll enter. What roles they'll create.
The world children inherit grows more complex. But they won't navigate it with memorized answers. They'll do it by engaging, testing and building.
This conversation conveyed that the real task is not preparing them to follow. It's preparing them to shape it.

Related Articles

Is Nvidia stock a massive bargain — or a massive value trap?

Yahoo | 10 minutes ago

AI has transformed demand for computer chips and the most obvious beneficiary of that has been Nvidia (NASDAQ: NVDA). With a stock market capitalization of $3.4trn, Nvidia might not seem like an obvious bargain. But what if it is really worth that much – or potentially a lot more? I have been keen to add some Nvidia stock to my portfolio, but I do not want to overpay. After all, Nvidia has shot up 1,499% in five years! So, here is what I am doing.
For some companies in which I have invested in the past, from Reckitt to Burberry, I have benefited as an investor from a market being mature. Sales of detergent or pricy trenchcoats may grow over time, but they are unlikely to shoot up year after year. That is because those firms operate in mature markets. On top of that, as they are large and long-established, it is hard for them to grow by gaining substantial market share. So, market maturity has helped me as an investor because it has made it easier for me to judge what I think the total size of a market for a product or service may be – and how much of it the company in question looks likely to have in future.
Chips, by contrast, are different. Even before AI, this was still a fast-growing industry – and AI has added fuel to that fire. On top of that, Nvidia is something of a rarity. It is already a large company and generated $130bn in revenues last year. But it is not mature – rather, it continues to grow at a breathtaking pace. Its first-quarter revenue was 69% higher than in the same three months of last year.
Those factors mean that it is hard to tell what Nvidia is worth. Clearly that is not only my opinion: the fact that Nvidia stock is 47% higher than in April suggests that the wider market is wrestling with the same problem.
Could it be a value trap? It is possible. For example, chip demand could fall after the surge of recent years and settle down again at a much lower level. A lower cost rival could eat badly into Nvidia's market share. Trade disputes could see sales volumes fall. With a price-to-earnings (P/E) ratio of 46, just a few things like that going wrong could mean today's Nvidia stock price ends up looking like a value trap.
On the other hand, think about those first-quarter growth rates. If Nvidia keeps doing as well, let alone better, its earnings could soar. In that case, the prospective P/E ratio based on today's share price could be low and the current share price a long-term bargain. I see multiple possible drivers for such an increase, such as more widespread adoption of AI and Nvidia launching even more advanced proprietary chip designs.
So, I reckon the company could turn out to be either a massive bargain at today's price, or a massive value trap. The price does not offer me enough margin of safety for my comfort if the stock is indeed a value trap. So, I will wait for a more attractive valuation before buying.
The post Is Nvidia stock a massive bargain — or a massive value trap? appeared first on The Motley Fool UK.
More reading: 5 Stocks For Trying To Build Wealth After 50; One Top Growth Stock from the Motley Fool.
C Ruane has no position in any of the shares mentioned. The Motley Fool UK has recommended Burberry Group Plc, Nvidia, and Reckitt Benckiser Group Plc. Views expressed on the companies mentioned in this article are those of the writer and therefore may differ from the official recommendations we make in our subscription services such as Share Advisor, Hidden Winners and Pro. Here at The Motley Fool we believe that considering a diverse range of insights makes us better investors. Motley Fool UK 2025
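A back-of-the-envelope illustration of the prospective P/E point above, using only the figures quoted in the article and assuming, purely for arithmetic's sake, that earnings per share grew another 69% over the next year while the share price stayed flat:
\[
\text{prospective P/E} \approx \frac{46}{1 + 0.69} \approx 27
\]
That is roughly half today's multiple, which is the sense in which continued growth could make the current price look cheap in hindsight; it is an illustration, not a forecast.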

A solar company brought Peter Lorenz to Albuquerque. Now he is helping shape the city's economic future

Yahoo | 10 minutes ago

Jun. 8—There's no reason for Peter Lorenz to be in Albuquerque.
"Except that I saw an opportunity that I was very excited about ... and just went for it," says Lorenz, the CEO of Albuquerque-based Unirac Inc., which makes mounting platforms for solar systems. "I've been here now 13 years, which, for the solar industry, is a pretty long time. ... I love it here. It's beautiful; it's gorgeous. I think the people are kind."
Like those in New Mexico's largest city, Lorenz is also kind. He spends much of his free time — of which there is not much — working to improve a lagging education system and advocating for many of the small businesses scattered across the city. It's a job Lorenz, who is originally from Germany, sees as a priority in the place he and his family now call home.
Starting in July, Lorenz will become the chairman of the Greater Albuquerque Chamber of Commerce's board of directors — a role that, in many ways, can influence lawmaking in Santa Fe. It will be Lorenz's second stint in the role since 2022. He says the GACC's priorities are the same as they have been in years past, focusing on the big issues the city is facing: education, public safety and Downtown transformation.
"What I love about the chamber is we're not active politicians, so you get continuity with us," he says. "We don't need to look for instant gratification. We have the time to work on these big issues and figure out how to effect positive change for everybody. In that sense, I love that mandate and that aspiration."
What's your focus as the chamber's board chairman?
We have to address public safety, which is not an easy problem (to fix) because you have, on the one hand, violent crime, youth crime. At the same time, you also have mental health, right? And then homelessness, that kind of sits somewhere in there, too. I think we want to continue challenging different stakeholders, focusing on what different solutions are, and then effect positive change. We don't need to find one solution that solves it all, and we also don't need to find a solution that is the right answer. We need to find solutions that move us in the right direction, and then we need to collectively have the courage to say, "Look, this is not working well enough. We had good intentions. Let's fix this and go in a different direction," as opposed to, "I only want to do this because this is what I believe in."
How has your Unirac leadership shaped your approach at the chamber?
I think it's always good to ask yourself what drives you, what motivates you, and what is your unique contribution. When I look at what I do here at Unirac, it is so much about building a good team and then removing obstacles for my team members and allowing them to do great things. ... We have an amazing (chamber) board. It's really about bringing out the different perspectives of the board, and then also engaging the board so that the different stakeholders — let it be the city, our legislators, APS — don't just hear from Terri Cole, the CEO of the chamber, or our senior board members, but the whole board. I think that's important.
Tell me about a hardship you've experienced.
I was very successful in consulting. I was kind of ready to get promoted, and I expected to get promoted. To get promoted, you have to stand for something, right? It was kind of like three or four things: problem-solving, developing new knowledge, client leadership and team leadership. I was always known for client leadership and team leadership. People wanted to be on my team. But what happened is this: I was told that I would not get promoted because they found two people who said they would never work with me again. And this was super painful because I thought I was such a great leader. I was young. I was 30 at that time.
How did you overcome that?
I had a choice where I could easily find another job — a better-paying job — or I could stay. I decided to stay and said, "OK, I've got to work on this." Because if somebody feels that way, there is a reason for it, right? Super painful at a very deep, personal level. But it really allowed me to say, "OK, I'm not as good as I think I am," and I need to constantly think about how I affect people around me and what motivates people around me. That was probably the one event that really kind of changed my professional and personal life.
What's the best advice you've ever received?
It's actually from my dad. It's to remember where you come from. I grew up very differently from the way I live now, and I would be nothing without my parents. ... I think you've got to be authentic and know where you come from.
What do you do in your free time?
I have two kids, and I spend a lot of time with my kids. I sit on the board of trustees of a university in New York, Manhattanville, and it's a liberal arts college. All of these colleges have funding problems, so I'm the first non-alumnus to be on that board of trustees. I really want to figure out how I can help that college thrive in five to 10 years. I call that fun. The other one is, I am really focused on mental health, so I do a lot of things for my mental health — I work out pretty much every day in the morning. I meditate. I read a lot. That's kind of like my me time, and how I take care of myself. And then, I do like our brewery scene, and I like meeting up with people.

We're offloading mental tasks to AI. It could be making us stupid

Yahoo | 10 minutes ago

Koen Van Belle, a test automation engineer who codes for a living, had been using the artificial intelligence large language model Copilot for about six months when one day the internet went down. Forced to return to his traditional way of working, relying on his memory and decades of experience, he struggled to remember some of the syntax he coded with. 'I couldn't remember how it works,' Van Belle, who manages a computer programming business in Belgium, told Salon in a video call. 'I became way too reliant on AI … so I had to turn it off and re-learn some skills.'
As a manager in his company, Van Belle oversees the work of a handful of interns each year. Because their company has limits on the use of AI, the interns had to curb their use as well, he said. But afterward, the amount and quality of their coding was drastically reduced, Van Belle said. 'They are able to explain to ChatGPT what they want, it generates something and they hope it works,' Van Belle said. 'When they get into the real world and have to build a new project, they will fail.'
Since AI models like Copilot and ChatGPT came online in 2022, they have exploded in popularity, with one survey conducted in January estimating that more than half of Americans have used Copilot, ChatGPT, Gemini or Claude. Research examining how these programs affect users is limited because they are so new, but some early studies suggest they are already impacting our brains. 'In some sense, these models are like brain control interfaces or implants — they're that powerful,' said Kanaka Rajan, a computational neuroscientist and founding faculty member at the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University. 'In some sense, they're changing the input streams to the networks that live in our brains.'
In a February study conducted by researchers from Microsoft and Carnegie Mellon University, groups of people working with data worked more efficiently with the use of generative AI tools like ChatGPT — but used less critical thinking than a comparator group of workers who didn't use these tools. In fact, the more that workers reported trusting AI's ability to perform tasks for them, the more their critical thinking was reduced. Another study, published in 2024, reported that the reduction in critical thinking stemmed from relying on AI to perform a greater proportion of the brain work necessary to complete tasks, a process called cognitive offloading.
Cognitive offloading is something we do every day when we write our shopping list, make an event on the calendar or use a calculator. To reduce our brain's workload, we can 'offload' some of its tasks to technology, which can help us perform more complex tasks. However, it has also been linked in other research to things like having a worse memory. As a review published in March concluded: 'Although laboratory studies have demonstrated that cognitive offloading has benefits for task performance, it is not without costs.' It's handy, for example, to be able to rely on your brain to remember the grocery list in case it gets lost. So how much cognitive offloading is good for us — and how is AI accelerating those costs?
This concept is not new: The Greek philosopher Socrates was afraid that the invention of writing would make humans dumber because we wouldn't exercise our memory as much. He famously never wrote anything down, though his student, Plato, did.
Some argue Socrates was right and the trend is escalating: with each major technological advancement, we increasingly rely on tools outside of ourselves to perform tasks we once accomplished in-house. Many people may not perform routine calculations in their head anymore due to the invention of the calculator, and most people use a GPS instead of pulling out a physical map or going off physical markers to guide them to their destination.
There is no doubt these inventions have made us more efficient, but the concern lies in what happens when we stop flexing the parts of the brain that are responsible for these tasks. And over time, some argue, we might lose those abilities. There is an old ethos of 'use it or lose it' that may apply to cognitive tasks as well.
Despite concerns that calculators would destroy our ability to do math, research has generally shown that there is little difference in performance when calculators are used and when they are not. Some have even been critical that the school system still generally spends so much time teaching students foundational techniques like learning the multiplication tables when they can now solve those sorts of problems at the touch of a button, said Matthew Fisher, a researcher at Southern Methodist University. On the other hand, others argue that this part of the curriculum is important because it provides the foundational mathematical building blocks from which students learn other parts of math and science, he explained. As Fisher told Salon in a phone interview: 'If we just totally get rid of that mathematical foundation, our intuition for later mathematical study, as well as just for living in the world and understanding basic relationships, is going to be off.'
Other studies suggest relying on newer forms of technology does influence our brain activity. Research, for example, has found that students' brains were more active when they handwrote information rather than typing it on a keyboard, and when using a pen and paper versus a stylus and a tablet.
Research also shows that 'use it or lose it' is somewhat true in the context of the skills we learn. New neurons are produced in the hippocampus, the part of the brain responsible for learning. However, most of these new cells will die off unless the brain puts effort and focus into learning over a period of time. People can certainly learn from artificial intelligence, but the danger lies in forgoing the learning process to simply regurgitate information that it feeds us.
In 2008, after about two decades of the public internet, The Atlantic published a cover story asking 'Is Google making us stupid?' Since then, and with the emergence of smartphones and social media, research has shown that too much time on the internet can lower our ability to concentrate, make us feel isolated and lower our self-esteem.
One 2011 review found that people increasingly turn to the internet for difficult questions and are less able to recall the information that they found on the internet when using it to answer those questions. Instead, participants had an enhanced ability to recall where they found it. 'The internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves,' the authors concluded. In 2021, Fisher co-authored research that also found people who used internet searches more had an inflated sense of their own knowledge, reporting exaggerated claims about things they read on the internet compared to a control group who learned things without it.
He termed this phenomenon the 'Google effect.' 'What we seem to have a hard time doing is differentiating where our internally mastered knowledge stops and where the knowledge we can just look up but feels a lot like our knowledge begins,' Fisher said.
Many argue that AI takes this even further and cuts out a critical part of our imaginative process. In an opinion piece for Inside Higher Education, John Warner wrote that overrelying on ChatGPT for written tasks 'risks derailing the important exploration of an idea that happens when we write.' 'This is particularly true in school contexts, when the learning that happens inside the student is far more important than the finished product they produce on a given assignment,' Warner wrote.
Much of the energy dedicated to understanding how AI affects our brains has been focused on adolescents because younger generations use these tools more and may also be more vulnerable to changes that occur because their brains are still developing. One 2023 study, for example, found junior high school students who used AI more had less of an ability to adapt to new social situations. Another 2023 paper also found that students who more heavily relied on AI to answer multiple choice questions summarizing a reading excerpt scored lower than those who relied on their memory alone, said study author Qirui Ju, a researcher at Duke University. 'Writing things down is helping you to really understand the material,' Ju told Salon in a phone interview. 'But if you replace that process with AI, even if you write higher quality stuff with less typos and more coherent sentences, it replaces the learning process so that the learning quality is lower.'
To get a better idea of what is happening with people's brains when using large language models, researchers at the Massachusetts Institute of Technology connected 32-channel electroencephalograms to three groups of college-age students who were all answering the same writing prompts: One group used ChatGPT, another used Google and the third group simply used their own brains. Although the study was small, with just 55 participants, its results suggest large language models could affect our memory, attention and creativity, said Nataliya Kos'myna, the leader of the 'Your Brain on LLM' project and a research scientist at the MIT Media Lab.
After writing the essay, 85% of the group using Google and the group using their brains could recall a quote from their writing, compared to only 20% of those who used large language models, Kos'myna said. Furthermore, 16% of people using AI said they didn't even recognize their essay as their own after completing it, compared to 0% of students in the other group, she added.
Overall, there was less brain activity and interconnectivity in the group that used ChatGPT compared to the groups that used Google or their brains only. Specifically, activity in the regions of the brain corresponding to language processing, imagination and creative writing was reduced in students using large language models compared to students in other groups, Kos'myna said.
The research team also performed another analysis in which students first used their brains for the tasks before switching to performing the same task with the large language models, and vice versa. Those who used their brains first and then went on to try their hand at the task with the assistance of AI appeared to perform better and had the aforementioned areas of their brains activated.
But the same was not true for the group that used AI first and then went on to try it with just their brains, Kos'myna said. 'It looks like the large language models did not necessarily help you and provide any additional interconnectivity in the brain,' Kos'myna told Salon in a video call. 'However, there is potential … that if you actually use your brain and then rework the task when being exposed to the tool, it might be beneficial.'
Whether AI hinders or promotes our capacity for learning may depend more on how we use it than whether we use it. In other words, it is not AI that is the problem, but our overreliance on it.
Van Belle, in Belgium, now uses large language models to write social media posts for his company because he doesn't feel like that is where his skills are most refined and the process can be very time-consuming otherwise. 'I would like to think that I would be able to make a fairly decent LinkedIn post by myself, but it would take me an extra amount of time,' he said. 'That is time that I don't want to waste on something I don't really care about.'
These days, he sees AI as a tool, which it can be — as long as we don't offload too much of our brain power on it.
'We've been on this steady march now for thousands of years and it feels like we are at the culmination of deciding what is left for us to know and for us to do,' Fisher said. 'It raises real questions about how best to balance technology and get the most out of it without sacrificing these essentially human things.'
