The Secret To AI In Education: Why We Should Be Frustrated Pragmatists

Forbes | 20-04-2025

When it comes to AI in education, I've noticed two distinct camps beginning to form.
The first camp consists of teachers, school leaders and consultants who are focused on how AI can support the day-to-day work of educators. These are the pragmatists. The ones using AI to reduce teacher workload, automate repetitive tasks, streamline assessment and unlock extra time in increasingly stretched schedules. They see AI as a means to enhance the current system. In a global context of teacher shortages, low morale and burnout, their focus is practical and, I would argue, noble.
The second camp sees things differently. For them, AI should not be a tool to help us do what we've always done, only faster or more efficiently. It's a transformative force. This group believes that the real potential of AI lies in its power to reimagine education. They are frustrated that education hasn't already changed and want AI to be the spark that ignites the revolution. They want to move beyond incremental improvements and into bold redesigns: new models of learning, new systems of assessment and new structures of schooling altogether. They are asking the big questions about relevance, purpose and the future of education.
Over the past couple of years, I've noticed a subtle but growing tension between these two perspectives. The second camp sometimes looks down on the first, as if using AI to help with lesson planning or grading is somehow pedestrian, even counterproductive to real innovation. As if anything short of systemic transformation isn't worth talking about.
This, I believe, is a mistake.
Because the truth is, both perspectives are necessary. The answer isn't either/or. It's both/and.
I'm a frustrated pragmatist.
Back in 2022, I wrote a post referencing the three-box solution to innovation, a model developed by Professor Vijay Govindarajan from Dartmouth College's Tuck School of Business. I adapted it and applied it to education. I then expanded on it in my book The AI Classroom, and even more deeply in my latest book, Infinite Education. This framework offers one of the most helpful lenses through which to approach AI in education. A perspective that honours both present-day practicality and long-term reinvention.
At its core, the model divides innovation into two key categories: linear and non-linear.
Linear innovation is about optimizing and improving what already exists. It's evolutionary, not revolutionary. It enhances the current system. It can make schools run more efficiently, helping teachers manage workloads and freeing up time to focus on what matters most.
AI is proving to be incredibly effective in this space. It can support lesson planning, generate differentiated materials, summarise assessment data, automate feedback and assist with communication and reporting. These are not small upgrades. In many schools, they're game-changers.
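To make that concrete, here is a minimal sketch of the kind of repetitive task AI can absorb: generating differentiated versions of a single reading passage. It assumes the OpenAI Python SDK with an API key in the environment; the model name, prompt wording and reading levels are illustrative assumptions, not a recommendation of any particular tool or the practice of any school mentioned here.

    from openai import OpenAI

    # Minimal sketch: rewrite one passage at several reading levels for a mixed-ability class.
    # Assumes OPENAI_API_KEY is set in the environment; model and prompts are illustrative.
    client = OpenAI()

    def differentiate_passage(passage: str, levels: list[str]) -> dict[str, str]:
        versions = {}
        for level in levels:
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system",
                     "content": "You rewrite teaching materials without changing the facts."},
                    {"role": "user",
                     "content": f"Rewrite the following passage for a {level} reading level:\n\n{passage}"},
                ],
            )
            versions[level] = response.choices[0].message.content
        return versions

    # Example: three versions of the same science paragraph.
    # differentiate_passage(passage_text, ["early primary", "middle school", "advanced"])

The point is the pattern rather than the vendor: a task a teacher repeats many times a week, parameterised once and then reused.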
As I work with educators around the world, I see firsthand the excitement, relief and even joy that comes from discovering AI tools that make their lives easier. These teachers aren't looking to overhaul the system; they're just trying to do their jobs well and get a bit of breathing room in the process. And when AI helps them achieve that, it's not 'false' innovation. It's real and meaningful progress.
Who are we to say that this isn't valid?
Who are we to dismiss these tools as unimportant or unimaginative? That kind of thinking is patronising and it's inaccurate.
Linear innovation may be the first step, but it's a vital one. Especially in a profession that's been pushed to its limits, finding new ways to support educators in their existing work is not a distraction from innovation. It's the foundation of it.
But we also can't afford to stop there.
Non-linear innovation doesn't seek to make the current system more efficient, but to question it. It asks: What if the way we've always done things no longer makes sense? What if there's a better model altogether?
This kind of thinking becomes crucial when new technologies arrive that don't just make old systems better, but have the potential to make them obsolete. AI is one of those technologies.
For decades, education has been shielded from true disruption. Schools have existed in protected ecosystems, relatively untouched by market forces or external competition. But with AI, that is changing. For the first time, we are seeing the emergence of powerful learning alternatives: ChatGPT apps that teach maths, AI-powered schools and fully online AI tutors.
This is the real disruptive force of AI. It's not just that it automates existing processes. It introduces competition on a level never seen before.
When students can access personalised, high-quality learning from anywhere, anytime and at little to no cost, schools must begin to ask what it is that only they can offer.
As I recently said on the Joining the Dots podcast, AI isn't the ultimate goal in education; it's the lever for driving much-needed systemic reform. It's not the destination. It's the momentum builder. The accelerator. The great nudge we've needed to rethink education's purpose, design and delivery.
This is why I wrote Infinite Education. Not just to explore AI's classroom applications, but to provide a playbook for non-linear innovation. A guide for schools looking to evolve before they're forced to.
This dual innovation journey of linear and non-linear requires a new kind of leadership. Good leadership balances the linear and the non-linear. It manages the present while challenging its shelf life. It supports existing systems while creating new ones. It holds space for both security and disruption.
In education, the time has come for both managerial and heretical leaders. Managerial leaders keep the system functioning. They maintain stability, operations, safety and accountability. Their work is essential. But we also need heretical leaders. The ones who dare to imagine something different. The ones who ask uncomfortable questions. The ones who aren't afraid to disrupt their own assumptions. These leaders often face resistance, but they are the ones who move the system forward.
True educational innovation in the age of AI requires both types of leadership. Neither alone is enough.
So rather than choosing sides, rather than dividing ourselves into camps, we must choose integration.
Let's build a culture that values both kinds of innovation:
Tools that help us survive today, and visions that help us invent tomorrow.
Let's honour the teachers using AI to reclaim time and energy and support those dreaming of systems not yet built.
Let's stop drawing lines and start building bridges between the now and the next, the practical and the possible, the performance engine and the innovation lab. Because if we can do that, we don't just adapt to AI. We lead with it.
That's the kind of education system the future needs.

Related Articles

I'm a Mom Who Uses ChatGPT for Help—Here's What I'm Learning

Yahoo | 20 hours ago

Fact checked by Sarah Scott

How often should I wash my toddler's sweaters? Can you recommend a sugar-free muffin recipe? What's the difference between a pteranodon, a pterodactyl, and a pterosaur? Lately, I've been directing all questions like these to ChatGPT, and I'm getting answers in seconds. While I approach its responses cautiously, I'm not ashamed to admit that artificial intelligence (AI) has become an unexpected addition to my parenting village.

With millions of weekly ChatGPT users, I'm certainly not the only parent checking in with a chatbot. One 2024 study found that about 71% of parents had used ChatGPT, and more than half used it specifically for parenting, including to find strategies and activities for their kids. 'Given how quickly these tools have been adopted everywhere, it's safe to say that a huge and growing number of parents are using them,' says Nicholas C. Jacobson, PhD, a computational psychologist and associate professor of biomedical data science, psychiatry, and computer science at Dartmouth College.

But before we all become too comfortable, it's important to weigh up the facts. So this time, instead of asking AI, I put the questions to the experts. They agree AI can be beneficial to parents but say some caution is needed.

One of the biggest trends for 2025 is the rise of the AI-powered chatbot. If you haven't downloaded an app yet, or you still think AI is a 2001 film starring Haley Joel Osment, we'll help you catch up. A generative chatbot is an AI application that simulates human conversation by producing text-based responses in real time. ChatGPT is one of the leading chatbots (and my personal go-to). It can answer questions, draft and edit content, analyze text, and generate images. ChatGPT's weekly users have almost doubled, from 400 million in February 2025 to almost 800 million in June 2025, and it's believed to be the fastest growing app of all time.

While ChatGPT is one of the better known mainstream chatbots, developers are increasingly targeting parents with products like AI Chat for Parents and Parent GPT. And for those wanting more, apps such as Familymind, Milo, Goldee, and Claude Pro go beyond AI-powered conversation. They act as a virtual assistant for parents, offering AI-powered family management, to-do lists, and scheduling.

As an overtired, overstimulated mom, deferring low-risk parenting queries to technology feels like a no-brainer for me. ChatGPT is quick, usually positive, and is available at all hours of the day. Claudia Hoetzel, parent coach, early childhood educator, and owner of House of Parenting, says some parents find it easier to ask a chatbot a question rather than reaching out to a partner or coach. I get it—AI won't judge me for asking questions I'm pretty sure I should already know (hello, laundry 101).

AI isn't just answering questions—it's redefining parental support. Here's why some parents are making chatbots a go-to.

Chatbots can come to the aid of parents seeking reassurance from the depth of the trenches. That's especially true for parents of babies who have tons of questions throughout the day. 'New parents have shared that they use AI chatbots to better understand their baby's behaviors, support sleep and feeding routines, and enhance their connection with their newborn,' says Sophie Pierce, PsyD, a clinical child and adolescent psychologist. 'Others turn to AI for interpreting pediatrician notes, tracking developmental milestones, or addressing behavioral challenges.'
Perhaps unsurprisingly, AI can come in handy in moments when uncertain or frustrated parents need a quick solution. It can offer them a sense of structure or clarity. 'Parents often turn to AI when faced with urgent, emotionally-charged moments, like "my child won't go to bed" or "my child won't leave the house in the morning,"' explains Hoetzel. 'These high-stress situations can trigger a strong desire for quick fixes, especially when parents are exhausted or overwhelmed.' As a busy mom of two, I find chatbots particularly intuitive and convenient for advice on the go, especially on those days when I feel like I'm pouring from an empty cup.

Parents are also increasingly turning to chatbots for mental health support. Dr. Pierce says chatbots can be a meaningful early step for burned-out parents. 'Parenting can be incredibly overwhelming—so much so that even when parents know helpful tools or strategies, they may struggle to access or implement them,' she says. 'This overwhelm can also reduce their capacity for creative problem-solving. In those moments, turning to an AI chatbot may offer a way to break through the fog and begin addressing a challenge more effectively.'

Yoky Matsuoka is the CEO of Panasonic Well, a health and wellness-focused initiative within Panasonic that develops AI-driven technologies to improve well-being. She's also a mom of four and uses chatbots every day. 'When I have faced challenging times as a parent and daughter, it has been helpful in suggesting concrete ideas to support my personal wellness, such as going for a run or meditating when I am feeling stressed,' Matsuoka shares. 'While I know these things, it is helpful to be reminded of the value of simple activities of self-care.'

But AI should never replace professional help—even though studies show some parents trust ChatGPT over doctors. Experts urge parents dealing with symptoms of a mental health condition to speak with their health care provider.

Beyond parental support, chatbots can be a goldmine of inspiration. I've turned to chatbots to brainstorm child-friendly activities, to weigh up stroller options, and to recommend theme park itineraries. Dr. Jacobson says he knows of parents also using AI to find recipes for picky eaters and to get help simplifying a complex topic. Additionally, Matsuoka says she's saved hours of research and trip planning by using AI to find accommodation, restaurants, and family-friendly activities. She adds that chatbots have also helped her draft notes to the kids' teachers, improve email tone, and translate documents.

While chatbots like ChatGPT might sound like an online superhero, Dr. Pierce says they can both help and hinder parenting. For some, chatbots help 'get the ball rolling,' reactivating problem-solving skills when those feel out of reach, she says. 'On the other hand, parents are already inundated with vast and often conflicting information about parenting approaches,' says Dr. Pierce. 'AI chatbots can sometimes add to this overload, amplifying confusion or self-doubt. As with all tools, their impact depends on how, when, and why they're used.'

Dr. Jacobson, who in 2019 launched Therabot, a generative chatbot that uses AI to provide evidence-based mental health support, is well aware of risks to parents who rely heavily on chatbots for advice.
'General-purpose models aren't trained on validated parenting science,' he says. 'Their advice can be generic, wrong, or reflect the biases in their training data – i.e. the open internet. The AI doesn't know your child, your family, or the situation. It can't replicate the clinical judgment of a doctor or the deep, intuitive knowledge a parent has.'

What's more, a parent's reliance on chatbot technology could intensify anxiety symptoms in some. 'Engaging in reassurance seeking and getting it can worsen one's anxiety,' says Dr. Jacobson. Also, over-reliance on AI tools may contribute to further isolation for those already struggling, warns Dr. Pierce. 'Real-world support systems foster creative problem-solving, perspective-taking, and belonging,' she says. 'Without that, some parents may internalize AI-generated advice in ways that make them feel more inadequate or disconnected from their intuition.'

Like most areas of parenting, introducing tools like AI into the mix is all about balance. Experts believe AI tools can be helpful for gathering quick information. 'It's a way to access basic facts at the moment when time or mental energy is low,' says Hoetzel. 'Used as a starting point, it can offer accessible guidance or ideas that parents might explore further. But it's not a replacement for experience, intuition, or professional support.'

Dr. Jacobson agrees, saying, 'Treat AI as a brainstorming partner, not an expert. Use it for ideas, but ensure that you don't over-rely on it for medical or mental health topics. Always filter what it says through your own common sense and what you know about your child—you're the real expert.'

For Dr. Pierce, context is critical. 'The information it provides may be general or based on broad patterns, rather than tailored to individual factors, such as personality, coping capacity, support systems, or cultural background,' she says. 'While chatbots can be helpful for ideas or perspective-taking, ultimately parents benefit most when they attune to their own intuition and lived knowledge of their child.'

Don't forget about protecting your privacy either. 'Don't share sensitive family information unless you're really comfortable with that,' adds Dr. Jacobson.

As for the future of AI, the potential is unlimited. What we need now is more research to steer us in the next direction. 'The technology is moving so fast,' says Dr. Jacobson. 'We really need the science to catch up so we can understand the effects and build the right kinds of safeguards.'

Read the original article on Parents

Apple Wants To Get Into Your Head, Literally.

Forbes | 15-05-2025

In the book Infinite Education I wrote about the advancing front of brain-computer interfaces (BCIs). When exploring AI-powered implants that detect and record electrical signals from the brain, it's not difficult to imagine a world approaching where thought could replace touch. Apple has now entered the ring.

Apple's move into BCIs through a partnership with Synchron is more than a headline. It's a glimpse into the future of how we interact with machines. The company that made the smartphone mainstream is now validating the idea that the brain itself can drive our devices. The significance lies in scale, signal and simplicity. When Apple shifts, the world takes notice. Their support for Synchron's BCI technology could move the entire conversation forward on the use of such devices.

Synchron's Stentrode device is the first BCI implant that doesn't require open-brain surgery. That fact alone will bring relief to a lot of people. The device is inserted via the jugular vein and interfaces with the motor cortex. It reads neural activity and converts it into control signals. It's already helping people with paralysis send texts, browse the web and interact with software using thought alone. While Elon Musk's Neuralink has demonstrated amazing results, it still requires invasive open-brain surgery, which presents significant barriers to widespread adoption.

What makes this moment historic is Apple's introduction of a new software protocol called BCI HID (Human Interface Device). This is Apple's way of telling developers and device makers: brain input is no longer fringe. It's officially part of the Apple ecosystem. The same ecosystem that powers iPhones, iPads and the new Vision Pro headset. Is brain activity now joining voice, touch and gesture as a recognized input method for devices? This would open the door to software that responds to thought, hardware that adapts to neural patterns and user experiences that center around intent rather than action. It could make accessibility smarter. Could it even make device interaction more human?

Still, context matters. Apple has been slow to enter the AI race. While companies like OpenAI and Google rapidly released generative AI tools, Apple has remained measured, if not hesitant. Their public AI strategy has lacked the urgency or visibility of competitors. The Vision Pro headset, a major foray into spatial computing, has received mixed reviews. Critics argue it lacks compelling use cases. Sales have not met expectations. Some see it as a flop. Others view it as a necessary step toward a larger ecosystem.

For users with ALS, spinal cord injuries or locked-in syndrome, the implications are life-changing. A person who cannot move or speak might now be able to control a digital environment through seamless native tools built by Apple. This matters. It's not a technology searching for a use case; it's a technology that could potentially change many lives.

But it goes deeper. Apple is known for shaping culture. The iPhone didn't just succeed because it was smart. It succeeded because it redefined what phones could be. The Apple Watch didn't just track steps. It made wearable tech feel essential. If they use the same playbook, then Apple could be signalling to the world that brain-based computing is not just possible, but desirable. This normalization is the most powerful part. It takes BCIs out of the niche and into the mainstream. Not just for medical use, but for everyone.

This partnership could lead to apps where students think their notes into existence, professionals control presentations with their minds or artists draw with neural commands. Creativity without friction? Expression without constraint?

There are obviously some serious ethical questions. As BCIs evolve, we will need rigorous safeguards. Neural data is the most intimate form of information we can generate. It must never be exploited or used without transparent consent. Apple's history of prioritizing user privacy gives some reassurance, but it will be critical to watch how this evolves. Regulators, ethicists and technologists must collaborate to write new rules for a new reality.

In education, the implications are profound. Students with learning differences could gain new forms of input. Those who struggle with motor control could gain a direct link to learning platforms. Teachers might one day read engagement not just by facial expression but by neural signals. The classroom itself could adapt to cognitive states, adjusting pace and content in real time. For entrepreneurial parents and innovative educators, this may be a frontier worth exploring. The tools our children will use in ten years may not be bound by keyboards or touchscreens. Is the new frontier of education to build learning systems that are ready for this level of interface? Systems that are ethical, inclusive and meaningful.

In Infinite Education, I warned that education systems stuck in the finite game would miss the transformation. This could be one of those moments. The shift around us is happening. The arrival of Apple into this space is a signal that the age of the interface could be ending. The age of integration has begun. Our tools are becoming extensions of our minds. Not just in metaphor, but in fact.

If Apple pulls this off, it will not be a small step; it will be a paradigm shift. One that will demand new thinking. The courage to rethink what it means to connect. What it means to learn. What it means to be human in a world where your thoughts can shape reality.

A New Study Shows How Gen AI May Transform Access To Mental Health Services

Forbes | 29-04-2025

With how rapidly AI is advancing, there is huge potential for the technology to transform the wellness and mental health services space.

In a new study published in the New England Journal of Medicine, AI Edition, scientists describe the promise of generative AI applications for mental health treatments. The primarily Dartmouth College-based scientists conducted a randomized controlled trial testing a Gen AI-powered chatbot named Therabot and its efficacy in providing timely and lasting mental health treatment. The experiment involved a national cohort of more than 200 adults who had previously been diagnosed with major depressive disorder (MDD) or generalized anxiety disorder (GAD), or who were at clinically high risk for feeding and eating disorders. Participants were assigned either to a group with access to Therabot or to a control group without access; they were then followed for outcomes across a variety of factors, including symptomatic changes, patient engagement and acceptability, and the depth of the patient-therapist relationship. The results indicated that users of the chatbot showed significantly greater reductions in symptoms of all three diagnoses relative to the control group, and overall, patients rated their therapeutic alliance and the depth of the patient-therapist relationship as, on average, comparable to that of a human counterpart.

Overall, this success reflects a growing trend in the generative AI landscape and its use for mental health applications. Studies are increasingly showing that advanced AI models may provide numerous capabilities in diagnosing mental health conditions. There are also numerous conversations about how these systems may provide modalities for mental health support to augment what human clinicians can provide.

In fact, increased access to mental health services is of paramount importance in today's healthcare landscape. The American Psychological Association reports that nearly one third of respondents in a 2022 survey indicated that they could not get the mental health services they felt they needed; this, combined with growing mental health service needs globally, does not bode well for the limited supply of mental health professionals.

Technology companies are quickly ramping up in this space to provide users with more creative ways of accessing services. Meditation phone apps have seen some of the quickest growth metrics in the past decade; take, for example, Calm, a meditation and sleep app that has hundreds of millions of downloads. Another example is the well-known meditation and wellness application Headspace, which has seen significant growth and investment in the last few years. Though these are not apps specifically focused on any one form of therapy, they aim to promote mental wellness.

Now, with how rapidly natural language processing has progressed, these systems are increasingly capable of having human-like conversations with users, including the use of slang, common parlance and other aspects of conversation which may make users more comfortable. Indeed, once this technology is perfected, it may prove a huge boon to patients worldwide.
