Educators seek to combat AI challenges in the classroom

Yahoo | 11-05-2025

Educators are reaching into their toolboxes to adapt their instruction to a world where students can use ChatGPT to churn out a five-page essay in under an hour.
Teachers are working to make artificial intelligence (AI) a force for good in the classroom instead of an easy way to cheat as they balance teaching the new technology with honing students' critical thinking skills.
'Even before the AI era, the most important grades that we'd give at the school that I led and when I was a teacher, were the in-class writing assignments,' said Adeel Khan, CEO and founder of MagicSchool and former school principal, noting the assignments worth the most are normally final exams or end-of-unit tests.
Khan predicts those sorts of exams that have no access to AI will be weighted more heavily for students' grades in the future.
'So, if you're using AI for all of the formative assignments that are helping you practice to get to that final exam or that final writing test … then it's going to be really hard to do it when you don't have AI in those moments,' he added.
The boom in generative AI began shortly after students returned to classrooms following the pandemic, with educators going from banning ChatGPT in schools in 2023 to taking professional development courses on how to incorporate AI into assignments.
President Trump recently signed an executive order to incorporate AI more into classrooms, calling it the technology of the future.
The executive order aims to have schools work more closely with the private sector to implement programs and trainings regarding AI for teachers and students.
'The basic idea of this executive order is to ensure that we properly train the workforce of the future by ensuring that school children, young Americans, are adequately trained in AI tools, so that they can be competitive in the economy years from now into the future, as AI becomes a bigger and bigger deal,' White House staff secretary Will Scharf said.
Dixie Rae Garrison, principal of West Jordan Middle School in Utah, describes herself as an early advocate for AI in schools.
She said her classrooms have had 'an overwhelmingly positive experience' with the technology.
Garrison remarked the problems with AI need to be resolved through innovative thinking, not passivity.
'There needs to be a shift from the types of questions we were asking students, so shifting away from repetitive exercises,' Garrison said, adding educators 'really have to think about the way that you're teaching students to write, the way that you're framing your questions.'
One way her school has used AI to help students is by creating more avenues for pupils to study for exams such as the AP U.S. history test.
Teachers are 'able to provide the students with more frequent opportunities to practice' by inputting the AP rubrics into a generative AI tool, which lets students get feedback 'instantaneously' on their work.
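That rubric-in, feedback-out loop is simple to prototype with a general-purpose LLM API. Below is a minimal, hypothetical sketch in Python using the OpenAI SDK; the abbreviated rubric text, the model name, the prompt wording, and the rubric_feedback helper are all illustrative assumptions, not the actual tool Garrison's school uses.

```python
# Minimal sketch of rubric-based instant feedback (illustrative, not the school's tool).
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Abbreviated, made-up stand-in for an AP-style scoring rubric.
AP_RUBRIC = """\
Thesis (0-1): Makes a historically defensible claim that responds to the prompt.
Evidence (0-2): Uses specific, relevant historical evidence.
Analysis (0-2): Explains how the evidence supports the argument."""

def rubric_feedback(student_response: str) -> str:
    """Return formative feedback keyed to each rubric row, without assigning a grade."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a teaching assistant. For each rubric row, say whether the "
                    "response meets it, quote the relevant passage, and suggest one "
                    "concrete revision. Do not assign an overall grade.\n\n" + AP_RUBRIC
                ),
            },
            {"role": "user", "content": student_response},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(rubric_feedback("The Columbian Exchange reshaped both hemispheres because ..."))
```

Withholding the grade keeps the output formative, which matches how Khan and Garrison describe AI's role here: practice and feedback on the way to assessments completed without it.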
Another strategy for preparing students to work with AI, while also easing concerns about cheating, is to assign collaborative projects.
'I think in the younger classes there is a shift towards project-based learning, and even homework is more sort of collaborative, which is harder to replicate' with AI, said Tara Chklovski, founder and CEO of Technovation.
The integration of AI varies across the United States, with about 60 percent of principals reportedly using AI tools for their work, according to a survey by RAND, a research nonprofit.
Among teachers, only 25 percent are using AI for their instructional planning or teaching, although English language arts and science instructors were twice as likely to use the technology as mathematics educators.
Educators in higher-poverty schools are also less likely to use AI, and less likely to have guidance on AI implementation, than their counterparts in lower-poverty schools, according to RAND.
That lack of guidance makes it even more difficult for educators as concerns about cheating with generative AI grow louder.
'Pragmatically, on the ground, some teachers are shifting towards more short, oral questioning of students. … In fact, for some kids — I hear this from science teachers that I work with — the ability to ask kids questions orally, instead of writing on a test, helps reveal' they might know more 'than they would have been able to express on a written test,' said Bill Penuel, a professor at the University of Colorado Boulder.
For many, it is still a challenge to balance the benefits of AI with the drawbacks in the classroom.
Most educators don't want AI 'to be used as a shortcut for thinking, but they want people to be able to use it as a tool to help them solve problems, to give them feedback on things that they're working on and writing, maybe even support folks who are multilingual learners in classrooms,' Penuel said.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

Related Articles

Godfather of AI Alarmed as Advanced Systems Quickly Learning to Lie, Deceive, Blackmail and Hack

Yahoo

A key artificial intelligence pioneer is concerned by the technology's growing propensity to lie and deceive — and he's founding his own nonprofit to curb such behavior. In a blog post announcing LawZero, the new nonprofit venture, "AI godfather" Yoshua Bengio said that he has grown "deeply concerned" as AI models become ever more powerful and deceptive. "This organization has been created in response to evidence that today's frontier AI models have growing dangerous capabilities and [behaviors]," the world's most-cited computer scientist wrote, "including deception, cheating, lying, hacking, self-preservation, and more generally, goal misalignment."

Of all people, Bengio would know. In 2018, the founder of the Montreal Institute for Learning Algorithms (MILA) was presented with a Turing Award alongside fellow AI pioneers Yann LeCun and Geoffrey Hinton for their formative roles in machine learning research, and he was listed as one of Time magazine's "100 Most Influential People" in 2024 thanks to his outsize impact on the ever-accelerating technology. Despite the accolades, Bengio has repeatedly expressed regret over his role in bringing advanced AI technology — and its Silicon Valley hype cycle — to fruition. This latest missive seems to be his most stark to date.

"I'm deeply concerned," the AI pioneer wrote in his blog post, "by the behaviors that unrestrained agentic AI systems are already beginning to exhibit." Bengio pointed to recent red-teaming experiments, or tests that push AI models to their limits to see how they'll act, showing that advanced systems have developed an uncanny tendency to keep themselves "alive" by any means necessary. Among his examples was a recent report from Anthropic detailing how its Claude 4 model, when told it would be shut down, threatened to blackmail an engineer with incriminating emails if they followed through. "These incidents," the decorated researcher wrote, "are early warning signs of the kinds of unintended and potentially dangerous strategies AI may pursue if left unchecked."

To put such behavior in check, Bengio said that his new nonprofit is building a so-called "trustworthy" model, which he calls "Scientist AI," that is "trained to understand, explain and predict, like a selfless idealized and platonic scientist." "Instead of an actor trained to imitate or please people (including sociopaths), imagine an AI that is trained like a psychologist — more generally a scientist — who tries to understand us, including what can harm us," he explained. "The psychologist can study a sociopath without acting like one."

A pre-peer-review paper Bengio and his colleagues published earlier this year explains it a bit more simply. "This system is designed to explain the world from observations," the paper reads, "as opposed to taking actions in it to imitate or please humans."

The concept of building "safe" AI is far from new, of course — it's quite literally why several researchers left OpenAI and founded Anthropic as a rival research lab. This effort seems to be different because, unlike Anthropic, OpenAI, or any other company that pays lip service to AI safety while still bringing in gobs of cash, Bengio's is a nonprofit — though that hasn't stopped him from raising $30 million from the likes of ex-Google CEO Eric Schmidt, among others.

More on creepy AI: Advanced OpenAI Model Caught Sabotaging Code Intended to Shut It Down

Superblocks CEO: How to find a unicorn idea by studying AI system prompts

TechCrunch

Brad Menezes, CEO of enterprise vibe coding startup Superblocks, believes the next crop of billion-dollar startup ideas are hiding in almost plain sight: the system prompts used by existing unicorn AI startups. System prompts are the lengthy prompts — often 5,000 to 6,000 words — that AI startups use to instruct the foundational models from companies like OpenAI or Anthropic on how to generate their application-level AI products. They are, in Menezes' view, like a master class in prompt engineering. 'Every single company has a completely different system prompt for the same [foundational] model,' he told TechCrunch. 'They're trying to get the model to do exactly what's required for a specific domain, specific tasks.'

System prompts aren't exactly hidden. Customers can ask many AI tools to share theirs. But they aren't always publicly available. So as part of his own startup's new product announcement of an enterprise coding AI agent named Clark, Superblocks offered to share a file of 19 system prompts from some of the most popular AI coding products like Windsurf, Manus, Cursor, Lovable and Bolt. Menezes's tweet went viral, viewed by almost 2 million people, including big names in the Valley like Sam Blond, formerly of Founders Fund and Brex, and Aaron Levie, a Superblocks investor. Superblocks announced last week that it raised a $23 million Series A, bringing its total to $60 million for its vibe coding tools geared to non-developers at enterprises. So we asked Menezes to walk us through how to study others' system prompts to glean insights.

'I'd say the biggest learning for us building Clark and reading through the system prompts is that the system prompt itself is maybe 20% of the secret sauce,' Menezes explained. This prompt gives the LLM the baseline of what to do. The other 80% is 'prompt enrichment,' he said, which is the infrastructure a startup builds around the calls to the LLM. That part includes instructions it attaches to a user's prompt and actions taken when returning the response, such as checking for accuracy.

He said there are three parts of system prompts to study: role prompting, contextual prompting, and tool use. The first thing to notice is that, while system prompts are written in natural language, they are exceptionally specific. 'You basically have to speak as if you would to a human co-worker,' Menezes said. 'And the instructions have to be perfect.'

Role prompting helps the LLMs be consistent, giving both purpose and personality. For instance, Devin's begins with, 'You are Devin, a software engineer using a real computer operating system. You are a real code-wiz: few programmers are as talented as you at understanding codebases, writing functional and clean code, and iterating on your changes until they are correct.'

Contextual prompting gives the models the context to consider before acting. It should provide guardrails that can, for instance, reduce costs and ensure clarity on tasks. Cursor's instructs, 'Only call tools when needed, and never mention tool names to the user — just describe what you're doing. … don't show code unless asked. … Read relevant file content before editing and fix clear errors, but don't guess or loop fixes more than three times.'

Tool use enables agentic tasks because it instructs the models how to go beyond just generating text. Replit's, for instance, is long and describes editing and searching code, installing languages, setting up and querying PostgreSQL databases, executing shell commands and more.

Studying others' system prompts helped Menezes see what other vibe coders emphasized. Tools like Lovable, V0, and Bolt 'focus on fast iteration,' he said, whereas 'Manus, Devin, OpenAI Codex, and Replit' help users create full-stack applications but 'the output is still raw code.' Menezes saw an opportunity to let non-programmers write apps, if his startup could handle more, such as security and access to enterprise data sources like Salesforce.

While he's not yet running the multi-billion-dollar startup of his dreams, Superblocks has landed some notable companies as customers, it said, including Instacart and Papaya Global. Menezes is also dogfooding the product internally. His software engineers are not allowed to write internal tools; they can only build the product. So his business folks have built agents for all their needs, like one that uses CRM data to identify leads, one that tracks support metrics, and another that balances the assignments of the human sales engineers. 'This is basically a way for us to build the tools and not buy the tools,' he said.
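To make those three ingredients concrete, here is a hypothetical sketch of how role prompting, contextual prompting, and tool use can sit together in one request to a foundation model, again in Python with the OpenAI SDK. The system prompt wording, the Dev-Agent persona, and the run_shell tool are invented for illustration; none of it is taken from the vendors' actual prompts quoted above.

```python
# Hypothetical sketch: role prompting + contextual prompting + tool use in one call.
# The persona, guardrails, and run_shell tool are illustrative, not a real product's prompt.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    # Role prompting: gives the model a consistent purpose and personality.
    "You are Dev-Agent, a careful senior software engineer working inside a real code repository. "
    # Contextual prompting: guardrails that control cost and keep the task unambiguous.
    "Only call tools when needed and never mention tool names to the user. Do not show code "
    "unless asked. Read the relevant files before editing, and do not retry a failing fix "
    "more than three times."
)

# Tool use: a schema the model can call so it can act, not just generate text.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "run_shell",
        "description": "Run a shell command in the project workspace and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string", "description": "Command to execute."}},
            "required": ["command"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Why is the test suite failing on the main branch?"},
    ],
    tools=TOOLS,
)

# The model either answers directly or returns a tool call that the surrounding
# infrastructure must execute, verify, and feed back in a follow-up request.
print(response.choices[0].message)
```

Menezes's point about the other 80 percent lands in that last step: the 'prompt enrichment' infrastructure that executes the tool call, checks the result, and decides what goes back to the model lives entirely outside the prompt text itself.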

5 AI tools I rely on more than ChatGPT

Android Authority

ChatGPT is usually the first tool that comes to mind when you start talking about AI. It sure is versatile and capable of doing a lot — often better than others in many ways. Its memory feature helps it learn about you and fine-tune responses, making it more personalized than many other (even paid) alternatives. It's also the one that opened the floodgates for a wave of AI tools designed for very specific tasks — the kind that even ChatGPT can't handle well. And that's exactly where the broader world of AI tools comes in. I've tried quite a few of these AI tools, and here are the ones that have truly embedded themselves into my daily workflow. I can't imagine getting through the day without them anymore.

Gemini

Gemini is similar to ChatGPT in many ways — and no, I don't use it to unload my life problems. But I do end up using it quite a lot, and the simple reason is its presence. Since I rely on Google products both personally and professionally, Gemini is always just there — sometimes in the sidebar, sometimes just a wake word away. On my Workspace for Business account, I often use it to check grammar, tighten language, or tweak the tone of emails. But I've also had it read contracts in Docs and point out anything working against me, or take notes during Meet calls and summarize the meeting within minutes of it ending. It even handles different accents in a single meeting quite well.

Perplexity

If I've developed muscle memory for using ChatGPT, Perplexity has quietly replaced Google Search for most of my web lookups — especially on desktop. I use it heavily for research. Instead of browsing ten websites, scrubbing YouTube videos, or combing through forums for one specific detail — like what the most popular desktop OS in Europe was in the early 2010s — I just ask Perplexity. And much like appending 'Reddit' to a search to get real user opinions instead of content written for SEO, Perplexity does that for you. It gives you a concise gist, which is handy when I'm looking for general sentiment, say, around viral news. While it saves me the search legwork, I still verify what it gives me just to make sure I'm delivering accurate information with the necessary human oversight.

NotebookLM

For the longest time, I avoided NotebookLM. It felt like a niche product good for only a few tasks — but that's exactly where its strength lies. And honestly, what a tool Google has made! You can create multiple notebooks and upload source documents, even on the free plan. It analyzes long documents and can surface different perspectives, themes, and patterns — like a real brainstorming partner. That's not what I personally use it for, though. What I love is its ability to switch tone easily between 'Guide' and 'Analyst' modes, especially when I need a more conversational or direct approach. But my favorite feature is its podcast-style audio summaries. Those have helped me digest jargon-heavy, never-ending documents while driving or cooking — and suddenly, it doesn't feel like work! NotebookLM is a true example of AI's potential — I just hope it doesn't land in the Google graveyard anytime soon.

Notion AI

Some AI tools become part of your routine without you even realizing it — and Notion AI is one of those for me. I initially dismissed it as something Notion was forcing on its users, but I've ended up using it more than I expected. I use Notion for almost all my long-form personal writing — blog posts, short stories, you name it. One thing that's always frustrated me is the lack of autocorrect. When I'm in vomit-draft mode, I don't care about typos or grammar, and cleaning them up later was always a pain. With Notion AI, I can fix all that with just a couple of clicks. I can also throw in unformatted lists (groceries, travel packing, etc.) and just ask the AI to clean them up. I even use it to brainstorm multiple angles for blog ideas, helping me avoid getting stuck on one track. It's like the second set of eyes I have always wanted for my blogs. The free plan gives me limited prompts, but since I only use Notion once or twice a day, I get by just fine.

Ideogram

An image generation tool has been conspicuously missing from this list — that's because I saved the best for last! Ideogram has been my preferred tool for image generation for one big reason: it is one of the only free tools that lets you customize and control many aspects of your generated images, including their size and aspect ratio. Most AI tools generate square images that are terrible for online use as feature images or on social media. Ideogram gives you a few free credits per week and creates some fantastic AI images using its latest-generation model with whatever customization you want. It also has something called Magic Prompt, which uses AI to create an elaborate prompt on your behalf. We all tend to under-explain to the AI the exact scene we want, but Ideogram covers you there.

AI beyond ChatGPT

We've had AI around us for years — from Gboard's smart suggestions to Google Assistant — but it wasn't until ChatGPT became a buzzword that we really started noticing generative AI in our everyday lives. It's honestly hard to believe it's been just a couple of years since its arrival. A lot of AI tools have emerged in that short span, and many of them have become an indispensable part of my life. Most importantly, I get to use them to be more productive without worrying about AI dimming my creative spark. They are far more useful companions than a chatbot that behaves like an over-eager intern who always needs direction.
