Your kid is using AI for homework. Now what?
When your kids head back to school, there's a good chance they plan to use artificial intelligence to get their schoolwork done.
Twenty-six percent of teenagers ages 13 to 17 said they had used ChatGPT for their schoolwork in a 2024 Pew Research Center survey. AI chatbots have become more prevalent since then, so the number may be higher now.
As a professor, I have a word for when students ask chatbots to write their papers: It's called cheating. Most importantly, it cheats them out of an opportunity to learn. Unfortunately, it's easy for kids to get away with doing this because tools for detecting AI-generated content aren't reliable. So when educators grade papers, we can't always tell whether AI was used.
That's why it's so important for parents to talk to their kids about when they should — and shouldn't — use AI this school year.
'Make sure they're using AI as a learning tool instead of a shortcut,' said Robbie Torney, senior director for AI programs at Common Sense Media, a nonprofit that advocates for healthy media options for children.
Here's how to do that.
Use AI to tutor and brainstorm, not to think or write
First, talk to kids about why their goal should be 'to learn and grow,' Torney said. If AI does their work for them, it 'takes away that opportunity.'
However, AI can help them learn. Torney suggested using it as a tutor. 'It can be great for explaining difficult concepts or helping them get unstuck, but original thinking and work should be theirs,' he said.
AI can also help brainstorm ideas, Torney said, but then students should do the thinking and writing on their own.
It's also worth explaining why these rules matter. 'Our brains are like a muscle,' Torney said. 'Kids won't learn skills unless they practice them.'
It's ideal to agree on these boundaries before children use AI, Torney said, but then 'check in regularly' to make sure AI tools aren't replacing their learning.
Don't believe everything AI tells you — and figure it out together
Chatbots tell users things that aren't true. It's called hallucinating, and it happens all the time.
Other times, chatbots just miss things. For example, recently my students submitted papers about (what else?) AI. A number of them were uncannily similar, which always sets off alarm bells that AI could have generated them. In this case, multiple students falsely asserted there isn't any federal legislation to help victims of nude deepfakes — even though the Take It Down Act became law in May.
So it's important not to accept AI answers at face value but to teach kids how to fact-check the information they receive. One way to do so, Torney said, is to take materials they get at school — on, say, the subject of photosynthesis — and compare those facts with what chatbots tell them about it.
It's great to do this experimenting together. And parents shouldn't feel intimidated about doing this because they don't fully understand how AI works. Most people don't.
'You don't have to be an AI expert to help your kids use AI wisely, and staying involved in asking questions and doing the exploration together can teach them the skills that they'll need for the future,' Torney said.
That's important because, like it or not, chatbots are probably here to stay. 'Accessing information through AI interfaces is going to become increasingly common for kids,' Torney said, 'the same way that accessing information online has already become common for kids.'
AI can help with schoolwork, not personal advice
It's also important to teach kids that they shouldn't get personal advice from chatbots or share private information with them.
It's easy for kids to forget AI chatbots are technology, Torney said. 'We know that younger children often can't tell the difference between fantasy and reality, making them more likely to think that AI is a real person or a friend,' he said.
One concern is that some chatbots, which are designed to carry on romantic conversations, could engage in sexual talk with kids. Chatbots could also give kids bad advice, encourage harmful thinking or even come to replace relationships with other people.
So, it's a good idea to remind children that AI isn't human. If a chatbot gives an answer that could make it seem otherwise, Torney said parents can say something like, 'Did you notice how the AI said, "I like your idea"? That's just programming. The AI doesn't think anything about your idea.'
Kids could also inadvertently make private information public through chatbots, Torney warned. If a child uploads a picture of your house and the system uses it as part of a training set, it could be shown to other users, he said. It's therefore important to talk about why they should never share personal information with AI tools.
Finally, set clear family rules for when chatbots are used. Consider allowing kids to use chatbots in places such as the family room, but not in bedrooms where they can't be supervised, Torney said. And establish tech-free times — such as during meals and before bed — when no one is on technology, he suggested.
Your kids are probably going to try to use AI to help with their schoolwork — if they haven't already. Chatbots have become so ubiquitous that understanding how to use them is a life skill for our children.
That's why we should teach kids to use AI to help them learn, not to do their work for them — and to question everything chatbots tell them. One way to teach this is by using chatbots together.
Kids should also know that they shouldn't turn to AI platforms for advice. Even if they sound human, they aren't real — but the consequences of letting AI get in the way of their learning certainly would be.