
Number of Students Using AI for Schoolwork Surges by Double Digits
The adoption of artificial intelligence (AI) in U.S. classrooms has accelerated rapidly over the past year, with double-digit growth in the number of students using AI tools for schoolwork, according to a new report from Quizlet.
"With the support of AI tools, students can reclaim time and streamline tasks, making their value immediately clear, Quizlet's CEO told Newsweek in part.
Why It Matters
Artificial intelligence has surged in popularity across the United States and worldwide.
While some companies are integrating the tools to improve productivity, students are using the technology to their own advantage: conducting research for papers, generating baseline drafts of essays, or treating it as a tutor-like service for topics they find unclear.
What to Know
Quizlet's 2025 How America Learns report revealed that 85 percent of teachers and students (ages 14-22) now use AI in some capacity, marking a substantial increase from 66 percent in 2024. Among students, 89 percent reported using AI for schoolwork, compared to just 77 percent in the previous year.
"We also know that students today are juggling more than ever. In particular, college students are significantly more likely than high school students (82 percent vs. 73 percent) to have sacrificed sleep, personal time, or extracurricular activities because of homework," Kurt Beidler, CEO of Quizlet, told Newsweek. "With the support of AI tools, students can reclaim time and streamline tasks, making their value immediately clear."
The Pew Research Center's January 2025 survey echoes this trend, finding that 26 percent of U.S. teens had used ChatGPT for schoolwork—double the 13 percent observed in 2023. Usage is highest among older students, Black and Hispanic teens, and those most familiar with AI tools.
Students are leveraging AI for a variety of academic tasks. Quizlet's survey found the most common uses are:
Summarizing or synthesizing information (56 percent)
Conducting research (46 percent)
Generating study guides or materials (45 percent)
Teens support using AI tools like ChatGPT primarily for researching new topics (54 percent find it acceptable), though fewer approve of its use for math problems (29 percent) or essay writing (18 percent), according to Pew.
Stock image of a child using a smartphone while doing homework.
"The growing adoption of AI in education signals a lasting trend toward greater use of these new technologies to enhance the learning journey by making it more efficient and effective," Beidler said.
"Just as the adoption of AI continues to increase, we anticipate the future of education to become more personalized. We're already seeing how AI can adapt in real time—identifying knowledge gaps, adjusting difficulty levels, and delivering the right content at the right moment to help students master material more efficiently."
Despite rapid adoption, opinion on AI's impact on education remains mixed. According to Quizlet's findings, only 40 percent of respondents believe AI is used ethically and effectively in classrooms, with students less likely to agree (29 percent) compared to parents (46 percent) and teachers (57 percent).
"While privacy and security are vital concerns, we also need to address the deeper cognitive and developmental risks posed by AI in education," Leyla Bilge, Global Head of Scam Research for Norton, told Newsweek.
"Easy access to instant answers and AI-generated content can lead to intellectual passivity—undermining curiosity, problem-solving, and critical thinking. Overreliance on AI shortcuts means students may miss essential learning processes, weakening foundational skills like reading comprehension, analytical reasoning, and writing."
Demographic differences also persist: Pew's data shows that awareness and usage of ChatGPT are higher among white teens and those from wealthier households, while Black and Hispanic teens are more likely than their white peers to use it for schoolwork.
K-12 educators remain cautious. A 2023 Pew survey reported that 25 percent of public K-12 teachers believe AI tools do more harm than good, with more pessimism among high school staff. Still, many see benefits—especially in supporting research and personalized learning—if managed responsibly.
What People Are Saying
Kurt Beidler, CEO of Quizlet, said in the release: "As we drive the next era of AI-powered learning, it's our mission to give every student and lifelong learner the tools and confidence to succeed, no matter their motivation or what they're striving to achieve. As we've seen in the data, there's immense opportunity when it comes to career-connected learning, from life skills development to improving job readiness, that goes well beyond the classroom and addresses what we're hearing from students and teachers alike."
Leyla Bilge, Global Head of Scam Research for Norton, told Newsweek: "The sharp rise in AI adoption across classrooms tells us that what was once considered cutting-edge is now becoming second nature. This isn't just students experimenting, but it's educators and parents recognizing AI as a legitimate tool for learning and support. Whether it's drafting essays, solving math problems, or translating concepts into simpler terms, AI is making education more accessible and adaptive."
What Happens Next
As digital learning expands, Quizlet's report notes that more than 60 percent of respondents want digital methods to play an equal or greater role than traditional learning, citing flexibility and accessibility. Gaps persist, however: only 43 percent say students with learning differences have equal access.
Looking ahead, the top skills students, parents, and educators want schools to develop include critical thinking, financial literacy, mental health management, and creativity—areas where AI-powered tools could play a growing role.
"Digital literacy must evolve. Students need to critically evaluate AI outputs, understand their limitations, and learn how to protect their personal data. Most importantly, children should engage with developmentally appropriate AI tools, those that encourage exploration and responsible use, not just efficiency," Bilge said.
"Like age-appropriate books, AI systems for kids should align with educational and cognitive developmental goals."