02-05-2025
Federal AI Policy Is Here. Are Universities And Schools Ready?
Artificial intelligence is now embedded in the U.S. education agenda. On April 23, 2025, President Donald Trump signed the Executive Order on Advancing Artificial Intelligence Education for American Youth, marking a major federal investment in AI literacy. The order calls for AI to be taught in K–12 classrooms, directs new funding for teacher training, and encourages public-private partnerships through apprenticeships and a national student competition. It also establishes a White House Task Force on AI Education, underscoring the federal government's commitment to preparing the next generation for an AI-shaped world.
Despite talk of advancing AI efforts, many schools and universities remain stuck at the starting gate. The consequences for students are becoming increasingly dire.
Recent labor market data reveals why this implementation gap is so concerning. The New York Federal Reserve notes that labor conditions for recent college graduates have "deteriorated noticeably," with unemployment for new grads reaching an unusually high 5.8 percent. Harvard economist David Deming suggests that the work traditionally done in entry-level positions filled by college graduates—reading and synthesizing information, producing reports, and analyzing data—is precisely the kind of work that generative AI can now perform.
Perhaps more alarming is the "recent-grad gap"—the employment advantage young college graduates have historically held over the overall workforce. This gap hit an all-time low in early 2025, meaning today's graduates face a job market that's relatively worse for them than at any time in the past four decades. This could be an early signal that businesses are replacing entry-level positions with AI tools, particularly in law, consulting, and tech sectors.
Against this backdrop, educational institutions' slow response to AI integration appears increasingly troubling. A global survey by UNESCO in 2024 found that fewer than 10% of education institutions have developed formal policies on generative AI. Among those that do, approximately half provide clear guidance, while the other half leave decisions to individual departments and teachers. Almost 20% of respondents weren't even sure whether such policies existed. This fragmentation is a missed opportunity to define the role of AI in education before it defines us.
Higher education faces a similar conundrum. According to Inside Higher Ed and Generation Lab's 2024 Student Voice survey, 31% of undergraduates don't know when or how to use generative AI tools like ChatGPT in their coursework. Meanwhile, EDUCAUSE's 2025 AI Landscape Study shows that although 57% of colleges now consider AI a strategic priority, fewer than 40% have formal use policies—and most of those focus only on plagiarism rather than broader curricular or community implications.
These gaps come at a time when student adoption of AI is near-universal. The Higher Education Policy Institute's 2025 Student Generative AI Survey, conducted in the UK, found that 92% of students now use AI in some form, up from 66% in 2024, with 88% reporting they've used generative AI for assessments. Another survey, by the Digital Education Council, found 86% of students already using AI in their studies. Yet the same surveys reveal that only 36% of students have received support from their institution to develop AI skills, despite overwhelming student belief that these skills are essential.
This disconnect between institutional hesitation and student innovation is exactly why I developed the AI Readiness Checklist. Rather than starting with compliance or risk, the checklist is built to prompt strategic reflection and action across five domains that shape campus culture: policy governance, faculty training, student literacy, technology oversight, and community trust. It's not meant to prescribe a one-size-fits-all solution, but to support schools in assessing where they are—and charting a path toward where they want to be.
For example, on the policy front, few universities have moved beyond reactive, classroom-based rules. Yet forward-looking institutions like Arizona State University are embedding AI into teaching, research, advising, and operations. ASU became the first higher education institution to collaborate with OpenAI in January 2024, and through its AI Innovation Challenge has already launched over 200 projects, including an AI tutor that supports psychology students with real-time guidance and feedback. In its press release, ASU President Michael M. Crow emphasized that "ASU recognizes that augmented and artificial intelligence systems are here to stay" while committing to "participate directly in the responsible evolution of AI learning technologies."
Other institutions are also responding with vision. Georgia Tech hosts an AI hub called Tech AI to integrate research and practice across disciplines. Stanford University's Institute for Human-Centered Artificial Intelligence exemplifies how higher education can lead the way in integrating ethics, policy, and interdisciplinary collaboration into AI development. These models highlight the critical role universities can play in aligning AI innovation with the public good.
And yet, the gap between what's possible and what's common at most institutions remains vast. Many campuses lack clarity on even basic questions: Who oversees AI decision-making? How are tools vetted? Are there meaningful opportunities for students and faculty to engage with the technology beyond fears of cheating?
To close this readiness gap, institutions must take a multi-dimensional approach—one that begins with dialogue. Faculty need support in learning how to use AI tools and in helping students navigate them ethically. Administrators must develop policies that go beyond enforcement and instead reflect community values. IT teams must scrutinize vendor contracts for hidden AI functions, and governance leaders must consider the implications of autonomous ("agentic") AI that can simulate decision-making or human presence.
Crucially, this work must be transparent. Colleges that fail to communicate their approach risk not just backlash, but erosion of trust. The AI Readiness Checklist includes questions like: Does your institution offer plain-language explanations of AI policies? Are students and faculty invited to provide feedback? Is leadership visibly engaged in the conversation?
The answers to these questions—and the steps that follow—can define whether an institution remains competitive in the AI age.
This is a pivotal moment. As McKinsey & Company's 2025 AI Workforce Report highlights, AI technologies are "already highly capable and rapidly advancing," with profound implications for workforce preparation. Their research indicates that 71% of employees trust their employers to act ethically as they develop AI—higher than their trust in universities (67%), large tech companies (61%), or startups (51%). This places a significant responsibility on educational institutions to help students develop both the technical skills and ethical frameworks needed to lead in an AI-driven workplace.
If higher education is to meet this moment with purpose, we need more than policies. We need principles. We need a shared vision of what it means to lead with AI, and the courage to build structures that support it. For a generation of students entering an increasingly AI-automated job market, the stakes couldn't be higher.