At nearly 400 pages, the 2026 Stanford AI Index is the ninth iteration of an annual report from the Stanford Center for Human-Centered Artificial Intelligence. I have written about the report previously and enjoy reading its insightful, statistic-driven narrative.
The introductory comments from the co-chairs note that mass adoption of AI has occurred faster than the adoption of the personal computer or the internet. They also report that leading AI companies are reaching revenue scale faster than most technology companies have, and that global corporate investment in AI more than doubled in 2025.
Unlike some of my previously published summaries, I chose to focus on a single chapter, Chapter 7 – Education. That’s not to say that I skipped over the other chapters. It’s just a lengthy and weighty publication, and most of my readers are interested in education.
The education material in the 2026 Stanford AI Index is one of the report’s strongest sections. Many of us have heard that education’s AI problem is mainly about cheating, productivity, or national competitiveness. The report’s chapter about education deals with all three as consequences of institutional lag.
That fits the report’s broader picture: AI capability and adoption are accelerating quickly. Organizational adoption of AI has reached 88 percent, and Stanford’s overview says four in five university students now use generative AI, even as responsible AI measurement and governance trail behind deployment. Education is where the report’s main theme becomes concrete. The AI tools are already here, but the systems meant to govern them are not.
AI Realignment of Education Choices
One of the chapter’s biggest strengths is conceptual clarity. The authors separate “AI in education” from “AI literacy” and from “AI education.” Those phrases are frequently blurred together, yet they point to three different goals:
- using AI tools in classrooms
- giving everyone a grounded understanding of AI and its risks
- teaching the technical skills required to build AI systems
The report is also candid about its own evidence base. The authors note that data on AI education is fragmented, lagging, and incomplete; in the past, the report relied on computer science education as a proxy. This year, the authors added AI-relevant majors as defined by the January 2025 White House AI Talent Report. The authors write that education systems are being pushed to redesign around AI before they can reliably measure access, curriculum quality, or outcomes.
At the postsecondary level, the chapter captures a real realignment. U.S. undergraduate computer science enrollment at four-year universities fell 11 percent between 2024 and 2025, but AI-related software degrees kept growing, especially at the master’s level. The report notes an 82 percent increase in AI-software-related master’s graduates between 2022 and 2024, including 17 percent growth from 2023 to 2024, while AI-hardware-related degrees were flat or declining.
It’s important to note that most students in AI-related graduate programs are male non-U.S. residents. The report’s authors write that the Trump Administration’s crackdown on student visas may affect these enrollments in the near term.
The change in program majors suggests students are not abandoning computing so much as moving toward narrower, more explicitly AI-linked credentials. Stanford links this to the wider labor-market story that entry-level software work appears to be under pressure even as overall AI hiring grows. For colleges, that implies a shift from offering generic technical training toward helping students combine AI capability with domain knowledge and judgment.
The talent-pipeline data points in the same direction. The number of new AI PhDs in the United States and Canada rose 22 percent from 2022 to 2024, and the report notes that more of that growth went to academia than to industry. Industry still employed the largest share of new AI PhDs in 2024, at 62.75 percent, but that was down from a 77 percent peak in 2022, while academia’s share rose to 31.59 percent.
For several years, the dominant worry was that universities were becoming mere feeders for frontier labs. This year’s data suggests something more mixed: academia may be regaining some pull, even if industry remains the main destination. That matters for who teaches future cohorts and where public-interest expertise in AI still lives.
AI Adoption is Student-Led
The heart of the chapter is student use. The report illustrates how normalized AI already is. Among university students surveyed across 15 countries, 80 percent said they had used generative AI to support learning in 2025, double the 40 percent reported in 2023. Usage ranged from 95 percent in Indonesia to 67 percent in both the United States and the United Kingdom, and 56 percent of users said they asked AI questions at least once a day.
In U.S. middle and high school populations, the report cites estimates ranging from 50 percent to 84 percent using AI for school-related tasks. High school students report using it most for research, essay revision, and brainstorming. University students are more likely to use it to understand a concept, which the report identifies as their top use. That is different from the claim that student AI use is mostly about evasion. Much of it appears closer to study support and academic scaffolding.
What Happens When Institutions Lag
Where the chapter becomes alarming is governance. Only about half of U.S. middle and high schools have policies on AI use. Of those, 28 percent allow AI in some circumstances, and 22 percent prohibit it. Yet policy presence is not policy clarity. Only 36 percent of students described their school’s AI policy as extremely clear, and 47 percent said they wanted to use AI for schoolwork but were unsure whether it was allowed. Teachers were even more negative: just 6 percent said their schools had clear, comprehensive policies.
The report’s most important educational insight may be that ambiguity is itself a system design choice. In practice, unclear rules shift judgment to individual teachers and students, making the actual policy informal and unequal. Since schools with AI policies are more likely to be wealthier and more urban, that ambiguity also becomes an equity issue.
The equity problem appears again when the chapter uses computer science (CS) access as a proxy for AI readiness. In 2025, 91 percent of large U.S. high schools offered foundational CS, compared with 77 percent of medium-sized schools and just 44 percent of small ones. Suburban schools were more likely to offer it than rural or urban schools, and non-Title I schools slightly more than Title I schools.
Access also varied by race and ethnicity. Asian students had the highest access to foundational CS at 91 percent, while Native American students had the lowest at 70 percent. At the same time, access should not be confused with participation. Based on data from 42 states, only 6.1 percent of students were enrolled in CS in 2024-25. This is the chapter’s clearest warning against simple “teach AI everywhere” rhetoric. Standards matter, but access, staffing, scheduling, and actual student uptake matter more.
The policy section is both encouraging and sobering. As of January 2026, 30 U.S. states had issued guidance on AI in education. Seventeen states had clarified that computer science is foundational to AI, and five had allocated specific professional-development funding for AI education. Forty-five states have adopted K-12 CS standards, but most include AI only minimally, usually at the high school level. Ten states make no specific mention of AI.
New Architecture is Emerging
Revised CSTA (Computer Science Teachers Association) standards are due in summer 2026, and the April 2025 executive order on AI education created a federal task force. It pushed agencies to prioritize AI in grants, teacher preparation, apprenticeships, and workforce pathways. The report’s authors are persuasive when they argue that the real problem is implementation. State guidance is largely nonbinding and decentralized; teacher training lacks state-level standards or stable funding; and AP Computer Science still lacks AI-specific content. The United States government is developing policy language faster than its capacity to implement it.
The global picture sharpens that point. The report estimates that 93 percent of countries taught CS in 2025, but only 30 percent mandated it, while the remaining 63 percent made it available in at least some schools without requiring it. China and the UAE stand out because they moved beyond rhetoric and mandated AI education for the 2025-26 school year, with grade-level curricula that begin with literacy and move toward system design and ethics.
Elsewhere, the pattern is more tentative. South Korea introduced AI textbooks, then reversed course amid pushback. Greece partnered with OpenAI to train teachers, and Estonia launched a pilot serving 20,000 students and 3,000 teachers. The chapter does not confuse pilots or procurement with real curriculum integration, and it repeatedly reminds the reader that global education data is messy. Computer science (CS) and information and communications technology (ICT) are often conflated with digital literacy, and major countries are missing from some datasets. That makes the chapter’s claims more credible, not less. Its picture is directional, careful, and useful.
My Perspective
This is one of the best chapters in the 2026 report. The authors demonstrate that education has entered the AI era not through orderly reform but through mass student adoption, patchy policy, uneven teacher preparation, and widening pressure to upskill across the lifespan.
The closing section on skill acquisition reinforces that point. AI skills are increasingly being built outside formal education, through certificates, online learning, and work itself, and those skills remain unevenly distributed across countries and genders.
For schools and universities, the decision to “allow AI” has already been forced on them. The question that needs to be answered now is how to offer what informal AI learning cannot: shared standards, expert guidance, equitable access, stronger assessment, and a civic framework for using powerful tools well. That, to me, is the central educational insight of this year’s AI Index.
Schools and universities that work to establish best practices in AI policy and, at the same time, encourage faculty to incorporate lessons and experiences that use AI will forge ahead of those waiting on the sidelines “to see where this settles out.” The evidence is conclusive. Use of AI tools will remain on an upward trajectory. Employers will seek college graduates who can effectively use AI tools. Parents will expect their children to be prepared for the AI-enabled workplace of the future. It’s time to narrow the utilization gap between faculty and students because AI advancements are not slowing down.