Can Technology Transform the Higher Ed Accreditation Process?

The American Enterprise Institute, a DC-based think tank with an education practice, recently issued a series of papers about higher education accountability. The One Big Beautiful Bill, passed by Congress and signed by the President, included provisions about measuring higher education accountability. I thought I’d read and review a few of the papers in the series that address accreditation.

Technology Can Transform Accreditation

Alison Griffin authored the first paper in the series, titled “How Technology Can Transform the Higher Education Accreditation Process and Drive Continuous Improvement at Colleges and Universities.” Ms. Griffin served as a policy advisor to John Boehner when he chaired the U.S. House Committee on Education and the Workforce.

Ms. Griffin cites President Trump’s April 2025 Executive Order on accreditation as identifying several shortcomings that limit the effectiveness of accrediting agencies and their reviewers. One of the new principles of student-oriented accreditation recommended in the Executive Order is to “increase the consistency, efficiency, and effectiveness of the accreditor recognition review process, including through the use of technology.”

Noting that the review process required by the U.S. Department of Education can require an agency to submit as many as 800,000 pages of documents per review cycle, Ms. Griffin writes that a single reviewer reading 40 pages per hour would take five years to review all the documentation, an impossible task given the five-year review cycles of most agencies.

Furthermore, Ms. Griffin writes that most agencies’ institutional review processes require institutions to submit documents for a comprehensive review every five to ten years. Problems at an institution can develop and persist for years before the next review. The metrics institutions must submit often lack an empirical foundation, and some accrediting agencies rely on qualitative rubrics with aspirational language about “appropriate resources” or “sufficient support.”

Healthcare Focused on Quality Improvement – Can Higher Ed Follow?

Ms. Griffin points to the healthcare industry as a sector that has successfully implemented continuous quality-monitoring systems providing real-time insights into patient care outcomes. Hospital accreditation frequently incorporates “ongoing data analysis rather than relying on scheduled site visits, leveraging data analytics to track key performance indicators and allowing the early identification of quality concerns.”

While it’s great to laud the changes in the Joint Commission’s accreditation of hospitals and other providers, there is a vast difference between evaluating outcomes in real time for a hospital with an average length of stay of 5.2 days and doing so for a college or university with a median time to completion of 52 months. Grading practices also vary widely from course to course at most colleges: some faculty require weekly quizzes, a mid-term exam, and a final exam, while other courses ask students only to read the assignments, attend class, and submit a single paper at the end of the term.

Thanks to regulations on Medicare reimbursement, a standardized system of classifying hospital cases was developed to identify the “product” the patient received, such as an appendectomy. Diagnosis-related groups (DRGs) are the basis for reimbursing hospitals based on the product delivered, not direct costs. Developed at Yale University, DRGs set an expected length of stay for each case based on the patient’s age, sex, diagnosis, and complications.

Imagine if a similar system were developed for college degrees and other credentials. Ms. Griffin cites a dashboard with 10 Key Performance Indicators used by Johns Hopkins Hospital to identify bottlenecks and improve patient outcomes. If universities used a similar system, it might be the first time faculty were evaluated on their ability to help a student graduate rather than on their ability to conduct research, win grants, and publish papers in peer-reviewed journals. Unlike a hospital stay measured over a period of days, the “product” here would be a collective group of courses completed over years.
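To make the analogy concrete, here is a minimal sketch of what a DRG-style grouping for credentials might look like. The credential groups, adjustment factors, and field names are entirely hypothetical; they are not drawn from Ms. Griffin’s paper, any real accreditor, or any payer system.

```python
# Hypothetical sketch of a DRG-style "credential-related group" lookup.
# None of these groups, adjustment factors, or field names come from
# Ms. Griffin's paper or any real accreditor; they only illustrate the analogy.

from dataclasses import dataclass

# Baseline expected time-to-completion (months) for illustrative credential groups.
BASELINE_MONTHS = {
    "associate-transfer": 24,
    "bachelor-business": 48,
    "bachelor-engineering": 54,
    "certificate-short-term": 9,
}

@dataclass
class StudentProfile:
    credential_group: str
    part_time: bool
    transfer_credits: int  # credits accepted at entry

def expected_months(profile: StudentProfile) -> float:
    """Expected time-to-completion, adjusted the way DRGs adjust expected length of stay."""
    months = float(BASELINE_MONTHS[profile.credential_group])
    if profile.part_time:
        months *= 1.6                               # assumed part-time adjustment
    months -= profile.transfer_credits / 15 * 4     # assume 15 credits ~ one 4-month term
    return max(months, 0.0)

print(expected_months(StudentProfile("bachelor-business", False, 30)))  # 40.0
```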

Ms. Griffin cites the financial services industry as a sector that has used technology to monitor transactions and ensure compliance with myriad U.S. and international regulations. Several higher education-related software systems (Civitas Learning licenses one that I am familiar with) monitor students’ attendance, engagement, and participation in courses through an interface to the Learning Management System (LMS). These systems provide a faculty member or academic advisor with a dashboard of early warning signals when a student is likely to fall behind grade-wise, disengage, drop the class, or, even worse, drop out of college. That said, the activities in a single course may not be relevant indicators of whether the student will successfully complete a degree program over four, five, or even six years.
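For readers unfamiliar with these dashboards, the sketch below shows the kind of signal they surface. It is not Civitas Learning’s model or any vendor’s API; the thresholds and field names are invented for illustration only.

```python
# Generic early-warning sketch built from LMS activity data. This is NOT
# Civitas Learning's model or any vendor's API; the thresholds and field
# names are invented purely to illustrate the kind of signals surfaced.

from datetime import date

def engagement_flags(last_login: date, today: date,
                     assignments_submitted: int, assignments_due: int,
                     current_grade_pct: float) -> list[str]:
    """Return human-readable warning flags for an advisor dashboard."""
    flags = []
    if (today - last_login).days > 7:
        flags.append("no LMS login in over a week")
    if assignments_due and assignments_submitted / assignments_due < 0.75:
        flags.append("fewer than 75% of assignments submitted")
    if current_grade_pct < 70:
        flags.append("current grade below 70%")
    return flags

print(engagement_flags(date(2025, 9, 1), date(2025, 9, 15), 3, 6, 64.0))
```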

Can Technology Enhance Accreditation?

Ms. Griffin writes that technology can “transform accreditation from a resource-intensive documentation exercise to a system of continuous quality monitoring and improvement.”

Her first example is financial monitoring.

Ms. Griffin describes a small private college showing signs of “financial trouble”: cash flow has declined over three consecutive quarters, the student-faculty ratio is rising as departing faculty are not replaced (presumably because of the declining cash flow), and student retention rates are starting to drop. She argues that, in a continuous monitoring environment, an accreditor could initiate a more focused review.

All institutional accreditors require that institutions participating in the federal financial aid program submit audited financial statements annually, including the financial responsibility composite scores. I am not aware of a quarterly cash flow statement submission requirement from either the Department of Education or an accreditor. Would one be helpful? Probably. Would it be meaningful? Only if the reviewer (AI or human) understood that the cash flows of traditional colleges vary widely based on the timing of tuition deposits and upfront payments for semester tuition and room and board plans. Given the differences in college size as well as in payor sources (full-pay, financial aid, employer), interpreting an individual institution’s cash flows may require more than a data analytics tool.
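A toy example of the seasonality problem: the figures below are invented, but they show how a simple quarter-over-quarter decline rule can fire repeatedly at a tuition-driven college whose same-quarter, year-over-year cash flow is actually stable.

```python
# Toy example of the seasonality problem. The figures are invented; the point
# is that a quarter-over-quarter decline rule fires repeatedly at a
# tuition-driven college whose same-quarter, year-over-year cash flow is stable.

quarterly_cash_flow = {            # hypothetical net cash flow, $ millions
    "2023Q3": 9.0, "2023Q4": 3.0, "2024Q1": 7.5, "2024Q2": 1.0,
    "2024Q3": 9.2, "2024Q4": 3.1,
}

quarters = list(quarterly_cash_flow)
sequential_declines = sum(
    quarterly_cash_flow[quarters[i]] < quarterly_cash_flow[quarters[i - 1]]
    for i in range(1, len(quarters))
)

# Same quarter, year over year, for the two most recent quarters.
yoy_change = {
    q: round(quarterly_cash_flow[q] - quarterly_cash_flow[q.replace("2024", "2023")], 1)
    for q in ("2024Q3", "2024Q4")
}

print(sequential_declines, yoy_change)   # 3 {'2024Q3': 0.2, '2024Q4': 0.1}
```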

Changes in student-faculty ratios may or may not be meaningful. I am not aware of accrediting bodies that require institutions to submit these ratios with their annual financial statements, nor am I aware of annual retention data submission requirements. Neither student-faculty ratios nor retention rates are likely to shift much in a single year; changes in these metrics are more significant over a longer period. Both metrics can be calculated from, or are already reported in, the IPEDS data collected annually by the Department of Education and displayed on the College Navigator dashboard. Unfortunately, that data is often two, and sometimes three, years behind.

Just before the COVID-19 pandemic upended higher education, Bob Zemsky, Susan Shaman, and Susan Campbell Baldridge published The College Stress Test. The authors proposed calculating a financial stress test score for every college using IPEDS data collected on a rolling basis over an eight-year period.

Two critical measures used for each college were (1) the annual enrollment of first-year students and (2) the returning enrollment of last year’s first-year students. Two financial measures were also proposed for their stress score. One was the change in average net price (market price) over an eight-year period. A declining net price was a warning signal. The other was the ratio of endowment balance to total institutional expenses.

The authors published detailed instructions for calculating the stress score for all institutions. At least one institutional accrediting body uses this methodology to calculate an annual score for all its members. If the metrics indicate declining operational and financial health, the agency contacts the institution. While the annual audits are still collected, the stress test calculations provide stronger indications that an institution is at financial risk.
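As a rough illustration of this kind of multi-year monitoring, the sketch below turns eight years of IPEDS-style metrics into simple warning flags. The data series and the flag logic are placeholders of my own, not the authors’ published scoring methodology.

```python
# Simplified illustration of multi-year trend monitoring in the spirit of
# The College Stress Test. The series and flag logic below are placeholders
# of my own, NOT the authors' published scoring methodology.

from statistics import linear_regression

def trend_slope(values: list[float]) -> float:
    """Least-squares slope of a metric across consecutive years."""
    years = list(range(len(values)))
    slope, _intercept = linear_regression(years, values)
    return slope

# Hypothetical eight-year series for one institution.
first_year_enrollment = [520, 510, 495, 470, 460, 455, 430, 410]
returning_first_years = [0.78, 0.77, 0.75, 0.74, 0.72, 0.71, 0.70, 0.68]
average_net_price = [21500, 21300, 21000, 20400, 20100, 19800, 19200, 18900]

flags = []
if trend_slope(first_year_enrollment) < 0:
    flags.append("declining first-year enrollment")
if trend_slope(returning_first_years) < 0:
    flags.append("declining first-to-second-year retention")
if trend_slope(average_net_price) < 0:
    flags.append("declining average net price")

print(flags)
```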

Can more accreditors incorporate an annual operational and financial metrics review like the College Stress Test score? Absolutely. Are quarterly data submissions and reviews feasible? Not at this time, in my opinion. Quarterly data submissions are currently unavailable and perhaps insignificant, and annual shifts in specific metrics like new student admissions and first-to-second-year retention are more meaningful when measured over time, as in the eight-year rolling calculation posited by Zemsky and his co-authors.

Another point to note is that Zemsky et al. grouped institutions into five segments, organized from highest to lowest cost. They wrote that the schools charging the least lost the most students. Enrollment size mattered almost as much as price, with smaller institutions losing the most students.

Artificial Intelligence Tools Could Help Analyze Data

AI tools are indeed capable of “analyzing vast amounts of data to identify patterns, correlations, and anomalies that might not be apparent through conventional analysis.” However, I believe that using AI to analyze most of the relevant data on a college’s viability or “stress” is not necessary at this time.

It’s even more important to know that many databases must be “scrubbed” into a “clean” dataset before AI tools can analyze them. A measurement as simple as a grade point average may not be comparable across institutions, depending on how colleges record pluses and minuses, pass/fail grades, and grades from courses transferred to the institution.
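A small example of the scrubbing involved: the sketch below puts letter grades on a common 4.0 scale and excludes pass/fail and transfer credits. The grade mappings and record fields are hypothetical; real registrar data would need institution-specific rules.

```python
# Sketch of the kind of "scrubbing" needed before GPAs from different colleges
# can be compared. The grade mappings and record fields are hypothetical;
# real registrar data would require institution-specific rules.

GRADE_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
                "C+": 2.3, "C": 2.0, "D": 1.0, "F": 0.0}

def normalized_gpa(records: list[dict]) -> float:
    """GPA on a common 4.0 scale, excluding pass/fail and transfer credits."""
    points = credits = 0.0
    for r in records:
        if r.get("pass_fail") or r.get("transfer"):
            continue                  # not comparable across institutions; exclude
        points += GRADE_POINTS[r["grade"]] * r["credits"]
        credits += r["credits"]
    return round(points / credits, 2) if credits else 0.0

transcript = [
    {"grade": "A-", "credits": 3},
    {"grade": "B+", "credits": 3},
    {"grade": "P", "credits": 3, "pass_fail": True},
    {"grade": "A", "credits": 3, "transfer": True},
]
print(normalized_gpa(transcript))   # 3.5
```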

AI tools are overkill at the present time. Let’s use the data institutions already submit to IPEDS on a regular basis (there are several submissions per year per institution) before we propose directly connecting every LMS and student information system (SIS) to a master database without first standardizing the data uploaded.

Focus on Economic Outcomes

Ms. Griffin writes that the April 2025 Executive Order emphasizes the importance of students’ outcomes in accreditation. I agree with her that “advanced analytics platforms could help accreditors develop metrics for value-added earnings to measure wage gains that institutions generate for students relative to attendance costs, creating balanced incentives to lower costs, improve graduation rates, and raise earning potential.”

However, there are major roadblocks to her recommendation. The first is that the IRS does not report earnings data to each institution; it reports the data to the Department of Education. The second is that earnings data is only reported by program for students who borrow federal loan funds or receive Pell Grants. Full-pay students and those whose tuition is paid by employers are not included. Earnings for graduates of programs with fewer than 60 graduates over a reporting period are not included either.

The significance of the second point can be illustrated with a chart reflecting the percentage of students who borrow at Texas colleges and universities. For the 2019-2020 period, 52% of Texas college graduates had student loan debt. The percentage ranged from a low of 14% at the University of Houston – Downtown (perhaps many students there have their employers pick up the tab) to a high of 83% at Tarleton State University.

Average student loan debt per graduate ranged from a low of $4,902 at the University of Houston – Downtown to a high of $49,287 at Texas Christian University. The overall average for Texas college graduates was $26,273.
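A quick back-of-the-envelope illustration of how thin the earnings coverage can be: using the borrowing shares above and an invented graduating class of 1,000 (and ignoring Pell-only recipients, who would raise coverage somewhat), the observed earnings would look like this.

```python
# Back-of-the-envelope coverage check using the Texas borrowing shares cited
# above. The graduating-class size is invented, and Pell-only recipients
# (also included in Scorecard earnings data) would raise coverage somewhat,
# so treat these as rough illustrations only.

borrowing_share = {               # share of graduates with federal loan debt
    "University of Houston - Downtown": 0.14,
    "Tarleton State University": 0.83,
    "Texas statewide (2019-2020)": 0.52,
}

assumed_class_size = 1000         # hypothetical graduating class

for school, share in borrowing_share.items():
    covered = round(assumed_class_size * share)
    print(f"{school}: earnings observed for roughly {covered} of {assumed_class_size} graduates")
```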

Why should any of these institutions, whose non-borrowing graduates’ earnings are not included, be measured by borrowers only? If I borrowed money to attend college, I might be inclined to take the first job offered to me, even if it paid less than a job that takes longer to obtain. I might also be less inclined to attend graduate school because of the additional loans I would incur, even though many graduate degree holders earn more than those with only a bachelor’s degree. Earnings from all graduates should be included in any dataset used to measure accountability.

Platforms for Evidence and Communication

Ms. Griffin writes, “Technology platforms could address the entire accreditation ecosystem – from ED’s oversight of agencies to institutions managing relationships with multiple specialized and programmatic accreditors – creating system-wide transparency rather than isolated improvements.” I concur with her assessment.

I disagree, however, with Ms. Griffin’s assessment of the feasibility of “real-time” data collection. Except for progress in short-term credentials, I think measuring students’ progress toward a two-year, four-year, master’s, or doctoral degree aligns more with Zemsky’s College Stress Test multi-year moving averages than with a monthly or quarterly reported metric.

I concur with Ms. Griffin that more frequently collected data would allow institution-to-accreditor communication to shift toward collaborative problem-solving rather than institutional advocacy. I also agree that providing institutional and programmatic accreditors with the same data offers a level of transparency beyond the current situation.

I also agree with Ms. Griffin that a commonly shared data platform could reduce the paperwork collected and analyzed by the Department of Education during its periodic review cycle of accrediting agencies. Somehow, the Department (ED) would have to determine how to use (or not use) data from all the institutions accredited by an agency, whose membership can range from more than 1,000 institutions for an institutional accreditor to as few as 12 for a specialty programmatic accreditor. Smaller agencies might require financial assistance from ED to implement a technology upgrade that all agencies would use.

Implementation Considerations

In fairness to Ms. Griffin, she includes a section on the implementation considerations her recommendations would require. Her first consideration concerns the administrative burden placed on institutions if they must submit data more frequently than currently required. She also notes that the historical “peer-review” relationship between accrediting agencies and institutions should be maintained, so that humans continue to interpret data and assess quality rather than relying on a fully automated process.

Ms. Griffin writes that data standardization must be part of any quality initiative. She recommends using IPEDS and state longitudinal data systems where they exist, which aligns with my recommendation that a system using IPEDS data, like the College Stress Test, be considered. Ms. Griffin also notes the variety of institution types and suggests grouping them using classifications like the Carnegie system.

Recommendations and Conclusion

Ms. Griffin recommends that the Department of Education “convene a task force of accreditors, institutions, technology experts, and stakeholders to lay out principles for integrating technology into accreditation processes.” She writes, “The task force should focus on enhancing quality improvement, not simply automating processes.”

Another of Ms. Griffin’s recommendations is that ED fund pilot programs with 25-40 institutions across Carnegie classifications, sectors, and regions to test approaches to continuous data monitoring and analysis. She also recommends that the task force establish data standards and integration mechanisms. “Clear guidelines could build trust in the system.”

She also recommends that accreditors conduct research with institutions to establish more scientifically valid and appropriate performance benchmarks. Researchers could analyze historical data to identify correlations between various benchmarks and actual student outcomes (WEB note: assuming all students are included in the datasets).

Stakeholders “could develop tools to measure and track graduates’ wage gains and academic programs’ return on investment.” I concur, provided the wages are tracked for all graduates, not just those who borrow.

Ms. Griffin encourages innovation while maintaining accountability. She also recommends that all stakeholders commit to ensuring that technology innovation in accreditation remains politically neutral and focuses on educational quality and student outcomes. She writes, “technology platforms should be designed to support evidence-based assessment rather than advancing ideological perspectives.” I concur. It is a shame that some of these earnings-related accountability proposals were originally intended to apply to for-profit institutions only. Institutional tax status should not matter in the evaluation of program outcomes.

Although she touts AI tools as a way to increase the frequency of data collection and analysis, Ms. Griffin writes that “the path forward isn’t only about adopting new technologies – it’s about reimagining the relationship between quality assurance and institutional improvement.” She further writes that “it will require substantial investment in new systems, personnel retraining, and cultural changes” and that “the time is now for bold innovation to ensure educational quality.”

A Few More Thoughts

I agree with many of Ms. Griffin’s observations and recommendations. However, I disagree that the concept of real-time data collection and quality improvement is cost-effective, given the substantial time it takes students to earn degrees.

The devil is in the details. Many accountability proponents have suggested that accreditors use earnings data already collected by ED for the College Scorecard. That would be a mistake. The Scorecard doesn’t include earnings for completers who did not borrow federal loans or use Pell Grants. Institutions, particularly those with low attendance costs, should insist that earnings from all graduates be included. States that provide affordable tuition at their public institutions should also support a more inclusive data collection.

While we’re at it, let’s collect more data than just earnings. Let’s match degrees with employment fields and earnings. Think of the wonderful data collected by Lightcast, and the knowledge that could be gained if it were combined with earnings data from the IRS, matched by program and institution.

I believe that students do better when they know what they want to do when they complete college. Whether they resemble the students in the case studies we read about in Hacking College, who treat their educational and experiential journey as a field of study, or students who go on to earn a master’s or professional degree to enter a field that requires more than a bachelor’s degree, graduates who know what they want to do are more confident and likely to move up the ladder more quickly than those who bounce from job to job.

Artificial Intelligence is a wonderful data analytics tool. But it can’t be used to its full potential unless the users build a dataset with well-defined and fully populated fields. IPEDS has been around for decades. Yet, researchers find instances where institutions have populated the wrong field or misclassified a program or degree.

Lastly, while the idea of using task forces to collaborate on data standards and on the process for continually analyzing pilot-study data is a fine one, let’s not forget to bring in experts who can sit back and ask, “What’s missing?” Is it reasonable that some programmatic accreditors have, over the years, raised the educational requirements for their licensed professionals from an associate’s degree to a bachelor’s degree to a master’s degree to a professional degree? It didn’t happen overnight. Analytics beyond program cost and graduates’ earnings would be needed to determine whether the extra years and extra costs were warranted.

It’s true that technology advances, particularly in AI, have increased the pace of data flow as well as the expectation that we should be able to diagnose problems faster in business, healthcare, and even education. Many outsiders view institutional and programmatic accreditation as resistant to innovation, unwilling to change, and too focused on subjective criteria instead of real data, such as matching program outcomes with earnings. Some of that criticism is justified; some of it is not. Ms. Griffin is not the first person to suggest enhancements to collecting and analyzing accreditation data. For those who have the power to make changes, let’s make sure that all data collected represents all students, not just a subset. Even then, let’s make sure that trends are measured over time and not just at a moment in time. Credibility is built over time, and a new and improved system of collecting, measuring, and analyzing outcomes and earnings metrics for millions of graduates will take time as well.
