Improving Outcomes Data for Online Programs

In an opinion piece in this week's Inside Higher Ed, Professor Robert Kelchen writes that it is almost impossible to tell how graduates of online programs fare compared to graduates of face-to-face programs at the same institution.

Dr. Kelchen raises a good point: with the increased enrollment in online courses due to COVID, we should be able to determine whether students enrolled in online programs do better than those enrolled in face-to-face programs. He cites a study concluding that students enrolled in face-to-face courses do better than students enrolled in online courses.

I was not familiar with the study cited by Professor Kelchen, so I quickly read it. It did not take me long to realize that the study draws its comparative data from students attending the University of Phoenix, a school whose student retention has been decreasing for years. On the other hand, I am familiar with several metastudies that conclude just the opposite and are not limited to a single institution with questionable outcomes.

One metastudy examines 45 studies contrasting the learning outcomes of students in fully or partially online courses with those of students enrolled in fully face-to-face courses. It concludes that students enrolled in online courses performed modestly better than students enrolled in face-to-face courses. As someone who has led an online institution since 2002, I consider this a much better study to rely upon than the University of Phoenix study.

Dr. Kelchen writes that the longstanding debate about the value of online education heightens the interest in knowing more about the student debt burdens and post-college earnings of students attending online programs at traditionally in-person institutions. I have read several of Dr. Kelchen's academic papers (he is a professor at the University of Tennessee at Knoxville) and respect much of his academic research. Sadly, this idea reflects the opinion of many academics at traditional institutions and overlooks the good work disseminating best practices in online education by groups like the Online Learning Consortium (OLC), founded in 1992.

Professor Kelchen discloses that he was commissioned by a group of third-party online providers (OPMs) to see if he could say anything about the value of their programs at partner institutions. He writes that he was unable to do so, provides a link to his blog article about his research, and lists his conclusions. His key takeaway is that the data presented in the federal College Scorecard and the Department of Education's Integrated Postsecondary Education Data System (IPEDS) do not separate the outcomes of students attending a program online from those of students attending the same program face-to-face at the same institution.

Dr. Kelchen makes three key recommendations to improve the data collected and reported by the Department of Education. These are:

  • Make it clear when a college only offers a certain program online instead of having online and in-person options.
  • Report IPEDS data on the number of graduates by program for fully online programs and all other programs.
  • Report College Scorecard debt and earnings data separately for fully online programs and all other programs.

I like Dr. Kelchen's recommendations. He correctly notes that separating results for online and face-to-face programs at the same institution might limit the earnings data published due to lower numbers of graduates. At the same time, I believe that institutions should track the performance of students enrolled in their online programs supported by OPMs and compare it to the performance of students enrolled in their face-to-face programs. If the online outcomes are weaker, that could be due to many factors, including that the academic and demographic profile of the online student differs from that of the student enrolled face-to-face. We know that many working adults prefer enrolling in online courses because of the difficulty of attending face-to-face classes. We also know that working adults face many difficult issues that place them in a higher at-risk profile than traditional students. Institutions should be the arbiters of whether the OPMs' programs are achieving the desired results. We shouldn't have to wait for the Department of Education to improve its data collection and publication of outcomes.

Unless the curriculum is different (and it shouldn't be) or the grading standards are different (and they shouldn't be), my bet is that the OPMs achieve an equal or better result when factoring in differences between the population of students they enroll and the profile of students enrolled in face-to-face courses at the same institution. I am curious whether student swirling (a common occurrence among online students, who enroll at multiple institutions until they decide where to complete their degree) happens as frequently at institutions offering identical face-to-face and online degree programs.

As far as earnings go, I hate to disappoint traditional academics, but employers have recognized online degrees as legitimate for years now. I doubt the earnings differ between online and face-to-face programs at the same institution. They could differ between institutions, but sifting out the "why" is complicated.

I have critiqued and criticized the College Scorecard and other government data sources in the past for the way in which they collect and present data. The Department of Education maintains (correctly) that Congress doesn't want it to collect a unified dataset of college student outcomes. For that reason, many of the data points in the College Scorecard only present data from students who borrowed money. Slightly less than half (48 percent) of all undergraduate students borrow federal loan monies. Why should the College Scorecard only present data from students who borrow money?

For these reasons, I would enjoy getting the support of Dr. Kelchen and some of his traditional academic researcher contemporaries in pushing Congress to allow the collection of a unified dataset of student outcomes. Until then, the Scorecard should carry big asterisks indicating that its data may not be accurate because it does not represent the outcomes for all students. As for Dr. Kelchen's recommendations, I would place a lower priority on those changes than on the change I propose, which is to ask institutions to publish the differences, if any.

I encourage presidents and provosts at institutions that use OPMs to conduct their own outcomes research (easier and quicker than waiting for data from the Department of Education) and transparently self-publish the differences (if any). My APUS colleagues and I published papers on student retention at APUS (a wholly online institution) to provide data to researchers studying the efficacy of online learning. Unfortunately, we were never able to use the earnings data published by the College Scorecard because it represented only earnings from graduates who borrowed to attend APUS, and that number was less than 30 percent of the more than 100,000 graduates during my tenure.
