Another College ROI Study and Data for College Students

Recently, I read a tweet about the Foundation for Research on Equal Opportunity (FREOPP) report titled “Is College Worth It?” I eagerly followed the link to read the report authored by Preston Cooper. Sadly, I was disappointed.

The summary findings of FREOPP’s analysis indicate that some bachelor’s degrees are worth millions while others have no financial return on investment (ROI). Not so surprising was the revelation that the two biggest factors are which school to attend and what subject to major in.

The researchers reported that “for students who graduate on time, the median bachelor’s degree has a net ROI of $306,000.” However, they also noted that “after accounting for the risk of taking longer than four years to graduate or dropping out, median ROI for the bachelor’s degree drops to $129,000.”

Nearly 30,000 bachelor’s degrees were analyzed using the new program-specific data from the College Scorecard as well as data from the U.S. Census Bureau. FREOPP’s researchers define ROI as the amount a student can expect to gain financially from each individual degree. Notably, in comparing a bachelor’s degree to a high school diploma, the researchers include the cost of tuition and the earnings foregone during four years of college attendance.

The research indicates that the highest ROI of all the degrees is for a computer science major attending the California Institute of Technology (Caltech). Graduates of this highly selective, private institution can expect an ROI of $4.4 million. While the Caltech data is at the top, the report notes that 28% of all programs have a negative ROI.

There can be substantial differences between academic programs at institutions. The report notes that one of the highest-earning programs is for finance majors at the University of Pennsylvania’s Wharton School, who will have median earnings of $288,000 by age 35. However, UPenn students who major in film and photographic arts can expect earnings of $45,000 by age 35.

Mr. Cooper writes that FREOPP’s preferred measure of ROI incorporates the “significant” chance that a student would not complete college. They also report a “clean” version of ROI, which assumes the chance of on-time graduation is 100%.

The College Scorecard data has a limitation regarding lifetime earnings: it currently reports earnings only for the first two years after graduation. Because of this limitation, Mr. Cooper extrapolated earnings using data from the Census Bureau’s American Community Survey (ACS). He writes that there is a 0.94 correlation between earnings immediately after graduation and earnings at age 45.
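The extrapolation Mr. Cooper describes can be illustrated with a simple sketch. To be clear, the age-earnings profile below is a made-up placeholder, not an actual ACS figure; the report fits real, field-specific profiles from Census data.

```python
# A minimal sketch of extrapolating lifetime earnings from an
# early-career salary. The profile_ratio function is a hypothetical
# stand-in for the ACS-derived age-earnings profiles FREOPP uses.

def extrapolate_lifetime_earnings(early_career_salary, retirement_age=65):
    """Project total earnings from the first post-graduation salary."""

    def profile_ratio(age):
        # Hypothetical hump-shaped profile: earnings rise linearly from
        # age 24 to a peak at age 50, then stay flat for simplicity.
        peak_age, peak_ratio = 50, 1.8
        if age <= peak_age:
            return 1.0 + (peak_ratio - 1.0) * (age - 24) / (peak_age - 24)
        return peak_ratio

    return sum(early_career_salary * profile_ratio(age)
               for age in range(24, retirement_age + 1))

total = extrapolate_lifetime_earnings(50_000)
```

With a hypothetical $50,000 starting salary, the function scales each working year by the profile and sums the results; the real method is more sophisticated, but the shape of the calculation is the same.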

To calculate an ROI, you need to determine the investment or cost side of the equation. This is where the FREOPP methodology differs from other ROI analyses that I have reviewed.

First, Mr. Cooper writes that an accurate ROI analysis must consider “counterfactual” earnings – what each student would have earned if he or she had not attended college. It is not adequate to compare the median earnings of a college graduate with the median earnings of a high school graduate.

It’s also not adequate to compare the counterfactual median earnings of a student interested in engineering with those of a student interested in English. The details of these calculations appear in the methodology section, which I will comment on later in this article.

Next, Mr. Cooper notes that “most students cannot work full-time while they are enrolled.” He estimates that every year a student spends in college costs $24,000 in lost earnings.

That’s true, but according to Georgetown’s Center on Education and the Workforce, approximately 25% of students work full-time while enrolled and 70% work in some capacity (including those who work full-time). This assumption therefore makes the results incorrect for one out of four students, and the remainder may need adjusting for part-time wages.

I applaud Mr. Cooper for choosing net tuition (after grants and scholarships), rather than the sticker price, as the cost of an education. I don’t agree with his decision to exclude living expenses as a cost of college for all students on the grounds that they would have to pay for food and rent regardless.

I think that’s the case for students who are not living on campus, but on residential campuses – particularly state universities – the costs of room and board can substantially exceed net tuition. That’s a lot different from the cost of an 18-year-old living at home with his or her parents.

An example of the present value calculation provided for a physics graduate of the University of Maryland uses in-state tuition and no room and board. Given the high percentage of out-of-state students attending many state flagship universities, I’m not sure this is an appropriate treatment for the calculation.
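For readers unfamiliar with the mechanics, a stylized net-present-value calculation of the kind the report describes might look like the following. All dollar amounts and the 3% discount rate are illustrative assumptions of mine, not figures from the FREOPP report.

```python
# A stylized sketch of a degree's net present value: discounted
# earnings premium over the counterfactual, minus discounted costs
# (net tuition plus foregone earnings while enrolled).

def degree_npv(grad_earnings, counterfactual_earnings, net_tuition,
               years_in_college=4, working_years=40, rate=0.03):
    """Discounted lifetime benefit of a degree, net of its costs."""
    # Cost side: net tuition plus foregone earnings during college.
    cost = sum((net_tuition + counterfactual_earnings) / (1 + rate) ** t
               for t in range(years_in_college))
    # Benefit side: the annual earnings premium over the counterfactual,
    # received during the working years after graduation.
    benefit = sum((grad_earnings - counterfactual_earnings) / (1 + rate) ** t
                  for t in range(years_in_college,
                                 years_in_college + working_years))
    return benefit - cost

# Hypothetical inputs, loosely echoing the $24,000 foregone-earnings
# figure cited in the report.
npv = degree_npv(grad_earnings=65_000, counterfactual_earnings=24_000,
                 net_tuition=11_000)
```

Note how sensitive the result is to the counterfactual: if `grad_earnings` barely exceeds `counterfactual_earnings`, the tuition and foregone-earnings costs dominate and the NPV turns negative.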

Mr. Cooper notes that after weighting for student counts, the median ROI across all 30,000 bachelor’s degree programs listed in the College Scorecard is $306,000. That’s substantially different than the $1,000,000 plus often cited by higher education policymakers.

The report continues with a discussion of the variance among types of degrees. Approximately 16% of the degrees have a negative ROI. Another 12% have an ROI of $1 million or more. Among engineering programs, 69% deliver a lifetime payoff of $1 million or more, and 97% have an ROI exceeding $500,000. More than 85% of computer science programs have an ROI exceeding $500,000.

Nearly 70% of programs in the visual arts and music have a negative ROI, as do most programs in philosophy and religious studies. Surprisingly, writes Mr. Cooper, 31% of programs in life sciences and biology have a negative ROI. He notes that the likely explanation is that many of these graduates pursue a graduate degree in medicine, and the analysis looks at bachelor’s degrees only. Thus, the lower earnings after graduation likely reflect students who are attending graduate school.

The next part of FREOPP’s ROI calculation is likely the most controversial, particularly for institutions that serve a majority of part-time, non-traditional students. Cooper states that four-year graduation rates are a measurement of institutional quality. If students take five or six years to graduate, the number of programs with negative ROIs increases. If a student drops out, every program has a negative ROI for that student.

I totally disagree with this approach and believe this calculation is misleading. Either there is an ROI for a person who graduates, or there is a negative ROI for one who doesn’t, and the size of that negative ROI would vary with how many years the student attended college. Many college dropouts complete less than a year of college. Additionally, many part-time students work full-time and pay out of pocket for their education, or their employer pays or reimburses them.

Many online students “swirl,” attending multiple institutions before they earn a degree – a pattern well documented by academics who study student retention. Nearly half of all students will transfer institutions at least once, and approximately one-third will attend more than two institutions.

In addition, the College Scorecard uses data collected from students who use financial aid. Assuming the College Scorecard graduation rates reflect first-time, full-time student graduation rates, there are a significant number of institutions serving part-time, non-traditional students whose accomplishments will not be represented properly.

I don’t believe that institutional quality should be judged by graduation rates, particularly for institutions that serve underserved minorities or non-traditional students such as those who work full-time. Cooper and his colleagues are part of the group of higher education observers who continue to treat every institution alike, with elite institutions as the standard. Statistically, the independent variables that correlate most highly with a traditional student’s chances of persisting or graduating from college are high school GPA, SAT/ACT scores, and whether both parents graduated from college.

Non-traditional students have different factors that influence their chances of persisting until graduation. Some of the more significant factors influencing adult student persistence as well as graduation rates include:

  • Are those college students financially independent from their parents?
  • Do they have dependents?
  • Are they working full-time?
  • Has there been an extended period since the last time they attended school?
  • Are they first-generation college students?
  • Are they academically prepared for college?
  • Do they have financial issues?
  • Do they have issues at work?

Mr. Cooper states that his methodology uses institutions’ reported completion rates and that he makes allowances for transfer students before he adjusts for the extra years to complete or risk of non-completion. I find it difficult to believe that this methodology works for institutions that serve non-traditional students.

In summary, there are a few components of this methodology that I can support (such as using net tuition) and elements that I cannot support. First, the system is heavily biased toward traditional, highly selective colleges and universities. Why are Caltech’s computer science majors so heavily compensated? Because they’re most likely the smartest (75th percentile of SATs for Caltech’s freshmen last year was 790 for evidence-based reading and 800 for math).

The Caltech computer science majors are small in numbers as well (only 72 graduates in the most recent year). There are technology and consulting firms that will pay a premium to recruit a Caltech graduate.

Why are finance graduates of the Wharton School at the University of Pennsylvania paid so much? They are in an elite pool of high school graduates when they are admitted to Penn (a single-digit acceptance rate). Those students who complete the rigorous and quantitative finance program at Wharton are highly recruited by investment banks and private equity firms for their potential talent as analysts.

Why is there such a high ROI for engineering degrees? Engineering schools have math prerequisites and require as many as six undergraduate calculus courses. Few high school graduates can meet those prerequisites, and nearly half of the freshmen who start as engineering majors switch their major before the year is out.

In our current economy, there are not enough engineering graduates for all of the available jobs. Scarcity means jobs pay more.

The overall ROI includes, for each program at each college, a risk factor for taking longer than four years to graduate or not graduating at all. I believe that a student’s outcome is binary: either they drop out or they graduate.

Applying a blended average is less meaningful than providing the calculated ROI separately for graduates and for those who drop out. The same applies to those who take longer than four years to graduate.

However, some of those students may be full-time employees whose employer pays for their education. The methodology does not take these factors into account.
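The distinction I am drawing, between FREOPP’s blended, risk-adjusted figure and separately reported per-outcome figures, can be sketched as follows. The completion rate and dollar values here are hypothetical, chosen only to show the arithmetic.

```python
# Sketch contrasting a probability-weighted ("risk-adjusted") ROI with
# separately reported per-outcome ROIs. All inputs are hypothetical.

def blended_roi(roi_if_graduate, roi_if_dropout, completion_rate):
    """Probability-weighted ROI, in the spirit of FREOPP's adjustment."""
    return (completion_rate * roi_if_graduate
            + (1 - completion_rate) * roi_if_dropout)

roi_grad, roi_drop, p = 306_000, -50_000, 0.60

# The report's approach: one blended number per program.
expected = blended_roi(roi_grad, roi_drop, p)

# The presentation argued for above: report both outcomes separately,
# since any individual student either graduates or does not.
outcomes = {"graduate": roi_grad, "dropout": roi_drop}
```

The blended figure describes no actual student: every real student experiences one of the two entries in `outcomes`, never the weighted average.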

If nearly half of all undergraduates transfer colleges, why doesn’t the methodology provide a plus-up for colleges that admit and graduate transfer students? Under the first-time, full-time tracking of traditional students, these students are accounted for as dropouts, instead of someone whose reason for transferring may have been related to family relocation, income issues, a decision to major in a program not offered by their original institution, or some other legitimate reason beyond the institution’s control.

As I mentioned earlier, this methodology does not account for students who are enrolled part-time and work full-time. For these students, there should be no diminution of earnings for attending college.

The “counterfactual” factor as described by Mr. Cooper estimates that a student forgoes $24,000 of earnings for each year that they attend college. That should only apply to a full-time student who does not work. Those students represent a small minority of today’s college students.

I believe many students, particularly adult students who return to college, are smarter than some of the critics who build these elaborate methodologies acknowledge. When I decided to be a history major at Duke, I knew that I was going to attend graduate school (along with approximately 70% of my undergraduate class). My Trinity College (undergraduate liberal arts school) classmates had higher overall SAT scores than our engineering classmates, but we chose different pathways. In the long run, we knew that our plans would work out. This system presumes that students don’t know which programs reward them financially.

I enjoyed history classes but knew that I would ultimately go to graduate school for business or law. One of my daughters chose psychology as her undergraduate major knowing that a job in the field of psychology would require a graduate degree or that she might want to attend law school instead.

Mr. Cooper correctly reports that the College Scorecard covers only two years of earnings, which prevents calculating a program’s ROI over a lifetime of earnings. He extrapolates from the Census Bureau data to build an estimated lifetime earnings value.

However, the extrapolation doesn’t allow for the possibility that someone may later attend graduate school, complete a specialized certificate, or receive training that enables them to get another job that pays more. Once again, it’s my assertion that many people are smart enough to find options that allow them to earn more over time than what they were paid as recent college graduates.

Like many others who use College Scorecard data, Mr. Cooper fails to note some of the limitations of that data. In its data dictionary document, College Scorecard notes that many of its elements are only available for students who receive federal student aid grants or loans. Data for students who are self-pay or whose employers fund the education are not included. At some institutions, non-FSA students comprise the majority.

The Scorecard also notes that it does not report earnings or loan data for programs with fewer than 30 students in the denominator. At smaller schools, that excludes earnings disclosure for all but the most popular degree programs.

Data for completion rates in the Scorecard are limited to students who are “full-time, first-time students beginning in the fall semester.” The Scorecard’s Data Dictionary explains that “the exclusion of part-time students, transfer students, and students who do not start during the fall from the IPEDS [Integrated Postsecondary Education Data System] completion rates makes the rates less relevant for those populations of students. Full-time, first-time students make up fewer than half of all college students, or even less in some sectors of institutions.” The full-time, first-time completion rate will not accurately represent the situation at institutions that primarily serve non-traditional students.

The mission of the Foundation for Research on Equal Opportunity is to conduct original research on expanding equal opportunity to those who least have it. No one doubts that many of our citizens whose incomes or wealth are below the U.S. median would benefit from a bachelor’s degree.

However, this report ignores two-year colleges and their degree and non-degree programs. It also ignores outcomes for non-traditional and part-time students, which affects the four-year schools that serve them. If Mr. Cooper and his colleagues are serious about their bipartisan mission, I suggest that they refocus their efforts on developing an accurate system by working with institutions that serve students who cannot afford to quit their jobs and attend school full-time for a single year, much less four. One size does not fit all, as is true of many products and services, and this methodology does not attempt to understand the colleges that serve non-traditional students.
