The Methodology behind What Colleges and Universities Will Thrive, Survive, Struggle, or Perish

In my previous blog post, I wrote that I needed some time to consider the construct of Professor Galloway's risk/rating system. While the system draws on multiple data points and calculations, two key calculations, the Value to Cost Ratio and the Vulnerability Score, form the X and Y axes of his ranking system.

As scores rise or fall along each axis, an institution's pair of scores lands it in one of four quadrants: Thrive (elite schools plus those that offer strong value); Survive (schools that will see lower revenue but have brand equity, a strong credential-to-cost ratio, and/or large endowments); Struggle (Tier 2 schools with high admit rates, high tuition, or low endowments); or Perish (schools with high admit rates, high tuition, low endowments, dependence on international students, and weak brand equity).
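To make the quadrant logic concrete, here is a minimal sketch in Python of how a pair of scores might map to the four labels. The cutoff values are placeholders of my own; the spreadsheet does not publish the actual thresholds, so treat this as an illustration, not Professor Galloway's method.

```python
# Hypothetical quadrant assignment. The 1.0 cutoffs are my placeholders,
# not thresholds published by Professor Galloway.
def quadrant(value_to_cost: float, vulnerability: float,
             value_cutoff: float = 1.0, vuln_cutoff: float = 1.0) -> str:
    if value_to_cost >= value_cutoff:
        # Strong value: Thrive if resilient, Survive if vulnerable.
        return "Survive" if vulnerability >= vuln_cutoff else "Thrive"
    # Weak value: Struggle if resilient, Perish if vulnerable.
    return "Perish" if vulnerability >= vuln_cutoff else "Struggle"

print(quadrant(1.4, 0.6))  # Thrive
print(quadrant(0.7, 1.2))  # Perish
```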

The Vulnerability Score is calculated by combining two scores, the Endowment per Full-Time Student Percentage Ranking and the Percentage of International Students Percentage Ranking, with each weighted equally.
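Reconstructing that in code, a percentage rank is simply a school's position within the cohort. The sketch below is my reading of the spreadsheet, with my own function names; note that a low endowment must rank near 1.00 (Kent State's .96 below), and I read "equally weighted" as a simple sum, since the Bowling Green example later in this post adds .31 and .90 directly.

```python
def frac_at_or_below(value: float, cohort: list[float]) -> float:
    """Percentage rank: the share of the cohort at or below this value."""
    return sum(v <= value for v in cohort) / len(cohort)

def vulnerability_score(endowment_per_fte: float, pct_international: float,
                        endowments: list[float], intl_pcts: list[float]) -> float:
    # A LOW endowment per student means MORE vulnerability, so that rank is
    # inverted; a HIGH international share means MORE vulnerability, so that
    # rank is taken directly.
    endowment_rank = 1.0 - frac_at_or_below(endowment_per_fte, endowments)
    intl_rank = frac_at_or_below(pct_international, intl_pcts)
    return endowment_rank + intl_rank  # equal weights, read as a 0.0-2.0 sum
```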

In the group of institutions categorized as Perish, the lowest endowment per full-time student belonged to Kent State University, at $6,721 per full-time student. For that figure, Kent State received a percentage rank of .96, close to the 1.00 mark that appears to delineate Perish from Struggle.

As a former CFO, I am not concerned about a state-supported institution with a low endowment per full-time student when its full-time enrollment is 21,700 students and its annual undergraduate tuition is $14,663. Perhaps Dr. Galloway should calculate Vulnerability differently for public institutions, using a ranking such as annual state funding per student, data available from the State Higher Education Executive Officers Association. That data would indicate how significant a state budget cut would be to the Vulnerability of a given institution.
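A minimal sketch of that adjustment, assuming SHEEO appropriations data were merged into the model; the function and argument names are hypothetical, and whether heavy or light state support should rank as more vulnerable is itself debatable.

```python
def public_vulnerability(state_funding_per_student: float,
                         funding_cohort: list[float],
                         intl_rank: float) -> float:
    # For public institutions, substitute a state-funding rank for the
    # endowment rank. Here, heavier dependence on appropriations ranks as MORE
    # vulnerable, since a state budget cut would bite harder (a debatable call).
    funding_rank = (sum(v <= state_funding_per_student for v in funding_cohort)
                    / len(funding_cohort))
    return funding_rank + intl_rank
```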

The other score, the Percentage of International Students Percentage Ranking, is interesting more for the impact of its weighting than for the measure itself. In the Perish category, the ranks ranged from 0 to 1.00, with the lowest percentage at 0% international students and the highest at 33% (College of the Atlantic). That 33%, however, was an outlier.

Beloit College scored a .98 for a 20% international student population, and Clark University scored a .80 for a 10% international student population. Bowling Green State University in Ohio has a 2% international student population and scored a .31, which, when added to the .90 it received for an $11,331 endowment per student, pushed it into the Perish category. (Bowling Green is publicly funded and has a full-time enrollment of 14,323 students.)

If the Percentage of International Students Percentage Ranking counts for 50% of the Vulnerability Score, perhaps it should be scored on a logarithmic scale rather than as a percentile rank, to reduce the impact of the wide gaps in actual percentages. Adding the 4,000-plus colleges and universities not included in this dataset would skew the results further if you assume that foreign students are attracted to ranked institutions more than to unranked ones.
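Here is a minimal sketch of the logarithmic alternative I have in mind; the log1p transform and the choice of 33% as the normalizing maximum are my own assumptions.

```python
import math

def log_intl_score(pct_international: float, max_pct: float = 33.0) -> float:
    """Map an international-student percentage to a 0-1 score on a log scale.

    log1p compresses the top of the range, so an outlier like College of the
    Atlantic's 33% no longer dwarfs a typical 2-10% population the way a
    straight percentile rank does.
    """
    return math.log1p(pct_international) / math.log1p(max_pct)

for pct in (0, 2, 10, 20, 33):
    print(f"{pct:>2}% -> {log_intl_score(pct):.2f}")
# 0% -> 0.00, 2% -> 0.31, 10% -> 0.68, 20% -> 0.86, 33% -> 1.00
```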

An attribute of many foreign undergraduate students is that they are full pay, a rarity in today's competitive higher education landscape. If a dataset is constructed that includes all 4,500 colleges and universities, perhaps Dr. Galloway and his colleagues should consider a full-pay percentage ranking instead of an international student percentage ranking.

The Value to Cost Ratio is the other major score; it is calculated by multiplying several component scores and dividing the product by a tuition ranking. In my first article discussing Professor Galloway's risk ranking system, I mentioned that using the U.S. News and World Report ratings for one third of the Credential Score was misleading: all of the institutions in this dataset are ranked, yet approximately 4,500 accredited institutions of higher education report metrics annually to the U.S. Department of Education, and only 432 of them, all ranked nationally by U.S. News and World Report, were included in this model.
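As I read the spreadsheet, the arithmetic reduces to a product over a tuition rank, something like the sketch below; the function and parameter names are mine.

```python
def value_to_cost(credential: float, experience: float, education: float,
                  tuition_rank: float) -> float:
    # The Credential, Experience, and Education Scores multiply;
    # the Average Undergraduate Tuition percentage rank divides.
    return (credential * experience * education) / tuition_rank
```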

Whether the dataset is expanded to include all institutions or limited to those that are ranked, I would remove the U.S. News ranking percentage, since I believe it distorts the score of every institution below the 50th percentile. As for the Admit Rate Rank Percentage, it works with the smaller group of 432, but will it work with an all-inclusive group of 4,500, given that 70% of all U.S. colleges and universities are open enrollment, meaning they admit everyone who applies? The Average Monthly Search Volume Rank Percentage, a score that recognizes brand presence on the internet, requires further analysis.

I asked my marketing department to check the Google Keyword Planner (the source for this ranking) for Bowling Green State University, since I suspected the volume of 50 searches per month for BGSU was an input error. They found 27,100 searches per month for Bowling Green State University, plus another 8,100 searches per month for Bowling Green University. They also checked Biola University and confirmed the 40,500 searches per month used in the spreadsheet, but found an additional 8,100 searches per month for Biola.

When I asked them to check one of my institutions, American Military University, the answer was 49,500 searches per month for American Military University but 90,500 for AMU, a term many of our students and alumni use regularly. I like the metric, but there needs to be a schema for evaluating the institution's name and its derivatives, rather than an exact match on the official name, to produce an accurate ranking.
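A minimal sketch of the schema I have in mind: maintain a list of common name variants per institution and sum the monthly volumes, using the figures my marketing department pulled. The variant lists here are illustrative, not exhaustive.

```python
# Monthly Google Keyword Planner volumes reported by my marketing department.
SEARCH_VOLUMES = {
    "Bowling Green State University": 27_100,
    "Bowling Green University": 8_100,
    "American Military University": 49_500,
    "AMU": 90_500,
}

# Illustrative derivative names; a real schema would need a fuller list.
NAME_VARIANTS = {
    "Bowling Green State University": ["Bowling Green State University",
                                       "Bowling Green University"],
    "American Military University": ["American Military University", "AMU"],
}

def brand_search_volume(institution: str) -> int:
    """Sum monthly searches across an institution's name and its derivatives."""
    return sum(SEARCH_VOLUMES.get(v, 0) for v in NAME_VARIANTS[institution])

print(brand_search_volume("Bowling Green State University"))  # 35,200 vs. 50 in the model
print(brand_search_volume("American Military University"))    # 140,000 vs. 49,500
```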

The Experience Score is another component of the Value to Cost Ratio. It is a creative way to weigh how much the campus experience matters to students, which will be an issue if campus activities are cancelled and classes move online in the fall.

The source for the rating is the Niche website. When I reviewed the Niche methodology, I found that 77.5% of the rating is built from Niche student survey responses plus Niche's Party Scene, Athletics, Campus Quality, Local Area, and Diversity grades. Only 7.5% of the score relates to the Department of Education's percentage of first-time, full-time undergraduate students who return for a second year.

If someone were concerned about the influence of the Niche surveys on the ratings, I would consider using the Department's returning student percentage as a key indicator instead. The Niche rating is probably a reasonable measurement for the smaller group, but when the dataset is expanded, commuter schools and online schools are unlikely to score well on this index, or to be scored at all. The Department's returning student percentage is an alternative metric, though commuter schools and adult-serving institutions may not fare well on it either if most of their students fall outside the first-time, full-time freshman category.
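If one did substitute the Department's figure, the swap is simple enough to sketch; everything here, from the fallback logic to the names, is hypothetical.

```python
def experience_score(niche_rating: float | None, retention_rate: float,
                     retention_cohort: list[float]) -> float:
    # Use the Niche rating when one exists; otherwise fall back to a percentile
    # rank of first-time, full-time retention. As noted above, that fallback
    # still disadvantages commuter and adult-serving institutions.
    if niche_rating is not None:
        return niche_rating
    return (sum(r <= retention_rate for r in retention_cohort)
            / len(retention_cohort))
```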

The Education Score is another component of the Value to Cost Ratio. Professor Galloway and his colleagues have chosen to use the Georgetown Center on Education and the Workforce ROI net present value (NPV) rankings at 15 years and 30 years.

As mentioned in the previous article, I wrote about the Center's methodology in a November 2019 post. The Center originally included ROI at 10 years and 40 years but now makes the NPV rank available at 10, 15, 20, 30, and 40 years. I think Professor Galloway's selection of 15 and 30 years is appropriate and will benefit some of the smaller colleges whose graduates go on to graduate school first.

Because this dataset draws on the U.S. News and World Report rankings, the vast majority of the institutions rated are traditional. In the broader dataset used in the Center's ROI analysis, two online institutions, American Public University System and Capella University, ranked 113 and 160 at the 15-year mark and 93 and 95 at the 30-year mark, putting them in the top 2%. Perhaps that adds evidence to Dr. Galloway's assertion that colleges and universities should invest more in technology.

In addition, Dr. Galloway and his colleagues included instructional wages per full-time student as the third factor in calculating the Education Score. I am not sure that instructional wages per full-time student are relevant to the Education Score. Among the Perish institutions, the values range from $4,866 (Wartburg College, a rank of .02) to $21,360 (Bard College, .95).

My recommendation would be to add the 10-year and 20-year NPV metrics from the Center's study (four metrics in all) and exclude instructional wages, since they will differ vastly between research universities and two-year institutions.
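A minimal sketch of that revision: average the percentile ranks of the four NPV horizons and drop the wage factor entirely. The function name and the equal weighting are my assumptions.

```python
def education_score(npv_rank_10: float, npv_rank_15: float,
                    npv_rank_20: float, npv_rank_30: float) -> float:
    # Average the Georgetown CEW NPV percentage ranks at 10, 15, 20, and 30
    # years; instructional wages per full-time student are deliberately omitted.
    return (npv_rank_10 + npv_rank_15 + npv_rank_20 + npv_rank_30) / 4
```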

Lastly, the Average Undergraduate Tuition is ranked as a percentage of the group and scored from high to low. Sarah Lawrence College is the highest in the Perish category at .97 ($55,900 average tuition), and New Mexico State University is the lowest at .06 ($14,228).

Since the product of the Credential, Experience, and Education Scores is divided by the Average Undergraduate Tuition percentage ranking, a lower tuition ranking yields a higher Value to Cost Ratio. I find this appropriate for evaluating value. If the dataset is expanded, I am not sure whether the average tuition percentage rankings would skew toward higher or lower ratios.
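To see the effect of the denominator, hold the three component scores constant (the .50 values below are hypothetical) and compare the two tuition ranks just cited.

```python
# Hypothetical component scores held constant to isolate the tuition effect.
credential = experience = education = 0.50
for school, tuition_rank in (("Sarah Lawrence College", 0.97),
                             ("New Mexico State University", 0.06)):
    ratio = (credential * experience * education) / tuition_rank
    print(f"{school}: {ratio:.2f}")
# Sarah Lawrence College: 0.13
# New Mexico State University: 2.08
```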

As I reflect on the Value vs. Vulnerability assessment that Scott Galloway and his colleagues have constructed, I think it is a good first effort with room to improve by addressing some of the weaknesses mentioned above. As with other methodologies, such as Bob Zemsky's, one weakness is the reliance on data submitted to the Integrated Postsecondary Education Data System (IPEDS), which is usually 18 months old by the time it is published.

The only current data used comes from the Google Keyword Planner, and that is unlikely to predict whether an institution can make up an enrollment shortfall this fall, although there is usually some correlation when an institution ramps up its presence in searches.
