College Rankings: Could There Be a Better Alternative?

Image courtesy of Naassom Azevedo on Unsplash

I wrote an article about four of the more recently issued college and university rankings. I originally planned a follow-up article highlighting a few groupings of colleges and universities that do not appear in the top 25 or so.

The more I thought about it, the more ridiculous it seemed. After all, what college wants to brag that it was ranked #175 or #250, even though there are 5,000+ colleges and universities in the United States?

Then I read a blog post by Carlo Salerno, an education analyst and blogger whom I follow. His post took a different approach: he believes that the college rankings are awful and proposes a crowdsourced alternative.

Mr. Salerno agrees with me that no one is likely to care about rankings after the top 30 or so. He wrote that it’s better to find a way to flag the good schools and the bad schools. He suggests crowdsourcing former students’ reviews and aggregating them like Yelp’s ratings.

Mr. Salerno suggests creating a website that aggregates former students’ responses to six or eight indicators measuring their satisfaction with certain experiences. He suggests rating factors such as affordability, the availability of night or weekend offerings, the availability of certain student services such as childcare or tutoring, the learning experience, overall customer service, and job placement. Mr. Salerno also proposes that only currently enrolled students with an active student email account at the school be allowed to rate their schools (later in his article he includes former students, so I am not sure whether limiting ratings to current students is merely a starting point).
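To make the aggregation concrete, here is a minimal sketch of how a Yelp-style site might average per-indicator star ratings into a school’s profile. The indicator names, the 1-5 scale, and the unweighted averaging are my own illustrative assumptions, not details from Mr. Salerno’s proposal.

```python
from statistics import mean

# Illustrative indicators loosely based on Mr. Salerno's examples;
# the exact list and the equal weighting are assumptions for this sketch.
INDICATORS = [
    "affordability",
    "night_weekend_offerings",
    "student_services",
    "learning_experience",
    "customer_service",
    "job_placement",
]

def aggregate_ratings(reviews):
    """Average each indicator across reviews, Yelp-style.

    `reviews` is a list of dicts mapping indicator -> 1-5 star rating.
    Returns per-indicator averages plus a simple overall mean.
    """
    per_indicator = {
        ind: round(mean(r[ind] for r in reviews if ind in r), 1)
        for ind in INDICATORS
    }
    overall = round(mean(per_indicator.values()), 1)
    return per_indicator, overall

# Example: three hypothetical student reviews of one school
reviews = [
    dict(zip(INDICATORS, [5, 4, 5, 4, 5, 4])),
    dict(zip(INDICATORS, [4, 5, 5, 5, 4, 5])),
    dict(zip(INDICATORS, [5, 5, 4, 5, 5, 5])),
]
per_indicator, overall = aggregate_ratings(reviews)
print(per_indicator)  # e.g. {'affordability': 4.7, ...}
print(overall)        # e.g. 4.7
```

A real site would of course also need the gatekeeping step Mr. Salerno emphasizes: verifying that each reviewer holds an active student email account at the school being rated.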

He believes that it is the only relevant option. After all, Mr. Salerno writes, who are you more likely to trust: a researcher whose opinions about what is good may not match your beliefs, or the aggregated responses of thousands of current and past students?

Mr. Salerno acknowledges that there are challenges that would have to be managed. Shoppers care about so many services that there will likely be calls for additional ones to be included in the surveys. It is also well known that people with negative experiences are much more likely to respond to surveys than people who are satisfied with their experiences.

Mr. Salerno ends his proposal by noting that we already have a service like Yelp and that, despite the flood of data available from the Department of Education, consumers have their own specific concerns that may or may not be reflected in any of the data that are collected and published.

I think Mr. Salerno’s last comment may be his most relevant. I outlined the methodologies of the four rankings in my earlier article primarily to educate readers about their differences. As I noted, the top colleges and universities are generally the same institutions that topped previous rankings, regardless of the methodology.

The methodologies also favor institutions that serve full-time students in the 18-22 age range attending traditional universities. Institutions that serve working adults, generally older students who are working full-time and attending college part-time, are not highly ranked because of the researchers’ biases about what the quality measures of a college should be.

Asking current and former students to rate their institutions on measures that reflect relevant services could lead to 4.9 ratings (out of 5 stars) for adult-serving institutions. Such ratings could occur because their current and former students likely selected those schools for reasons far different from those that lead high school seniors and their parents to pick traditional institutions.

As simple and feasible as Mr. Salerno’s recommendation appears, it is not likely to be well received by the establishment of ranking entities, academics, policymakers, and traditional institutions. Educational institutions ranked at the top want to stay at the top.

Institutions at the top of the traditional rankings value 90-95% graduation rates because they selectively admit students with exceptional academic accomplishments and accolades. They would not understand that current and former students of an adult-serving institution might give their institution a 4.9 out of 5.0 rating for academic support and tutoring even though its six-year graduation rate was 35%. The lower graduation rate most likely reflects issues with students’ ability to attend, not the school’s failure to provide courses or advising services on a timely basis.

Is there a difference between the U.S. News & World Report rankings (selected only as an example, not as the one I believe is most relevant) and the Yelp-like consumer ratings that Mr. Salerno proposes? There is a vast difference in methodologies, but I argue that Mr. Salerno’s proposal might be more credible to consumers.

Kudos to the U.S. Department of Education for not using the College Scorecard as a ratings system! In my opinion, the Department does not collect enough information to allow consumers to make meaningful comparisons between institutions that serve non-traditional students.

Georgetown University’s Center on Education and the Workforce used College Scorecard data to rank accredited colleges and universities by return on investment (ROI). Similarly, reporters at the Wall Street Journal analyzed college affordability using College Scorecard data, comparing graduates’ debt with their earnings.
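As a rough illustration of the arithmetic behind those analyses, here is a minimal sketch of a debt-to-earnings comparison built on Scorecard-style figures. The field names and numbers are hypothetical stand-ins, not the actual College Scorecard schema or either publication’s methodology.

```python
def debt_to_earnings(median_debt, median_earnings):
    """Ratio of median graduate debt to median early-career earnings.

    A lower ratio suggests better affordability; any cutoff applied to
    it would be an analyst's choice, not a Scorecard-defined threshold.
    """
    return median_debt / median_earnings

# Hypothetical Scorecard-style records (not real institutions or data)
schools = [
    {"name": "School A", "median_debt": 27_000, "median_earnings": 54_000},
    {"name": "School B", "median_debt": 22_000, "median_earnings": 36_000},
]

for s in schools:
    ratio = debt_to_earnings(s["median_debt"], s["median_earnings"])
    print(f"{s['name']}: debt-to-earnings = {ratio:.2f}")
# School A: debt-to-earnings = 0.50
# School B: debt-to-earnings = 0.61
```

The simplicity is the point: a single transparent ratio is easier for a consumer to interpret than a composite ranking score built from weighted, researcher-chosen inputs.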

Those analyses are less complicated and more transparent than the ranking systems. However, they use data reported by institutions to the Department, and sometimes that data is misclassified or misreported. It may take time for everyone to be comfortable that all of the data is comparable and accurate.

Maybe Yelp will follow through on Mr. Salerno’s suggestion. Given its reach to millions of Americans, Yelp may be able to recruit enough participants to make a rating system work.

Yelp might also be able to convince institutions whose rankings fall short of their perceived student satisfaction to help recruit participants. I hope so. It would be nice to see a consumer-driven ratings system.
