The Latest College Rankings: Methodologies and Flaws

It’s hard to keep track of college rankings. These days, it seems that everyone wants in on the game that made the U.S. News & World Report team its fame and fortune (note that I did not say “originated,” since I believe there were rankings before U.S. News).

On September 8, 2021, Forbes magazine issued its college rankings. The top 13 schools in its overall university rankings are:

  • University of California, Berkeley
  • Yale University
  • Princeton University
  • Stanford University
  • Columbia University
  • Massachusetts Institute of Technology (MIT)
  • Harvard University
  • University of California, Los Angeles
  • University of Pennsylvania
  • Northwestern University
  • Dartmouth College
  • Duke University
  • Cornell University

The U.S. News & World Report 2022 college rankings were issued on September 13. The top 13 schools in its national universities ranking are:

  • Princeton University – #1
  • Columbia University – #2
  • Harvard University – #2
  • MIT – #2
  • Yale University – #5
  • Stanford University – #6
  • University of Chicago – #6
  • University of Pennsylvania – #8
  • California Institute of Technology – #9
  • Duke University – #9
  • Johns Hopkins University – #9
  • Northwestern University – #9
  • Dartmouth College – #13

The Wall Street Journal rankings were issued on September 21, 2021. The top 13 schools listed in its rankings are:

  • Harvard University
  • Stanford University
  • MIT
  • Yale University
  • Duke University
  • Brown University
  • California Institute of Technology
  • Princeton University
  • Johns Hopkins University
  • Northwestern University
  • Cornell University
  • University of Pennsylvania
  • Dartmouth College

Washington Monthly’s annual college rankings were issued in the September/October 2021 issue. The top 13 schools listed in its national universities’ rankings are:

  • Stanford University
  • MIT
  • Duke University
  • University of Wisconsin-Madison
  • Harvard University
  • University of Pennsylvania
  • Johns Hopkins University
  • University of Illinois – Urbana-Champaign
  • University of North Carolina – Chapel Hill
  • University of California – Berkeley
  • University of Washington – Seattle
  • University of California – San Diego
  • University of Notre Dame

Each of these publications touts its methodology as the best way to evaluate colleges and universities. Based on which national universities made the top 13 (the “lucky 13”), I’m not sure the methodology matters that much for the first three, but I’ll try to provide a brief outline of each.

Forbes’ Ranking Methodology

Forbes introduced a new methodology for its rankings in 2021, and for the first time a public university was ranked #1. The weightings in the Forbes methodology were as follows:

  • Alumni salary (20%) – From the College Scorecard, Forbes analysts used the salaries of alumni at 6 years and 10 years after enrollment. From PayScale, they included early and mid-career earnings which span 1-4 and 10 years after graduation.
  • Debt (15%) – From the College Scorecard, they multiplied the average federal student loan per borrower by the percentage of students who took out federal loans. They used College Scorecard data for five-year loan repayment rates. Each of these variables was weighted at 7.5%.
  • Return on Investment (15%) – Third Way provided the Price-to-Earnings Premium for each institution. This measure, touted by Third Way, estimates how long it takes students to recoup their college costs. It divides the total net price of obtaining a college degree by the post-enrollment earnings boost those graduates get, compared to the typical salary of a high school graduate in their state. The general premium was weighted at 10%, and a special low-income premium was weighted at 5%.
  • Graduation rate (15%) – This year, Forbes included the completion rates for all students using IPEDS six-year completion rate (which isn’t very representative of most part-time students, in my opinion). They split the calculation, allocating 10% for all students and 5% for low-income students.
  • Forbes American Leaders List (15%) – They counted how many graduates appeared in a multitude of Forbes listings, as well as individuals in public service; no examples from military service were given.
  • Retention rate (10%) – Forbes used the IPEDS three-year average full-time student retention rate, which measures the percentage of students who return after their freshman year. There is no mention of whether they considered that many institutions enroll a majority of part-time students, for whom the full-time rate may not be representative.
  • Academic success (10%) – Two measures weighted at 5% each were used. Measure one was the number of graduates who earned Fulbright, Goldwater, Truman, Rhodes, Gates, and Cambridge scholarships over the last four years. The second measure was the number of alumni who went on to earn Ph.D.s over the past three years.
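The Return on Investment bullet above describes Third Way’s Price-to-Earnings Premium as a single division. A minimal sketch of that arithmetic, with the function name and all dollar figures invented for illustration (not real Third Way data):

```python
# Hypothetical sketch of the Price-to-Earnings Premium described above:
# total net price of a degree divided by the annual earnings boost over
# a typical high school graduate's salary. All numbers are made up.

def price_to_earnings_premium(total_net_price, grad_earnings, hs_earnings):
    """Years needed for the earnings boost to repay the net cost of a degree."""
    earnings_boost = grad_earnings - hs_earnings  # annual premium vs. HS diploma
    return total_net_price / earnings_boost

# Example: $60,000 total net price, graduates earn $50,000 vs. a $35,000
# typical high school salary -> a $15,000/year boost.
years = price_to_earnings_premium(60_000, 50_000, 35_000)
print(years)  # 4.0
```

On these invented figures, the earnings boost repays the degree’s net price in four years; a lower premium means a faster payoff.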

U.S. News & World Report Ranking Methodology

U.S. News & World Report states in its methodology section that it uses 17 academic quality metrics to determine its rankings. These metrics include:

  • Average graduation and retention rates (22%) – The average six-year graduation rate receives a 17.6% weighting using a four-year rolling average, and the average first-year student retention rate receives a 4.4% weighting using a four-year rolling average.
  • Social mobility (5%) – Uses the Pell grant recipient graduation rates (2.5%) and the Pell grant recipient graduation rate performance (2.5%).
  • Graduate indebtedness (5%) – Assesses each school’s average accumulated federal loan debt for its last two graduating classes against the median debt amount for ranked schools (3%) and the percentage of graduates from the last two classes who borrowed federal loans (2%).
  • Graduation rate performance (8%) – This was measured using the 2014 U.S. News predicted graduation rate for each school, compared to its actual six-year graduation rate.
  • Undergraduate reputation (20%) – Based on a survey of presidents, provosts, and deans of admissions, in which participants rate the peer institutions with which they are familiar. A two-year weighted average of the ratings is used.
  • Faculty resources (20%) – U.S. News looks at five factors: class size (8%), faculty salary (7%), faculty with the highest degrees in their field (3%), student-faculty ratio (1%), and proportion of faculty who are full-time (1%).
  • Financial resources (10%) – U.S. News uses the average spending per student on instruction, research, student services, and related educational expenditures in the 2019 and 2020 fiscal years. These expenditures were compared with the Fall 2018 and Fall 2019 full-time and part-time undergraduate and graduate enrollment.
  • Student excellence (7%) – Standardized test averages are weighted at 5%. High school class standing receives a 2% weighting and represents the percentage of students who graduated in the top 10% of their high school class (national universities).
  • Alumni giving (3%) – The average percentage of living alumni with bachelor’s degrees who gave to their school during the previous two years.
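However the individual indicators are normalized, the categories above ultimately reduce to a weighted sum. The sketch below shows only that final combining step, using the category weights listed above; the function name and the assumption of pre-normalized 0-100 category scores are mine, not part of the published U.S. News methodology:

```python
# Illustrative weighted-sum step for a composite ranking score.
# Weights mirror the U.S. News categories listed above; the assumption
# that each category arrives as a normalized 0-100 score is mine.

WEIGHTS = {
    "graduation_and_retention": 0.22,
    "social_mobility": 0.05,
    "graduate_indebtedness": 0.05,
    "graduation_rate_performance": 0.08,
    "undergraduate_reputation": 0.20,
    "faculty_resources": 0.20,
    "financial_resources": 0.10,
    "student_excellence": 0.07,
    "alumni_giving": 0.03,
}

def composite(scores):
    """Weighted sum of normalized (0-100) category scores."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Sanity check: the nine category weights total 100%.
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
```

The real calculation normalizes each indicator before weighting; this sketch only shows how the weighted categories combine, and why a school strong in the two 20% categories (reputation and faculty resources) can offset weakness in a 3% category like alumni giving.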

The Wall Street Journal/Times Higher Education Ranking Methodology

The Wall Street Journal and Times Higher Education (THE) rankings claim to put student success and learning at the heart of their methodology. They claim to use a Balanced Scorecard approach with 15 indicators, including:

  • Resources (30%) – Looks at the amount of money spent on teaching per student (11%), faculty per student (11%), and research papers per faculty (8%).
  • Engagement (20%) – Looks at student engagement using a THE-commissioned survey (7%), student recommendation (6%), interaction with teachers and students (4%), and number of accredited programs (3%).
  • Outcomes (40%) – Looks at graduation rate (11%), value added to graduate salary (12%), debt after graduation (7%), and academic reputation based on a THE survey (10%).
  • Environment (10%) – Looks at the proportion of international students (2%), student diversity (3%), student inclusion (2%), and staff diversity (3%).

The Washington Monthly Ranking Methodology

Washington Monthly states that it ranks four-year schools based on their contribution to the public good in three equally weighted categories: social mobility, research, and providing opportunities for public service. Their methodology is different from the first three.

The social mobility portion of the Washington Monthly rankings is also used for its “Best Bang for the Buck” ranking. Graduation rates are included for all students and measured over eight years.

The analysts used Integrated Postsecondary Education Data System (IPEDS) data comparing graduation rates of Pell and non-Pell students to develop a Pell graduation gap measure. Colleges with higher Pell graduation rates than non-Pell rates scored positively on this measure.

Data from IPEDS was also used to measure a college’s affordability. From IPEDS, the analysts used the average net prices paid by first-time, full-time, in-state students with family incomes below $75,000 per year over the past three years.

Other measures of financial success were changed, due to changes in the College Scorecard. A new measure this year is the share of students earning more than 150% of the federal poverty line three years after graduating.

The student loan repayment rate is the percentage of the original loan that is still outstanding five years after graduation. Average rates above 100% indicate no progress, while average rates below 100% indicate progress.
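The repayment-rate measure above is a straightforward ratio. A minimal sketch, with the function name and loan amounts invented for illustration:

```python
# Sketch of the loan repayment rate described above: the share of the
# original loan still outstanding five years after graduation. Above 100%
# means the balance grew (no progress); below 100% means progress.
# The loan amounts below are invented for illustration.

def repayment_rate(outstanding_balance, original_loan):
    """Outstanding balance as a percentage of the original loan."""
    return 100 * outstanding_balance / original_loan

print(repayment_rate(9_000, 10_000))   # 90.0  -> borrower is making progress
print(repayment_rate(10_500, 10_000))  # 105.0 -> balance grew, no progress
```

A balance can exceed the original loan when accruing interest outpaces payments, which is why averages above 100% are possible.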

The research score for national universities is based on five measurements from the Center for Measuring University Performance and the National Science Foundation. For liberal arts colleges and non-Ph.D. granting universities, science and engineering Ph.D.s were excluded. Double weight was given to the number of alumni who go on to earn Ph.D.s.

The community service score was determined by measuring each college’s performance across several measures. The percentage of graduates serving in AmeriCorps and Peace Corps was combined for one metric. An indicator was used to determine if colleges provided matching funds for undergrad students who received a Segal AmeriCorps Education Award.

Military service was measured by collecting data on the size of each college’s ROTC programs. (Note: American Public University System (APUS), the institution where I served as president, enrolls more than 50,000 students who are active-duty servicemembers or veterans. This methodology does not provide APUS any credit since ROTC programs are only available for residential campuses.)

Federal work/study money used on community service projects was a measure of a college’s priority on community service. (Note: work/study programs are only available at residential colleges.) Colleges holding the Carnegie Community Engagement Classification earned two points. Colleges could also earn up to six points for various voting engagement activities.

The Flaws of These College Rankings

College rankings are very important to schools that are ranked highly by any of these entities. While there are several classifications, I chose to list the schools ranked in the national rankings section for comparison purposes.

The national rankings favor schools that serve a traditional undergraduate population on a residential campus and universities with notable research programs. Graduation rates for traditional students are higher than those for non-traditional students. Traditional students live in a residential environment, supported by their parents and/or student loans, and are encouraged to participate in community service and other activities.

I noticed that none of the rankings evaluated colleges on the percentage of their students who worked full-time, were heads of household, or were parents. None of the rankings evaluated and rewarded colleges that serve veterans or active-duty servicemembers, and none rewarded colleges for educating and graduating certificate students.

College rankings are complicated, and many colleges and universities spend money in a quest to improve their rankings and theoretically enroll more “high-quality” students. For parents of traditional students, the rankings may be helpful.

For non-traditional students, the College Scorecard may be a better source, but it is also skewed toward traditional student measurements. If anyone is interested in creating a ranking for schools that serve a majority of non-traditional students, let me know.
