Note: This article is part 1 of a two-part series on instructional spending tests, recommended policies, and the realities of higher education.
Alexis Gravely’s recent Inside Higher Ed article, “The Debate Over Instructional Spending Policies,” reports on a Veterans Education Project paper detailing the limitations of instructional spending tests for college accountability.
There are currently no instructional spending tests for college accountability, and in my opinion there shouldn’t be. However, Ms. Gravely notes that such tests have been recommended by Connecticut’s Democratic Senator Chris Murphy, the Washington, D.C. think tank Third Way, and the New York think tank The Century Foundation.
These proposals are designed to measure how much an institution spends on instruction as a percentage of tuition so that those institutions with “poor student outcomes” are judged on how they use their resources. In other words, you can have poor student outcomes if you spend a large amount of money on instruction, but not if you don’t.
From my perspective, the definitions of “poor student outcomes” and of the “relationship of instructional cost to quality” are mostly political sound bites rather than a well-researched paradigm. I have never seen a well-researched academic paper that attributes good student outcomes to the share of revenue an institution spends on instruction.
In order to more appropriately assess the Veterans Education Project paper, I decided to review the three recommendations cited by Ms. Gravely that triggered this paper. I noted that all three appear to have been written in 2019 and decided to review them in sequence of their respective publication dates and then review the recently published paper.
The Century Foundation Proposal
The Century Foundation’s paper is “Examining Instructional Spending for Accountability and Consumer Information Purposes” and was authored by Professor John Cheslock. Mr. Cheslock opens his paper with a statement that “relatively little attention has been paid to the spending patterns of colleges and universities and the communication of those patterns to the public.” He notes upfront that he wants to look at spending on instruction and the degree to which dollars collected should be spent on instruction.
Mr. Cheslock writes that the best source of data is the Integrated Postsecondary Education Data System (IPEDS) finance survey, which uses data collected annually by the Department of Education. All institutions that participate in the Federal Student Aid program must provide the data. He notes that the data is not necessarily consistent across institutions because of the differences in how they report their spending (an issue I have written about for years).
Mr. Cheslock writes that there is an IPEDS category labeled instructional expenses (referred to as INSTR) and lists the components that IPEDS includes in this category in an appendix. He adds that there are educational and related expenditures in another IPEDS category (referred to as E&R). These expenditures include student services, academic support, and institutional support.
He argues that not all expenses in this category should be included, especially admissions expenses related to a “greater focus on enrollment management.” Mr. Cheslock provides more details of which expenses are included in E&R in the appendix. The final form of each category he reviews is used as the numerator in his ratios of instructional expenditures to dollars collected.
To measure dollars collected for instructional purposes, Mr. Cheslock again looks to IPEDS. Collected tuition and fee revenue (CTFR) is the revenue collected from external sources to cover tuition and fees. Collected tuition and fee revenue plus operating subsidies’ share (CTFR+OSS) represents CTFR plus the instructional share of appropriations from state and local governments (OSS).
Professor Cheslock then builds several tables using the ratios of INSTR to CTFR, INSTR to CTFR+OSS, E&R to CTFR, and E&R to CTFR+OSS for all institutions that report to IPEDS. You don’t have to be a CPA to know that, given the problems with consistency in data classifications, grouping institutions into ten percentile bands may not produce realistic ratios.
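For readers who want to see the arithmetic, the four ratios can be sketched in a few lines of Python. The dollar figures below are hypothetical, and the function and field names are mine for illustration, not IPEDS variable names.

```python
# A minimal sketch of the four Cheslock-style spending ratios, expressed
# as percentages of dollars collected. All inputs are hypothetical.

def cheslock_ratios(instr, er, ctfr, oss):
    """Return the four spending ratios as percentages.

    instr -- IPEDS instructional expenses (INSTR)
    er    -- educational and related expenditures (E&R)
    ctfr  -- collected tuition and fee revenue (CTFR)
    oss   -- instructional share of state/local appropriations (OSS)
    """
    return {
        "INSTR/CTFR": 100 * instr / ctfr,
        "INSTR/CTFR+OSS": 100 * instr / (ctfr + oss),
        "E&R/CTFR": 100 * er / ctfr,
        "E&R/CTFR+OSS": 100 * er / (ctfr + oss),
    }

# Example: $40M on instruction, $55M E&R, $50M tuition revenue,
# and a $30M instructional share of appropriations.
ratios = cheslock_ratios(40e6, 55e6, 50e6, 30e6)
for name, pct in ratios.items():
    print(f"{name}: {pct:.1f}%")
```

Note how sensitive the answer is to the denominator: the same institution reports 80% of tuition spent on instruction under INSTR/CTFR, but only 50% once state subsidies are counted, which is why the data-classification inconsistencies matter so much.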
Mr. Cheslock concludes his report with a section titled “Future Inquiry.” He acknowledges the inconsistencies in data reporting to IPEDS and also notes that there is no other data source for tracking expenditures across all institutions. He writes that any future work would have to answer five questions related to data limitations. These questions are:
- What types of spending should be counted as instruction?
- What resources are collected for instruction?
- What values for spending ratios signal an improper level of spending on instruction?
- Should spending ratios be interpreted differently across different contexts?
- If spending ratios were used for consumer information and accountability purposes, would higher education institutions respond strategically in undesired ways?
Professor Cheslock does not suggest running regressions to determine whether his proposed ratios correlate significantly with academic quality (however measured: graduation rates, loan default rates, or earnings).
The Third Way Paper
Third Way’s paper, “Using Instructional Spending to Test for Value in Higher Ed,” was published two months later, in April 2019. Co-authors Wesley Whistle and Lanae Erickson open with the premise that not many higher ed institutions offer students a return on their investment.
Up front, they offer a definition of “the problem,” which is that poor student outcomes exist across every type of institution in our higher education system. According to Whistle and Erickson, far too many institutions do a “poor job of equipping students to get a degree, earn a good wage, and repay their loans.”
The first data point they cite is the 127 institutions that fail to graduate at least 20% of their students within eight years. The authors also cite a recent study indicating that 17% of institutions do not provide their graduates with an education that allows them to earn more than high school graduates. They do not link these two facts statistically, but the way they are presented might lead one to think they are linked.
Whistle and Erickson note that institutions have different levels of resources to support their ability to improve student outcomes, and that the federal government has no way of controlling how different institutions spend their money. They do not note that institutions may recruit distinctly different populations of students, whose persistence patterns (or lack thereof) may be attributable to a lack of college preparedness or a lack of time for college, given family or workplace demands.
They state that as Congress considers reauthorizing the Higher Education Act and increasing institutional accountability for outcomes, it should consider a “resource test” to tailor penalties to institutions that choose not to work to improve student outcomes, as opposed to institutions that would like to improve outcomes but lack the resources to do so.
Their proposal is to link instructional expenditures to available resources. They recommend using IPEDS to look at a ratio of Expenditures on Instruction per Full-Time Equivalent (FTE) compared to Net Tuition per FTE. They suggest that schools that are spending very little on instruction and that have poor outcomes be removed from the Federal Student Aid program. Not once in their paper do they mention that some schools may be benefiting by including too many expenses in their instructional expenses or may be wasting their money on instructional expenses.
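The proposed test can be sketched as follows. This is my reading of the mechanism, not code from the paper; the thresholds (a 50% spending ratio and a 20% graduation rate) are illustrative placeholders, since Third Way does not publish fixed cutoffs in the passage discussed here.

```python
# Hedged sketch of a Third Way-style "resource test": compare instructional
# spending per FTE to net tuition per FTE, and flag schools that both spend
# little on instruction and post poor outcomes. Thresholds are illustrative.

def resource_test(instr_spending, net_tuition, fte, grad_rate,
                  min_ratio=0.5, min_grad_rate=0.20):
    instr_per_fte = instr_spending / fte
    tuition_per_fte = net_tuition / fte
    spending_ratio = instr_per_fte / tuition_per_fte
    return {
        "spending_ratio": spending_ratio,
        # Removal from Federal Student Aid requires BOTH low spending
        # and poor outcomes under this proposal.
        "flagged_for_removal": (spending_ratio < min_ratio
                                and grad_rate < min_grad_rate),
    }

# A hypothetical school: $3M instructional spending against $10M net
# tuition across 1,000 FTE students, with a 15% graduation rate.
result = resource_test(3_000_000, 10_000_000, 1_000, 0.15)
print(result)
```

Notice that dividing both sides by FTE cancels out, so the test reduces to the same instruction-to-tuition ratio the other proposals use; the per-FTE framing changes the presentation, not the math.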
Under the heading “Critique and Responses,” Whistle and Erickson list a few areas that should be considered along with their recommendation. These are:
- Spending on instruction isn’t the only factor that matters to improving student outcomes – The authors note that including student support in the expense calculation should be considered, but since advertising expenditures are included with no way to break them out, that category should be excluded until separate expense breakouts are available. This point is similar to the point made in the previous paper by Professor Cheslock when he noted that the IPEDS E&R category includes “enrollment management” expenses.
- Current federal data is limited – IPEDS data has limited value because certain categories of expenses are not disaggregated enough to understand how institutions spend their money.
- Possible bad actors could be missed using this test – The test would not catch everyone. The authors write that of the 22 institutions with graduation rates below 10%, their instructional spending ratios range from 14% to over 650%.
Senator Chris Murphy’s Paper
“Are You Getting What You Pay For?” is the title of a paper issued by Senator Chris Murphy’s office sometime after the other two papers. In its introduction, the paper states that “the real crisis in American higher education is a quality and completion crisis” and cites the 60% higher education completion rate as well as the one million students who default on their student loans each year. The intro also notes that “we cannot afford to keep wasting so much public funding on colleges and programs that simply don’t work for students.”
According to the paper, “an education is a bad deal if it consistently costs too much to attend, leaves graduates in jobs that won’t pay the bills, or burdens students with debt they cannot afford.” After indicating support for several education bills sponsored by the Democratic party, the paper states that federal policy must differentiate between schools that can’t invest more in their students and those that could but won’t. It points out that some schools spend less than a quarter of their tuition teaching students, as if that is a magic threshold that no one should venture below.
The Murphy paper calls for a new accountability framework based on student success outcomes. Step 1 calls for the U.S. Secretary of Education to identify schools that fail either a completion metric or a value metric. Schools identified in this manner would be those whose graduation rate or transfer rate falls below 20% for two out of three consecutive years. The graduation rate would be measured at two years for certificate-granting institutions, four years for predominantly associate degree institutions, and six years for schools that predominantly award bachelor’s degrees.
It’s important to note the double standard in these targets: associate degree institutions are given 200% of normal time, whereas bachelor’s degree institutions are given only 150% of normal time. The standard is biased toward community colleges and fails to recognize that working adults enrolled in bachelor’s degree programs will likely need at least 200% of normal time as well.
The value metric is not defined but several examples are given. These examples include a debt-to-earnings ratio, a price-to-earnings metric, or a strong repayment rate reflecting that students are able to repay their loans.
Schools that fail to achieve any of these metrics would be notified by the Secretary and required to complete a plan addressing the metrics they failed. Additionally, the Secretary would evaluate whether the institutions failing to meet the minimum metrics are investing enough money to enable their students to succeed.
Specifically, the paper recommends that the Secretary of Education compare each institution’s tuition per FTE to its spending on instruction per FTE based on data reported to IPEDS and audited financial statements. Over time, spending on certain student support services could be included.
Schools that come up short on the outcomes metrics would be placed in two groups. Group 1 would include schools that spend less than one-third of every tuition dollar on student instruction (note: this percentage was increased from the one-quarter mentioned in the paper’s introduction). Group 2 would include schools that spend one-third or more of every tuition dollar on student instruction.
Schools failing outcomes metrics that are in Group 1 would be required to list their outcomes on FAFSA forms, on the College Scorecard, and with the Department of Education. They would also be required to place a prominent warning on their website and on application materials alerting all applicants to their status. Failing to meet the benchmark on the same metric for two out of three years would result in the school losing its ability to participate in Federal Student Aid programs.
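Putting the two steps together, the framework's screening logic can be sketched as below. This is my reading of the proposal, with hypothetical inputs; the paper does not publish code or precise implementation details.

```python
# Illustrative sketch of the Murphy paper's two-step screen: an outcomes
# test (graduation or transfer rate below 20% in two of three years),
# then a spending split at one-third of each tuition dollar spent on
# instruction. All inputs below are hypothetical.

def murphy_classification(rates_last_three_years, instr_share_of_tuition):
    """Classify a school under the proposed framework."""
    failing_years = sum(1 for r in rates_last_three_years if r < 0.20)
    if failing_years < 2:
        return "passes"      # not flagged by the outcomes screen
    if instr_share_of_tuition < 1 / 3:
        return "Group 1"     # warning labels; Title IV eligibility at risk
    return "Group 2"         # spends at least one-third on instruction

# A school below 20% in two of the last three years, spending 25 cents
# of each tuition dollar on instruction, lands in Group 1:
print(murphy_classification([0.18, 0.22, 0.15], 0.25))
```

The sketch makes the paper's asymmetry visible: two schools with identical (poor) outcomes are treated differently based solely on the one-third spending line, which is exactly the threshold the paper never justifies empirically.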
The paper also recommends that schools be required to maintain their percentage of students receiving Pell grants so that they cannot improve their outcomes by reducing their enrollment of low-income students.
Lastly, I noted that the paper provides no evidence that the targeted instruction-to-tuition ratios correlate with better student outcomes.
In the next article, I will discuss the Veterans Education Project paper and its significance to instructional spending policies.