Accountability in Assessment: Projections for Higher Education
Assessment has been part of higher education since its earliest learning experiences, but today the term usually refers to evaluating student learning across a program or university. Since the 1980s, the field has developed to support a program’s self-evaluation, to gauge achievement of the school’s mission, or to determine success in a particular area, such as writing. Regional accrediting bodies have placed increased focus on assessment of student learning but have generally left the details of how it happens to individual universities. As long as a university had established processes and followed them with the goal of improvement, the accrediting body was satisfied.
The Spellings Commission in 2006 and the Obama administration’s 2013 higher education ratings proposal have prompted discussions about the assessment of student learning and a focus on external accountability for colleges and universities. Easy comparison between colleges, using “scorecards” or other rating systems, seems to be the most desired outcome. Yet finding valid comparisons among large research universities, Ivy League institutions, state colleges, private colleges, and community colleges is a challenge, given the diversity of student ability upon entering school and the variation in institutional missions. Assessment refers to student learning, not retention, graduation, or future employment, although assessment of learning can contribute to success in those areas.
Various methods of assessing student learning have become prevalent, but most have drawbacks as well. Common tests can provide useful information about student success. For example, the Educational Testing Service offers tests in general education areas (the Proficiency Profile) as well as specific program areas (Major Field Tests). The Council for Aid to Education offers the Collegiate Learning Assessment, and other companies develop tests for specific programs. However, unless programs are closely aligned with the tests, scores can’t easily be compared across schools, and standardized testing is poorly suited to some areas of learning, such as creativity, self-reflection, and the ability to carry out an independent project. Those areas are better assessed through more individual activities, usually in the classroom. Assignments such as research papers, independent projects (alone or in groups), and case studies give students more opportunities to express themselves individually but lack the comparison options that come with testing. Practicum experiences give individual students real-life situations to work on and solve but, again, do not lend themselves to comparison.
Colleges often rely on faculty-developed and graded assignments, using rubrics to clarify various aspects of the assignment. While these assignments closely relate to the individual program, not much can be made of comparisons between programs, between schools, or even over time. The increase in online learning has affected assessment, although mostly in the sense of ensuring that students in online courses and in on-ground courses have the same opportunities to show their abilities. In some cases, this has resulted in increased use of learning management software in exclusively on-ground classrooms, so that all students take exams in the same way or all students contribute to online discussions during the week.
Some organizations are trying to facilitate common ground in assessment without making specific requirements. For example, the Lumina Foundation provides structures, such as the Degree Qualifications Profile and Tuning USA, that clarify common student knowledge and abilities regardless of major. But measurement of those proficiencies and abilities usually happens through university-developed assignments, which limits comparisons. The National Institute for Learning Outcomes Assessment (NILOA) has begun curating a collection of faculty-developed assignments that align with the proficiencies, but variations in the stringency of grading and issues of grade inflation still affect any subjective assignment.
Many organizations and individuals in assessment have been working to achieve these goals of commonality and comparison. Others have gone in a completely different direction, emphasizing individuality in student learning. Some of the work in this area has led to competency-based learning, in which student learning may be completely separated from the credit hour and assessment relies on the student demonstrating one specific competency at a time. Others focus on students building a portfolio of work or completing an independent project that makes a meaningful contribution to the community.
Technology may provide options for both consistency and individual response. Simulations of real-life experiences in the field, in which students must make quick decisions, work with others in a virtual environment, and reflect on their actions, would be an excellent way to assess what students have learned. Few such products currently align with specific college majors, however.
The future of assessment holds more questions than answers. Is it vital to be able to compare colleges to one another, given their innate differences? Is it meaningful to say that students are proficient in aspects of a standard framework when the grading is subjective? Will non-traditional formats such as competency-based degrees become more prevalent? There are forces pushing for more comparison and consistency as well as for more authentic individual assignments, and there may be strain between these competing demands. All of those involved, from assessment professionals to faculty and data managers, will need to be flexible about what is required, committed to student learning, and agile when new ways of measuring student learning are needed. Assessment of student learning has changed since the 1980s and will continue to do so as we learn more about student learning and are asked for more accountability.
About the Author
Julie Atwood is the Director of Assessment at the American Public University System. She has worked in higher education for more than 20 years in the areas of adult learning, program evaluation and assessment. She earned her M.Ed. from the University of North Carolina at Greensboro in 2001.