
Scope, Cost, or Speed: Choose Two—The Iron Triangle of Assessment


In its purest form, the purpose of assessment is to use evidence to give an account of and improve student learning. But assessment is rarely practiced in its purest form. For many staff and faculty in higher education, assessment is something they are asked or, more likely, required to do. And they are usually asked or required to assess because some external entity demands that their bosses document assessment practices at their institution.

So more often than not, assessment is not practiced in its ideal form as collaborative inquiry into student learning. This means that when we consider the success of assessment as a means of improving student learning, we have to consider the larger external forces that shape how it's practiced.

A classic conceptual tool for project managers is the Iron Triangle. The thrust of the Iron Triangle (Figure 1) is that the quality of any project is dictated by how you prioritize three factors: scope, cost, and speed.

Figure 1. The Iron Triangle

These factors constrain one another. If you want to maintain the quality of a project, increasing its scope will lengthen the project timeline, increase the project cost, or both. Likewise, maintaining the quality of a project while cutting its budget will force you to reduce the scope or extend the timeline.

The Iron Triangle is a way of reminding clients that when it comes to implementing a project, you can't have it all. Big, fast, or cheap—pick two. We think this important reminder also applies to assessment in higher education. In fact, we believe that external forces have already “picked two,” expansive scope and low cost, for assessment at most colleges and universities. And if we are going to improve the impact of assessment on student learning, we need to acknowledge the constraints imposed by the two sides of the Iron Triangle that have already been picked for us.

The First Side of the Iron Triangle: Scope

Assessment is now pervasive in higher education. Most institutions have created learning outcomes and are assessing those outcomes. The use of student work in assessment is increasing, as is the level of faculty engagement (Jankowski et al., 2018). Assessment is at a very different place than it was a decade ago.

It would be nice to think that these trends were driven by growing recognition among faculty, administrators, and other stakeholders of the value of assessment as a means of improving student learning and experience. But that's not the case. While more people are engaged in assessment, and some of them even see its benefits, the increase in assessment is driven largely by the requirements of accreditors. To put it another way, imagine a world in which accreditors did not require any assessment and ask yourself: how many departments and programs at your institution would still be doing the assessment work they are doing now?

When you review institutional self-studies, work with institutions that are in the midst of their self-studies, or attend workshops on “how to succeed at accreditation,” it is clear that calling accreditation-driven assessment requirements “expansive” is an understatement. Whether or not the standards are intended to have this effect, meeting them takes a great deal of time and labor.

We respect the important role that accreditors play in ensuring quality and sound practices in higher education. But one can't deny the workload involved in documenting student learning outcomes and assessment processes for departments, programs, and institutions. How many institutions have accreditation “war rooms,” either digital or physical, filled with documents, reports, and other paraphernalia necessary to create a self-study report? How many have invested in expensive software to aggregate information from departments and programs across campus for accreditation reports? How many have redirected the work of staff, administrators, and faculty to compile and proofread documents so that they meet complicated accreditation standards?

Based on our work with institutions, our sense is that they are devoting ever more resources to meeting accreditation requirements. At a recent meeting of a regional accreditor, we heard the leaders of a workshop state that even though their institution had just been reaccredited, they now see accreditation as an ongoing, continuous process—a process that they referred to as “developing a culture of accountability.” One step toward developing this culture was to create a permanent, high-level administrative position dedicated to doing the work of the next accreditation report. And since the institution could not afford additional support staff, current staff members had portions of their work reassigned to support the new position.

We're not debating the importance or effectiveness of accreditation. Nor are we debating whether standards for learning outcomes assessment should be part of accreditation. Our point is that there is no denying the time, effort, and resources necessary to meet the increasing scope of these requirements. In effect, accreditation has set the terms for one side of the Iron Triangle.

The Second Side of the Iron Triangle: Cost

The second broad trend that affects assessment is a chronic recession in higher education. This downturn, which began before the worldwide 2008 recession and has lingered after it, continues to buffet much of higher education and is shrinking resources at many institutions.

In considering the effects of diminished resources on assessment, it's important to consider what we've learned about how to do assessment so that it is effective. For example, more and more departments, programs, and institutions are using student work, rather than surveys or standardized tests, as a means of gathering actionable evidence about what students are and are not learning. Among the many benefits of using student work in assessment is that it is far more engaging and compelling for faculty than other forms of assessment.

Unfortunately, it is also a labor-intensive approach to assessment. Compared to a standardized test, in which most of the work involves getting students to sit for the test and take it seriously, using student work in assessment requires significant faculty support. Faculty need to create and refine rubrics and then use them to evaluate samples of student work. That evaluation work will only bear fruit if faculty take the time and effort to use what they've learned to revise their assignments, courses, or pedagogy. Moreover, other personnel may be needed to help gather and organize student work, tally the results of faculty evaluations, and even summarize faculty members' reports on what they learned from evaluating student work.

We're not critiquing the use of student work for assessment. We're advocates of this approach. Nor are we critiquing the fact that it is more resource-intensive than using standardized tests. You get what you pay for. Our question is whether the more widespread use of this engaging and actionable approach can be sustained at a time when faculty are being asked to do more and more, and when the funds needed to help them learn how to create, use, and revise rubrics, or to modestly compensate them for their time, are in short supply.

The increasing use of adjunct faculty also makes this approach to assessment more challenging. Not because adjunct faculty are any less capable or willing to learn how to use rubrics, but because many departments and programs have not integrated adjunct faculty in a way that would allow them to make effective contributions to the process. Assessment costs. Effective assessment costs more.

Scope, Cost, and Speed: Choose All Three

Unfortunately, when institutions fail the assessment portion of accreditation, they can end up working on tight timelines to make things right. And failing the assessment portion of accreditation happens often.

In a recent publication of the Association for the Assessment of Learning in Higher Education (AALHE), the Higher Learning Commission reported that about 40% of the institutions in its region fail to meet the assessment component of the criteria for accreditation (Welsh, 2016). In another issue of AALHE's journal, the Southern Association of Colleges and Schools Commission on Colleges reported that 64% of the institutions in its region fail the assessment requirements in the pre-site-visit part of the review process, 36% fail after the site visit, and 23% are asked for an additional “follow up after review” (Eubanks, 2014).

In our experience, getting a warning about assessment from an accreditor tends to focus the minds of administrators on the importance of assessment—or at least on the importance of not getting “dinged” by an accreditor. Too often, such warnings lead to an all-hands-on-deck response—one that trades quality for speed. As the Iron Triangle tells us, if you try to complete big projects on a short timeline with limited resources, quality inevitably suffers. Worse, externally driven, breakneck efforts to “fix” assessment programs can reinforce the idea among faculty and staff that assessment is nothing more than make-work imposed by outsiders who are hostile to their work.

We wonder if this is part of what's behind the resurgence of complaints about assessment. Essays criticizing assessment have surfaced in The Chronicle of Higher Education and The New York Times, arguing that assessment undermines faculty, hogs resources, and does little to improve student learning. Other articles have described assessment as a “theater of compliance” that leads to work that passes muster for accreditors but is so poorly designed that it has little chance of improving student learning.

Although assessment experts disagree with these critiques, it is telling that the advice these experts often give to faculty and staff is to ignore the requirements of regional accreditors and build assessment processes that help them ask useful questions about what their students are learning. Unfortunately, this advice is hard to follow, even if it is what the accreditors truly want, when an institution is responding to an accreditor-mandated, fix-it-now event.

Fulcher et al. (2014) have pointed out that when many faculty and staff talk about the impact of assessment, they refer to changes they've made rather than improvements in learning that resulted from those changes. This is an important point. Although we have worked with departments and programs whose changes did result in improvements in student learning, we've also worked with departments and programs where change alone is the outcome of assessment—change a course, change a lab, change an assignment.

It's not clear whether these changes made a positive impact on student learning. Regardless, such changes will count on a department's reporting grid as “closing the loop.” More depressingly, sometimes the changes that departments and programs highlight are changes to the assessment process rather than changes to anything that students experience. One can close the loop endlessly by changing assessment measures, changing the criterion for demonstrating competency, changing the outcomes, or changing the way that student work is sampled—assessment full of sound and fury, signifying nothing.

Slow Down!

If the scope of assessment is wide, resources are limited, and the deadlines are tight, then quality will suffer. And to be clear about what we mean by quality: quality assessment (1) uses evidence to determine what students are and are not learning; (2) ensures that the factors that benefit learning are maintained while implementing changes that might shore up areas where student learning falls short; and finally, (3) tests whether or not those changes led to improvements in student learning.

The fact that our sentence describing high-quality assessment is nearly 50 words long and has three clauses speaks to the complexity of the process that we're asking faculty and staff to engage in. And if we're going to ask them to engage in a complex process with few resources, then the least we can do is slow things down so that they can do high-quality work—work that will feel worthwhile to them.

We encounter too many places where faculty and staff are working on closing the loop for three or four outcomes a year. How can someone possibly determine whether a change at the program or department level has resulted in improved student learning in one year?

Take the “simplest” level of assessment: using evidence to improve student learning in a course. Even if you have already identified the learning outcome you want to assess as well as your means of assessing it, you're only going to get a baseline in the first iteration of the course. If you decide to change something, you won't be able to see the impact of your change until the second iteration of the course. And even then, you should probably test the impact of the change for a couple more iterations of the course to make sure that the improvement you observed wasn't a fluke.

We recently served as external evaluators for the Collaborative Humanities Redesign Project (CHRP) (see https://cte.ku.edu/chrp). This project guided 27 humanities faculty through a three-year process of using evidence from student work in one of their courses to a) identify changes they would make to improve student learning and b) use student work from subsequent iterations of the course to evaluate the impact of those changes. The project was rooted in the Scholarship of Teaching and Learning, and it required faculty to create a public course portfolio in which they documented the evidence they used both to identify the changes they wanted to make and to evaluate the impact of those changes.

The faculty in CHRP provided sound evidence that their course revisions had improved student learning. The project was a successful assessment effort, and it is an excellent model of how faculty within and across institutions can collaborate to use evidence effectively to improve student learning.

But it took three years for CHRP faculty to do the careful and difficult work of using student work to make clear improvement in one outcome in one course. And that's with exceptional support from project leaders who had expertise in educational development. How long will it take a department or program with 10, 15, or 20 courses to use evidence to make changes that add up to clear improvements in program-level student learning outcomes? And how many cohorts of students should go through the program before we're confident that the improvements are real?

We have worked with institutions to assess their entire general education programs, which may include hundreds of courses and 15 or more outcomes, in a three-, four-, or five-year period. Why the rush? One fear we've heard is that working on only three or four outcomes for a four- or five-year period means that it might take 15 years or longer to assess all of the general education outcomes. That is a long time. But maybe it's the time we need for assessment to fulfill its potential to improve student learning for that many outcomes.

It's interesting to ask how long change takes in industries outside of higher education. We recently ran across an article reporting that it takes the auto industry roughly five years to develop a new paint color for cars (https://www.xrite.com/blog/producing-new-car-colors). If, given far greater resources, it takes an automaker five years to develop a new paint color, why should it take any less time for a department of 10 faculty, with fewer resources and an array of responsibilities outside of assessment, to figure out how to assess and improve one or two learning outcomes from a cluster of courses that may serve hundreds of different students from different backgrounds? Maybe it's time to acknowledge that good work in assessment, like good work in so many other areas of life, will take time.

Slow is Smooth and Smooth has Impact

The way that colleges and universities build and run assessment programs is at the mercy of trends that are larger than our institutions. The decline in resources available to most colleges and universities is undeniable. So too is the scope of assessment required by accreditors. And these trends are themselves the result of other powerful forces.

We do not blame either the economy or accreditors for what's happening. What we do assert is that we need to acknowledge the corner we're in when it comes to assessment and recognize that unless we change the scope, cost, and speed equation, we are inadvertently promoting processes that are better at generating reports than at improving student learning.

In the last couple of years, assessment practitioners have begun calling assessment that uses student work rather than standardized tests “authentic assessment.” We'd like to expand the idea of “authentic assessment” to refer to assessment that includes authentic work from students and authentic work from faculty and staff. That is, assessment work should give faculty and staff the time to ask meaningful questions about what their students are and are not learning, create experiments based on what they learn, and see how those experiments turn out. This will mean allowing a deeper and longer focus on fewer outcomes at a time. It might mean 5 years, 10 years, or even longer to authentically assess a complicated set of outcomes. But our bet is that while authentic assessment of the sort we describe might fill in fewer boxes on our assessment reporting grids, it will be far more likely to result in improved learning.


References

  • Eubanks, D. (2014). Q&A with Mike Johnson. Intersection, October, 6–7.
  • Fulcher, K. H., Good, M. R., Coleman, C. M., & Smith, K. L. (2014, December). A simple model for learning improvement: Weigh pig, feed pig, weigh pig (Occasional Paper No. 23). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
  • Jankowski, N. A., Timmer, J. D., Kinzie, J., & Kuh, G. D. (2018, January). Assessment that matters: Trending toward practices that document authentic student learning. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
  • Welsh, J. (2016). Q&A with Barbara Johnson. Intersection, Summer, 22–24.