
Can a rubric do more than be transparent? Invitation as a new metaphor for assessment criteria


ABSTRACT

‘Transparency’ is frequently invoked when describing assessment criteria in higher education. However, there are limitations to the metaphor: ‘transparent’ representations give the illusion that everything can (and should) be explicated, and that students are ‘seeing through’ to the educators’ expectations. Drawing from sociomaterial perspectives on standards, an argument is made for a different way of conceptualising assessment criteria. ‘Invitational’ enactments offer an alternative metaphor, one which intentionally promotes student learning. Three propositions frame potential use of this metaphor: (1) assessment criteria promote learning when they invite students into a ‘productive space’; (2) assessment criteria coordinate sustained learning by inviting multiple enactments across tasks; (3) assessment criteria develop sophisticated ways of knowing by inviting student reflection. Drawing from both metaphors, teachers can design assessment materials – particularly rubrics, task descriptions and exemplars – which convey their intentions while also prompting students to develop their own ways of working and learning.

Introduction

Assessment criteria are so frequently considered transparent representations of institutional expectations that we forget there may be alternative ways of thinking. Originally, transparency denoted the move from the ‘secret business’ of university assessment to an open and defensible approach (Boud Citation2014). Thus, transparent assessment criteria improve perceptions of fairness and consistency of marking. More recently, transparency also connotes explicit assessment criteria that enhance learning and the student experience (Jonsson Citation2014). From this perspective, students ‘see through’ to institutional expectations; they then grasp what they need to do to complete the task, including the basis for judging the quality of work (Jonsson Citation2014). At face value this seems reasonable and, when asked, students value transparency (Reddy and Andrade Citation2010; Bamber Citation2015). However, transparent assessment criteria are surprisingly problematic.

Being completely explicit is not as easy as it sounds; indeed it may be impossible (Bearman and Ajjawi Citation2018). Attempts to be transparent can therefore lead to lists of mechanistic instructions, which hinder deep and holistic understanding (Torrance Citation2007). This is not ideal. From a learning perspective, assessment criteria should promote sophisticated ways of knowing, including coming to understand what constitutes quality of work (Tai et al. Citation2018).

This paper seeks to present a different way of thinking and talking about assessment criteria in order to meet these aspirations. Our focus here is on how assessment can promote learning. We commence by providing some definitions to guide us through the somewhat contestable landscape. We then review the challenges facing transparent assessment criteria, particularly with respect to student learning. Next, we draw from the sociomaterial literature to develop a case for assessment criteria as enacted rather than represented. We invoke an alternative metaphor for this new way of thinking – the invitation. Finally, we explore the implications for assessment materials, particularly rubrics, and provide three propositions to guide practice.

Definitions

Transparency is rarely defined in the higher education literature and this oversight may be because, at heart, it is a metaphor. In this instance, the metaphor of transparency suggests those on the outside can ‘see through’ to previously hidden processes. Metaphors are powerful (Lakoff and Johnson Citation1980) but nonetheless figures of speech, not literal references. For example, while transparency is associated with audit, it is by no means the same thing (Strathern Citation2000). Therefore, we do not define transparency tightly. Jonsson (Citation2014, 840) describes transparency with respect to assessment in higher education as: ‘[s]tudent awareness of the purpose of the assessment and the assessment criteria’; Strathern (Citation2000) uses the term ‘visibility’. Similarly, we consider transparency a form of ‘seeing through’: a metaphor associated with explicating mechanisms, usually through documentation, so people know ‘what’s going on’.

Assessment criteria are less challenging to define. We are focussed on learning, not on grading; therefore this paper is not concerned with notions of criterion-referencing. To this end, we define assessment criteria to be those artefacts that outline expectations for student work, especially characteristics that denote attainment of standards or describe aspirational quality. Generally, this function is associated with rubrics but by this definition can include aspects of task descriptions, learning objectives, exemplars and so on. It is through these generally written representations that academics, administrators and students jointly understand what constitutes achievement. We will frequently refer to rubrics throughout this paper; they are by no means the only form of assessment criteria, but they are the form most often associated with ‘transparency’.

The challenge of transparent assessment criteria: views from the tertiary education literature

In seeking different ways of thinking about assessment criteria, it is worth reviewing the problems with transparency, which have been extensively explored over the last fifteen years (O'Donovan, Price, and Rust Citation2004). Textual explanations such as rubrics may clarify expectations, reduce anxiety, and improve self-efficacy and self-regulation (Panadero and Jonsson Citation2013). However, there is general agreement regarding the problematic nature of over-specification and associated reductionist approaches in higher education (O'Donovan, Price, and Rust Citation2004; Sadler Citation2007; Torrance Citation2007; Boud Citation2017). As Sadler (Citation2007, 390) notes, in his critique of post-compulsory education in the UK: ‘The assessment criteria have been reduced to pea-sized bits to be swallowed one at a time – and for each bit, once only.’ Studies of university student perspectives support these concerns. Some students view transparency of assessment primarily as a means to get better marks and therefore explicit instructions lead to ‘gaming’ or instrumentalism rather than attainment (Norton Citation2004; Bamber Citation2015). Bell, Mladenovic, and Price (Citation2013) found that half of their student sample thought of assessment materials as a ‘recipe’ book for achieving lecturer expectations.

Issues with transparency appear to be most bothersome with complex, authentic pieces of assessment (Norton Citation2004; Sadler Citation2007; Jonsson Citation2014), which require deep rather than mechanistic thinking. In these instances, students often find it difficult to understand expectations and teachers find it impossible to articulate what these expectations are (O'Donovan, Price, and Rust Citation2004). As Jonsson (Citation2014, 841) notes: ‘even if transparency may be considered desirable in order to promote student learning, it does not seem to be easily attained’. If researchers struggle with how assessment materials provide meaningful transparency, these difficulties are further compounded within the day-to-day situations of teaching and learning. Torrance (Citation2007) describes teachers overspecifying and thereby limiting learning through finely explicated criteria. Dawson (Citation2015) notes the multiple and confusing uses of rubrics in higher education.

One solution is to ‘do transparency better’. This is most prominent in the rubric literature, where authors seek to identify features of effective rubrics (Jonsson Citation2014) or associated factors which improve their effects (Panadero and Jonsson Citation2013). Authors also suggest additional processes and tasks that surround the written criteria. These include: tasks that build an understanding of the purpose and criteria of the assessment (Norton Citation2004; O'Donovan, Price, and Rust Citation2004); self and peer assessment (Panadero and Jonsson Citation2013; Jonsson Citation2014); and teacher explanations (Jonsson Citation2014). While useful, these seem to stretch the notion of transparent assessment criteria into something else again. Additionally, transparency itself has been problematised as a socially constructed device which serves other agendas (Strathern Citation2000; Orr Citation2007; Jankowski and Provezis Citation2014) and this is also true for assessment criteria (O'Donovan, Price, and Rust Citation2004; Bearman and Ajjawi Citation2018).

In order to move past this conundrum of being transparent but never being transparent enough, we would like to challenge the assumption that artefacts such as rubrics are a straightforward form of communication of expectations. The assessment criteria do not simply and only represent expectations, like some kind of photocopy of the teacher’s mind (Bearman and Ajjawi Citation2017, Citation2018; Ajjawi and Bearman Citation2018). Consider a leading international expert who is asked to fulfil a student project brief and is given an associated rubric outlining expectations. If the expert heeded the rubric, they might produce very different work from their own professional practice. In other words, the rubric doesn’t just depend on a person’s frame of reference; a person’s frame of reference (also) depends on the rubric. The assessment criteria shape students’ notions of what (for example) an assignment looks like and therefore students align their work to its dictates. This is more than transparency; the rubric is prompting students to ‘think’, ‘do’ and ‘make’, not just ‘understand’ expectations. In other words, assessment criteria do more than explicate, they promote (and constrain) activity.

Two views of assessment criteria: representation and enactment

In her sociomaterial study of teachers’ professional standards, Mulcahy (Citation2012) articulates two ways of thinking about and studying standards, and we draw from this work to describe two views of assessment criteria. The first perspective is that assessment criteria represent the intentions of those who created them. This representative view is so dominant that it is taken for granted. It seems self-evident that students come to know the task requirements as explained by the written materials. Here, the metaphor of transparency makes most sense, as the purpose of the assessment criteria is to ‘communicate the expectations to the students’ (Jonsson Citation2014, 840). However, this view also comes with all the critiques outlined previously. The second view forms an essential point of this paper: that assessment criteria can be thought of as performative. This contention is informed by a sociomaterial perspective (Barad Citation2003; Mulcahy Citation2012). From this perspective, the assessment criteria are enacted and the meaning that they have at any given time is emergent and so dependent on who (and what) is present in the doing. As we noted earlier, a rubric can prompt students to ‘read’, ‘think’, ‘do’ and ‘make’ – it becomes ‘an activity in which people might participate’ (Mulcahy Citation2012). In this way, the rubric is significant at the point of enactment: how the students and materials work together to co-produce the assignment. The unique situation of the student and their context is integrated into the task at hand.

But what about consistency and equity across students? One concern about considering a rubric as a locally produced enactment might be around meeting global requirements. What if the student does something entirely different to what we intend? We look again to the standards literature (Timmermans and Berg Citation2003; Mulcahy Citation2011; Tummons Citation2014); these studies suggest that professional standards are always enacted. For example, when any nurse takes your blood pressure, they are following a strict protocol but at the same time making adjustments for your particular circumstances, such as your current state of health, restrictive clothing and recent physical activity. In other words, the nurse is ‘working with’ the protocol rather than ‘following it’. In the same way, rubrics act as coordinating devices, for both students and markers. This coordination is not static; it is made of dynamic and ephemeral enactments that necessarily vary from one student to another and from one marker to another. This is why we can have diverse responses to any project-based assignment – different topics of investigation, different methods and different writing styles – but the assignments generally come to the same point. The assessment criteria, task descriptions and exemplars are not fixed expressions of quality but constitute quality in a slightly different way at any given time or place. The assessment is constantly being remade: similar but different.

We are not suggesting that either representation or enactment presents a better perspective than the other (although the representative view can and should be critiqued due to its taken-for-granted status). We suggest that both views – assessment criteria as representations and as enactments – have value and that teachers constantly engage in ‘strategic juggling of representational ambiguity’ (Mulcahy Citation1999, 97). This juggling is made easier if we remember that transparency is only a metaphor, one which strongly aligns with assessment criteria as representative. Therefore, we suggest that finding a new metaphor will better support the notion of assessment criteria as enactments. In this way, we can move between two metaphors, depending on the value we require at the particular time.

Invitation: a second metaphor for assessment criteria

Metaphors exert significant power over how we think. Sfard (Citation1998, 5) notes: ‘[b]ecause metaphors bring with them certain well-defined expectations as to the possible features of target concepts, the choice of a metaphor is a highly consequential decision’. As already outlined, transparency as a metaphor is well accepted and has many benefits. We have also described its many limitations, including the attempt to explicate the inexplicable and the tendency for students to become overly focussed on recipes for success or ‘gaming the system’. We suggest that a partner metaphor could usefully emphasise enactments and provide a means for teachers to counter these challenges.

We propose a new metaphor – assessment criteria as invitations. This avoids the pitfalls of transparency as ‘seeing through’ to institutional intentions by focussing on prompting students to ‘work with’ assessment criteria (Bearman and Ajjawi Citation2018). Invitational assessment criteria can provide valuable opportunities for students to make meaning, in particular with respect to holistic, dynamic and highly tacit concepts which are poorly captured in writing. Teachers can therefore consider how their assessment artefacts promote learning activities – such as thinking, studying, regulating, writing, devising, and interacting. This highlights the value of the pedagogies associated with assessment criteria (Norton Citation2004; O'Donovan, Price, and Rust Citation2004; Panadero, Alonso-Tapia, and Reche Citation2013), which promote dynamic opportunities for students to ‘work with’ assessment materials.

Of course invitation as a metaphor has limitations. In the same way that transparency is not the same as explicitness, invitations are not literally made and students may not have an understanding of what enacting criteria means. The key argument of this paper is that teachers (and students) can use both metaphors as and when they are useful. In the same way that Sfard (Citation1998) suggests learning can be both acquisition and participation, it is possible that assessment criteria texts such as rubrics can be both transparent and invitational. Thinking with both metaphors allows teachers to interrogate their own materials more critically and design them more carefully by attuning to what the student does.

Invitation as a metaphor: implications for assessment practice

The literature suggests that we know how to make rubrics and other assessment materials representative (Reddy and Andrade Citation2010). So, if we don’t need to explain more or clarify further, what else should our assessment criteria be doing? With the enactment metaphor, criteria must necessarily work in concert with the task design and the particular circumstances of the student, as students enact the criteria through doing the task. We offer three propositions to frame how to design assessment criteria as invitational enactments. We illustrate our points using rubrics as these provide practical and concise examples; however, these propositions apply across all assessment criteria artefacts.

Proposition 1: assessment criteria promote learning when they invite students into a ‘productive space’

Our alternative metaphor suggests assessment criteria such as rubrics should invite rich student learning. However this leads to the question: how should we phrase our invitations? Highly directive formats (e.g. rubrics with items like ‘has provided three references with correct format’) leave little room for meaningful enactments although they certainly prompt the students to complete a task. On the other hand, criteria that are associated with tasks that are too abstract or criteria which cannot be grasped by students (e.g. instructions like ‘write a coherent essay’) are equally uninviting.

In order to promote meaningful enactments of criteria, we propose that materials should: (1) set appropriate boundaries for the assessment activity, taking account of the particular context of the student (such as their level, the course, a particular teacher and other associated social and material circumstances); and (2) within these boundaries, invite liberal opportunities for students to contribute their own thinking and doing to the development of quality work. These invitations for student contribution define the ‘productive space’ of the assessment. In other words, the student’s context, the task and the criteria all work together and cannot be considered in isolation.

There are three major implications for assessment criteria. Firstly, generic rubrics or similar, which are used repeatedly across tasks, by themselves do not take account of student context and therefore cannot set boundaries for the productive space. As such, generic rubrics may not provide useful invitations, unless care is taken with the other elements of the assessment such as task descriptions, exemplars and teacher explanation. Secondly, descriptions of quality should contain salient details that can inform the students’ actions. This will avoid some of the flaws described by Popham (Citation1997) of rubrics being overly precise or too broad and, with some thought, these descriptors can also allow for student work that exceeds the educators’ imaginings (De Vos and Belluigi Citation2011) within the bounds of the task. Finally, the criteria must be explicit or transparent ‘enough’ for the student to understand the invitation. In this way, the criteria can invite rather than dictate and allow for variation to emerge between students and across markers.

What does this look like in practice? For example, in a science lab report, we might have a generic criterion, such as the one provided on a University of Michigan website: ‘Graphs, charts, tables …: Are all relevant figures included? Are figures and axes labelled appropriately? Do they only ever contain appropriate information? Are the tables redundant with the figures?’ (University of Michigan, Citationn.d.). These are very much aligned with the transparency metaphor and, while the question format appears invitational, the student can only respond ‘yes’ or ‘no’ and cannot generate their own responses. This is a statement that represents desired achievements and while there are markers of quality, they tend to be captured in words like ‘appropriate’ and ‘relevant’, which are difficult for students to understand if they don’t already know what they mean. We suggest the following criterion may represent an invitational version, one that prompts enactments: ‘Visual representations of lab report data: The charts, graphs and tables align with the conventions of lab report labelling and formatting with the data accurately represented in an easily readable format.’ In this way the criterion invites the student to seek the appropriate conventions and enact concepts such as accuracy and readability. However, in order to make sense of this, the students must look for, or be provided with, examples of the conventions, including accurate representation and readability. Additionally, the task itself must generate data which can be appropriately visualised. And all this takes place within particular social and material circumstances: how a teacher explains the work to the students; how a distance student might work on a lab simulation; how the lab group works together in an embodied manner with the ‘stuff’ of science; and how particular conventions of science privilege some forms of information and disendorse others.

Switching metaphors shifts the teacher’s purpose from being explicit to inviting enactments. While this points to many of the same educational approaches suggested by the transparency literature, such as alignment with assessment (Jonsson Citation2014), exemplars (O'Donovan, Price, and Rust Citation2004) and teacher explanation (Jonsson Citation2014), it allows a focus on learning rather than clarification. For example, the perpetual fear with exemplars is that students will use them as ‘model answers’ (Handley and Williams Citation2011); Carless and Chan (Citation2017, 931) consequently suggest exemplars should always be used dialogically to promote ‘student understanding of goals and standards’. However, this metaphor takes exemplar use one step beyond understanding: our alternative framing asks teachers to consider exemplars as an invitation to a productive space. Teachers can then consider how students ‘work with’ the exemplars rather than ‘see through’ them (Bearman and Ajjawi Citation2018).

Developing rubrics, tasks and exemplars that invite students into activity presents a different challenge to making an assessment ‘transparent’. A ‘productive space’ necessarily requires formats that discourage ‘gaming the system’ but at the same time emphasise that meaning-making can be generated within clear boundaries. In seeking to design materials and write rubrics, teachers can ask: What can students contribute that will help them come to enact the criteria? How can the assessment criteria be written in a way that invites them to do so?

This concept of maximising the ‘productive space’ assists with how educators may apply the metaphor of the invitation. It is not an opaque version of assessment, where the students have to guess what the lecturer is thinking, but it also avoids the trap of pure transparency, with its aspirations of being explicit and knowable. In short, an invitation to the ‘productive space’ provides an opportunity for students to make meaning through enacting the criteria in a holistic way. However, there is no recipe; promoting student learning is dependent on the level of the student, the purpose of the assessment and the criteria. This aligns with the sociomaterial foundations of this metaphor: enactments always emerge in a particular place and time.

Proposition 2: assessment criteria coordinate sustained learning by inviting multiple enactments across tasks

The previous proposition focussed on a singular piece of assessment. However, from a sociomaterial perspective, it might be worth asking: what work does an assessment criterion do – and where and when? When viewed in this way, it can be seen that criteria can coordinate activity across time and space – this could be across a semester, over the course of a year, or indeed over the course of a career. For example, a medical student who comes to know the characteristics of good communication with patients draws on this understanding both within their degree and many years later as a senior doctor. Indeed, Boud (Citation2000) argues assessment should both promote learning for the present and be sustainable, that is, sustain future practice.

Assessment criteria have a key role to play in making assessment sustainable through building students’ capacity to make evaluative judgements about their own work after formal education has finished (Boud Citation2000; Tai et al. Citation2018). A singular assessment task, however well designed and appropriately supported, may not be sufficient. Boud and Molloy (Citation2013) suggest that assessment promotes and sustains learning when students have multiple opportunities to make meaning of holistic and dynamic indicators of quality, through drawing on feedback information from teachers and peers. We suggest that assessment criteria such as rubrics can usefully invite multiple enactments across a range of tasks with similar criteria. These repeated invitations to produce similar work may assist students to understand the gaps and contextual challenges of producing ‘good’ quality work in different circumstances. Among other benefits, this illustrates to students that characteristics of quality are relevant to multiple occasions and beyond the confines of a specific task.

We illustrate this with another criterion published on the internet. The Eberly Center at Carnegie Mellon provides an exemplar rubric for a philosophy paper, which suggests that for a student to receive an ‘excellent’ regarding the ‘premises’ of their assignment, their paper should meet the following conditions:

Each reason for believing the [central argument of the paper] is made clear, and as much as possible, presented in single statements. It is also clear which premises are to be taken as given, and which will be supported by sub-arguments. The paper provides sub-arguments for controversial premises. If there are sub-arguments, the premises for these are clear, and made in single statements. The premises which are taken as given are at least plausibly true. (Carnegie Mellon University Citationn.d.)

We can see that this fulfils the ‘transparency’ agenda by outlining the characteristics of what is expected. But what does it invite? As with the previous example of the lab report, we can see the disciplinary influence within the wording of criteria (Bearman Citation2018): this is a criterion for a philosophy paper and invites the student to present an argument like a philosopher. Therefore, this criterion is relevant beyond the particular task and can be used by students when completing different but similar tasks. By focussing on the invitation metaphor, teachers can use criteria to deliberately coordinate work across a unit or a course.

We suggest one pedagogic strategy for doing this coordination is to vary the assessment task in non-traditional ways. The time-honoured means of learning a criterion like the one suggested above is to repeat the same assessment, such as an essay, multiple times. This aligns with the transparency metaphor: through this activity the students will come to ‘see’ the criteria. We suggest, drawing on Proposition 1 above, that if we think of this criterion as an invitation into the productive space, then we should think about how multiple enactments of the criterion take place within different boundaries. For example, when students first commence in philosophy, they may have short tasks that purely focus on outlining the premises of an argument using the criterion above. These need not be written: they could take place, for example, within a class debate. Alternatively, later, they may be asked to write work that highlights ‘controversial’ premises. In this way, these multiple enactments allow students to grapple with complex, tacit notions of quality even as they are (re)constructing them in various forms. These multiple invitations present an alternative way to think about how students learn complex concepts, where, as argued previously, transparency is most problematic (Norton Citation2004; Sadler Citation2007; Jonsson Citation2014). In particular, it allows students to develop a deeper understanding of what quality is, not only through making evaluative judgements about quality but through the enactment of the criteria in multiple contexts.

Proposition 3: assessment criteria develop sophisticated ways of knowing by inviting student reflection

One of the most exciting possibilities for the invitational metaphor may lie in students themselves using it with respect to assessment criteria. For example, teachers can ask students: what do you think this rubric is inviting you to do? This can then lead to a discussion about the breadth of variation rather than a single path to ‘meet the criteria’. Equally, teachers can outline how the criteria invite responses, drawing from exemplars or through class discussion. This highlights the important notion that academic standards are living, dynamic concepts, which develop and change over time.

The most significant value of this final proposition can be found in self-assessment, where students can interrogate their own responses to the assessment criteria ‘invitations’. By explicating how they have responded to the criteria, students focus on the nature of their contribution, not the ‘right answer’. We already have effective and productive self-assessment task designs – where students articulate why and how their work has met the criteria and how this relates to professional practice broadly (Barton et al. Citation2016; Ajjawi and Boud Citation2018) or articulate why and how their judgements differ from expert judgements (Boud, Lawson, and Thompson Citation2015). However, these approaches provide the student with limited opportunities to generate further understandings of the criteria themselves. Asking students to explore assessment materials as invitational can productively build on these approaches by asking how and why the student enacted the criteria. Sample questions are: Which assessment materials inform my work and how? What did I draw from the various exemplars and why? In what ways did I rely on the rubric to influence my thinking? What other possibilities might have fulfilled the brief? What criteria were most difficult to fulfil and why? How do various exemplars diverge from my approach?

We illustrate this with a holistic rubric assessing critical thinking in engineering from the University of Louisville. The aspirational criterion reads in part: ‘ … Accurate, complete information that is supported by relevant evidence. Complete, fair presentation of all relevant assumptions and points of view. Clearly articulates significant, logical implications and consequences based on relevant evidence.’ (University of Louisville, Citationn.d.). This is an extract from a rubric designed to assess written tasks with respect to critical thinking. Asking students to self-assess by reflecting and commenting upon how they have responded to the invitation presented by this criterion adds an additional and valuable element to any associated assignment. In this way, students can discuss how their work is accurate and how this links to the evidence; they can surface any assumptions; and they can consider what ‘significant’ and ‘logical’ mean with respect to their work. This is a very different task to ‘grading’ their own work; this type of self-assessment is intended to promote deep learning.

Asking students to describe the relationship of their work to the assessment criteria performs a number of very useful functions. Firstly, it may build the tacit, holistic understandings of quality around specific criteria (Tai et al. Citation2018) as well as developing the types of self-assessment skills that we all use as unsupervised workers (Boud Citation2000; Boud and Soler 2016). Secondly, this design approach promotes students’ repeated engagement with the necessarily imperfect rubrics, treating them as learning opportunities. This avoids the need to be absolutely transparent and acknowledges the necessary limitations of our assessment texts (O'Donovan, Price, and Rust Citation2004). Moreover, it suggests that markers too must have variation and that sometimes consistency is neither possible nor desirable (Bloxham et al. Citation2016); coming to this understanding helps develop students’ assessment literacy (Price et al. Citation2011). Finally, and most significantly, this design approach helps students move beyond black and white thinking and thereby develop more sophisticated ways of knowing (Perry Citation1968; Hofer Citation2001).

Conclusions

In this conceptual paper, we have interrogated the notion of transparent assessment criteria. Assessment materials are habitually considered to be representative of expert knowledge. Thinking about them as enactments may provide different insights. We provide invitation as an alternative metaphor, which allows us to conceptualise assessment criteria as dynamic enactments. We suggest designing assessment materials with both metaphors in order to promote holistic tacit understandings of criteria and also to avoid ‘secret assessment business’. We provide three propositions to guide the design of invitational criteria. Firstly, assessment criteria should optimise the productive space; that is, they should allow students to contribute usefully to the assessment, in concert with other materials and the student’s context. Secondly, assessment materials should support multiple enactments of similar criteria, coordinating students’ learning across units and courses. Finally, asking students to reflect on how they respond to assessment ‘invitations’ can be used to prompt student learning about the specific criteria as well as the overall nature of criteria. Seeing criteria as an invitation shifts the focus from explaining the tacit criteria to providing opportunities to enact tacit criteria. By drawing from both metaphors, and not privileging one, teachers can design assessment materials which convey their intentions while also prompting students to develop their own ways of working and learning.

Acknowledgements

We would like to acknowledge Professor David Boud for his helpful comments.

Disclosure statement

No potential conflict of interest was reported by the authors.

References

  • Ajjawi, Rola, and Margaret Bearman. 2018. “Problematising Standards: Representation or Performance.” In Developing Evaluative Judgement in Higher Education: Assessment for Knowing and Producing Quality Work, edited by D. Boud, Rola Ajjawi, Phillip Dawson, and Joanna Tai, 57–66. Abingdon: Routledge.
  • Ajjawi, Rola, and David Boud. 2018. “Examining the Nature and Effects of Feedback Dialogue.” Assessment & Evaluation in Higher Education 43 (7): 1106–1119. doi:10.1080/02602938.2018.1434128.
  • Bamber, Matthew. 2015. “The Impact on Stakeholder Confidence of Increased Transparency in the Examination Assessment Process.” Assessment & Evaluation in Higher Education 40 (4): 471–487. doi:10.1080/02602938.2014.921662.
  • Barad, K. 2003. “Posthumanist Performativity: Toward an Understanding of how Matter Comes to Matter.” Signs: Journal of Women in Culture and Society 28 (3): 801–831. doi: 10.1086/345321
  • Barton, Karen L., Susie J. Schofield, Sean McAleer, and Rola Ajjawi. 2016. “Translating Evidence-Based Guidelines to Improve Feedback Practices: The interACT Case Study.” BMC Medical Education 16 (1): 53. doi: 10.1186/s12909-016-0562-z
  • Bearman, Margaret. 2018. “Prefigurement, Identities and Agency: The Disciplinary Nature of Evaluative Judgement.” In Developing Evaluative Judgement in Higher Education: Assessment for Knowing and Producing Quality Work, edited by D. Boud, Rola Ajjawi, Phillip Dawson, and Joanna Tai, 147–155. Abingdon: Routledge.
  • Bearman, Margaret, and Rola Ajjawi. 2017. “Actor-network Theory and the OSCE: Formulating a New Research Agenda for a Post-Psychometric Era.” Advances in Health Sciences Education 23 (5): 1037–1049. doi:10.1007/s10459-017-9797-7.
  • Bearman, Margaret, and Rola Ajjawi. 2018. “From ‘Seeing Through’ to ‘Seeing With’: Assessment Criteria and the Myths of Transparency.” Frontiers in Education 3: 96. doi:10.3389/feduc.2018.00096.
  • Bell, Amani, Rosina Mladenovic, and Margaret Price. 2013. “Students’ Perceptions of the Usefulness of Marking Guides, Grade Descriptors and Annotated Exemplars.” Assessment & Evaluation in Higher Education 38 (7): 769–788. doi:10.1080/02602938.2012.714738.
  • Bloxham, Sue, Birgit den-Outer, Jane Hudson, and Margaret Price. 2016. “Let’s Stop the Pretence of Consistent Marking: Exploring the Multiple Limitations of Assessment Criteria.” Assessment & Evaluation in Higher Education 41 (3): 466–481. doi:10.1080/02602938.2015.1024607.
  • Boud, David. 2000. “Sustainable Assessment: Rethinking Assessment for the Learning Society.” Studies in Continuing Education 22 (2): 151–167. doi:10.1080/713695728.
  • Boud, David. 2014. “Shifting Views of Assessment: From Secret Teachers’ Business to Sustaining Learning.” In Advances and Innovations in University Assessment and Feedback, edited by Carolin Kreber, Charles Anderson, Noel Entwhistle, and Jan McArthur, 13–31. Edinburgh: University of Edinburgh.
  • Boud, David. 2017. “Standards-Based Assessment for an Era of Increasing Transparency.” In Scaling Up Assessment for Learning in Higher Education, edited by David Carless, Susan M. Bridges, Cecilia Ka Yuk Chan, and Rick Glofcheski, 19–31. Singapore: Springer Singapore.
  • Boud, David, Romy Lawson, and Darrall G. Thompson. 2015. “The Calibration of Student Judgement Through Self-Assessment: Disruptive Effects of Assessment Patterns.” Higher Education Research & Development 34 (1): 45–59. doi: 10.1080/07294360.2014.934328
  • Boud, David, and Elizabeth Molloy. 2013. “Rethinking Models of Feedback for Learning: The Challenge of Design.” Assessment & Evaluation in Higher Education 38 (6): 698–712. doi:10.1080/02602938.2012.691462.
  • Carless, David, and Kennedy Kam Ho Chan. 2017. “Managing Dialogic Use of Exemplars.” Assessment & Evaluation in Higher Education 42 (6): 930–941. doi:10.1080/02602938.2016.1211246.
  • Carnegie Mellon University, n.d. “Grading and Performance Rubrics.” Eberly Center, Carnegie Mellon University. Accessed June 5, 2019. https://www.cmu.edu/teaching/designteach/teach/rubrics.html.
  • Dawson, Phillip. 2015. “Assessment Rubrics: Towards Clearer and More Replicable Design, Research and Practice.” Assessment & Evaluation in Higher Education 42 (3): 347–360. doi:10.1080/02602938.2015.1111294.
  • De Vos, Mark, and Dina Zoe Belluigi. 2011. “Formative Assessment as Mediation.” Perspectives in Education 29 (2): 39–47.
  • Handley, Karen, and Lindsay Williams. 2011. “From Copying to Learning: Using Exemplars to Engage Students with Assessment Criteria and Feedback.” Assessment & Evaluation in Higher Education 36 (1): 95–108. doi:10.1080/02602930903201669.
  • Hofer, Barbara K. 2001. “Personal Epistemology Research: Implications for Learning and Teaching.” Educational Psychology Review 13 (4): 353–383. doi: 10.1023/A:1011965830686
  • Jankowski, Natasha, and Staci Provezis. 2014. “Neoliberal Ideologies, Governmentality and the Academy: An Examination of Accountability Through Assessment and Transparency.” Educational Philosophy and Theory 46 (5): 475–487. doi:10.1080/00131857.2012.721736.
  • Jonsson, Anders. 2014. “Rubrics as a Way of Providing Transparency in Assessment.” Assessment & Evaluation in Higher Education 39 (7): 840–852. doi:10.1080/02602938.2013.875117.
  • Lakoff, G., and M. Johnson. 1980. Metaphors We Live By. London: University of Chicago Press.
  • Mulcahy, Dianne. 1999. “(Actor-net) Working Bodies and Representations: Tales From a Training Field.” Science, Technology and Human Values 24 (1): 80–104. doi: 10.1177/016224399902400105
  • Mulcahy, Dianne. 2011. “Assembling the ‘Accomplished’ Teacher: The Performativity and Politics of Professional Teaching Standards.” Educational Philosophy and Theory 43 (sup1): 94–113. doi:10.1111/j.1469-5812.2009.00617.x.
  • Mulcahy, Dianne. 2012. “Thinking Teacher Professional Learning Performatively: A Socio-Material Account.” Journal of Education and Work 25 (1): 121–139. doi:10.1080/13639080.2012.644910.
  • Norton, Lin. 2004. “Using Assessment Criteria as Learning Criteria: A Case Study in Psychology.” Assessment & Evaluation in Higher Education 29 (6): 687–702. doi:10.1080/0260293042000227236.
  • O'Donovan, Berry, Margaret Price, and Chris Rust. 2004. “Know What I Mean? Enhancing Student Understanding of Assessment Standards and Criteria.” Teaching in Higher Education 9 (3): 325–335. doi:10.1080/1356251042000216642.
  • Orr, Susan. 2007. “Assessment Moderation: Constructing the Marks and Constructing the Students.” Assessment & Evaluation in Higher Education 32 (6): 645–656. doi:10.1080/02602930601117068.
  • Panadero, Ernesto, Jesús Alonso-Tapia, and Eloísa Reche. 2013. “Rubrics vs. Self-Assessment Scripts Effect on Self-Regulation, Performance and Self-Efficacy in pre-Service Teachers.” Studies in Educational Evaluation 39 (3): 125–132. doi:10.1016/j.stueduc.2013.04.001.
  • Panadero, Ernesto, and Anders Jonsson. 2013. “The Use of Scoring Rubrics for Formative Assessment Purposes Revisited: A Review.” Educational Research Review 9: 129–144. doi:10.1016/j.edurev.2013.01.002.
  • Perry, William G. 1968. Forms of Intellectual and Ethical Development in the College Years: A Scheme. New York: Holt, Rinehart & Winston.
  • Popham, W. James. 1997. “What's Wrong – and What's Right – with Rubrics.” Educational Leadership 55: 72–75.
  • Price, Margaret, Jude Carroll, Berry O’Donovan, and Chris Rust. 2011. “If I was Going There I Wouldn’t Start from Here: a Critical Commentary on Current Assessment Practice.” Assessment & Evaluation in Higher Education 36 (4): 479–492. doi: 10.1080/02602930903512883
  • Reddy, Y. Malini, and Heidi Andrade. 2010. “A Review of Rubric Use in Higher Education.” Assessment & Evaluation in Higher Education 35 (4): 435–448. doi:10.1080/02602930902862859.
  • Sadler, D. Royce. 2007. “Perils in the Meticulous Specification of Goals and Assessment Criteria.” Assessment in Education: Principles, Policy & Practice 14 (3): 387–392. doi:10.1080/09695940701592097.
  • Sfard, Anna. 1998. “On Two Metaphors for Learning and the Dangers of Choosing Just One.” Educational Researcher 27 (2): 4–13. doi:10.3102/0013189X027002004.
  • Strathern, Marilyn. 2000. “The Tyranny of Transparency.” British Educational Research Journal 26 (3): 309–321. doi:10.1080/713651562.
  • Tai, Joanna, Rola Ajjawi, David Boud, Phillip Dawson, and Ernesto Panadero. 2018. “Developing Evaluative Judgement: Enabling Students to Make Decisions About the Quality of Work.” Higher Education 76 (3): 467–481. doi:10.1007/s10734-017-0220-3.
  • Timmermans, S., and M. Berg. 2003. The Gold Standard: The Challenge of Evidence-Based Medicine and Standardization in Health Care. Philadelphia: Temple University Press.
  • Torrance, Harry. 2007. “Assessment as Learning? How the use of Explicit Learning Objectives, Assessment Criteria and Feedback in Post-Secondary Education and Training can Come to Dominate Learning.” Assessment in Education: Principles, Policy & Practice 14 (3): 281–294. doi:10.1080/09695940701591867.
  • Tummons, Jonathan. 2014. “Professional Standards in Teacher Education: Tracing Discourses of Professionalism Through the Analysis of Textbooks.” Research in Post-Compulsory Education 19 (4): 417–432. doi:10.1080/13596748.2014.955634.
  • University of Louisville, n.d. “Holistic Critical Thinking Rubric.” JB Speed School of Engineering, University of Louisville. Accessed June 5, 2019. louisville.edu/ideastoaction/-/files/evaluation/engineering.doc.
  • University of Michigan, n.d. “Sample Laboratory Report Rubrics.” Center for Research on Learning and Teaching, University of Michigan. Accessed June 5, 2019. http://www.crlt.umich.edu/print/220.