Towards signature assessment and feedback practices: a taxonomy of discipline-specific elements of assessment for learning

Kathleen M. Quinlan & Edd Pitt

ABSTRACT

Through an extended commentary on the empirical articles in this special issue, ‘Signature Assessment and Feedback Practices in the Disciplines’, we elaborate the concepts of signature assessment and signature feedback practices by developing a new taxonomy of their elements. We propose that signature assessments focus on conceptual, epistemological, social, material and/or moral dimensions of the discipline. We also propose four discipline-specific sources of feedback information, particularly highlighting what we are calling consequential feedback, which can be generated by users of disciplinary knowledge or objects. Finally, we identify three levels of feedback timing that can support students’ use of feedback information: rhythms, cycles, and spirals. Based on gaps in the literature in relation to this taxonomy, we identify areas for further empirical research that will advance understanding and application of signature assessment and signature feedback practices.

In proposing this special issue on ‘Signature Assessment and Feedback Practices in the Disciplines’, we introduced the notion of signature assessment and feedback practices, extending Lee Shulman’s (2005) concept of signature pedagogies. In this conclusion to the special issue, we build on this unique collection of papers to theorise these concepts and to provide a framework for future research on discipline-specific elements of assessment and feedback.

We first situate the concepts of signature assessment and feedback practices within the assessment for learning (AfL) approach (Stobart, 2008). Just as we are extending a pedagogical concept into assessment and feedback, we deliberately seek a tight connection between learning and assessment and feedback. Thus, following Gravett (2020), we situate the theoretical background to the proposed concept within socio-material learning theories.

We then define our key constructs: disciplinarity, assessment, and feedback. Those definitions come together in a taxonomy of dimensions of signature assessment and feedback practices, which have arisen from careful reading of the papers in this special issue through a socio-material theoretical perspective (Fenwick, 2016; Gravett, 2020). Finally, we consider gaps in research in relation to our taxonomy and propose directions for future work that seeks to further understand and advance signature assessment and signature feedback practices across a range of disciplines.

Assessment for learning of discipline-specific lessons

Assessment for learning (AfL) focuses on ensuring that assessments constructed by teachers are designed to enhance students’ learning (Stobart, 2008). Several principles have been central to this approach, including: a) clarifying and sharing intended learning outcomes and criteria for success; b) designing effective assessment tasks that generate evidence of students’ achievement against those learning outcomes; c) providing feedback that moves learners forward; d) engaging students as instructional resources for each other; and e) activating students as the owners of their own learning through processes such as self-assessment (Black & Wiliam, 2018).

Fulfilling the educational potential of AfL suggests that at least some assessment tasks and processes should reflect the deep and implicit structures of a discipline and its knowledge generation practices. In higher education, the term signature pedagogies (Shulman, 2005) has been used to describe the ways in which students are socialised into these structures, practices, and characteristic habits of mind, heart and hands of a particular profession or discipline:

Signature pedagogies … implicitly define what counts as knowledge in a field and how things become known. They define how knowledge is analysed, criticized, accepted or discarded. They define the functions of expertise in a field, the locus of authority, and the privileges of rank and standing … these pedagogies even determine the architectural design of educational institutions, which in turn serves to perpetuate these approaches. (Shulman, 2005, p. 55)

Researchers are increasingly attending to disciplinary practices in the broader learning and teaching literature (e.g. Anderson & Hounsell, 2007), as well as in their longstanding home in discipline-specific educational journals. However, literature on assessment and feedback still tends to focus on generic concerns, with little attention to the specific disciplinary contexts that shape the design and enactment of those practices (Coffey, Hammer, Levin, & Grant, 2011; Esterhazy, 2018; Wiliam, 2019).

In part, this problem may be a corollary to the observation that, ‘Assessment and learning theories seem to be fields apart …’ (Baird et al., 2017). Baird et al. (2017) argued that, ideally, learning theory should shape the design of assessment and feedback processes and, subsequently, the outcomes of learning. Social-constructivist and sociocultural approaches to learning have largely underpinned the AfL movement, which positions assessment as part of the process of creating a culture of learning (Baird et al., 2014; Shepard, 2000).

These same social-constructivist and sociocultural learning theories have underpinned increased attention to disciplinary contexts, communities, and social practices when analysing learning and teaching practices more broadly. The papers in this special issue help bring learning and AfL together by using social-constructivist and Vygotskian-inspired (Vygotsky, 1978) sociocultural learning theories to underpin analyses of discipline-specific assessment and/or feedback practices. Although specific theories within these broad categories have different emphases, foci, and assumptions, their implications and use in AfL practice often overlap (Baird et al., 2017).

Guided by the broad assumption that learning is entangled in social, cultural, and material contexts (Fenwick, 2016), our aim is to draw attention to key dimensions of disciplinary contexts that need to be considered in AfL practices. The taxonomy we propose can be used to describe the ‘signatures’ of particular disciplines. The notions of ‘signature assessment’ and ‘signature feedback’ practices can enrich the discourse about assessment and feedback and guide the design and critique of existing assessment practices for their authenticity with respect to disciplinarity. Our approach also departs from much of the assessment and feedback literature, which starts with generic concepts and frameworks of assessment and feedback. Instead, we propose starting with analyses of the disciplinary context and seeking ways in which assessment and feedback occur naturally, or might naturally occur, in those contexts. Doing so, we contend, is more likely to challenge and reframe generic models that have been difficult to put into practice (Quinlan, 2016). In describing and illustrating this taxonomy, we draw mainly on examples from the papers in this special issue. Before doing so, we briefly define disciplinarity, assessment, and feedback.

Definitions of disciplinarity

Our taxonomy is built on the concept of disciplinarity. Becher and Trowler (2001) defined disciplines as sites of knowledge production that vary on the basis of two interconnected dimensions: how knowledge is constructed (i.e. an epistemological dimension) and how knowledge communities interact socially (i.e. a social dimension). These two features are highlighted in our taxonomy. Bernstein (2000) argued that these sites of knowledge production can either be highly self-referential and inward-facing (e.g. physics) or overlapping with and in service to fields of practice, such as professional education in engineering or nursing.

Shulman’s (2005) emphasis was on Bernstein’s (2000) outward-facing disciplines insofar as signature pedagogies arose from research on professional education. Shulman argued that, in professional education, students undertake apprenticeships in three domains: ‘A cognitive apprenticeship wherein one learns to think like a professional, a practical apprenticeship where one learns to perform like a professional, and a moral apprenticeship where one learns to think and act in a responsible and ethical manner that integrates across all three domains.’ (Shulman, 2005, p. 3). Thus, Shulman added a practical (physical/material) dimension and an explicitly moral dimension to understanding disciplinarity.

Finally, disciplines typically have a core content or knowledge base, which we refer to as the conceptual dimension. School and university assessments of learning often focus disproportionately on content, with less attention to other aspects of disciplinarity. This taxonomy is intended to help correct that imbalance.

Definitions of assessment and feedback

To adapt Shulman’s concept of signature pedagogy to signature assessment and feedback, we must define both assessment and feedback. Although these two concepts work closely together to produce AfL, we treat each of them in separate sections below. First, an educational assessment is a procedure for making inferences about student learning. Assessment is when ‘Learners engage in tasks, which generate data. These data become evidence when they are used in support of particular claims.’ (Black & Wiliam, 2018, p. 553). Some examples of signature pedagogy, such as case discussions in business, fit Black and Wiliam’s (2018) definition of assessment. Others, such as lecture-like grand rounds in medical education, do not. A disciplinary signature assessment requires that learners do something that generates a product from which one can make inferences about their expertise in that discipline. That is, disciplinarity is built into the goal of learning. As such, we consider discipline-specific tasks and learning outcomes under ‘dimensions of signature assessment’ below.

Feedback is a ‘process where the learner makes sense of performance-relevant information to promote their learning’ (Henderson et al., 2019, p. 17). In educational contexts, performance-relevant information may be highly discipline-specific (e.g. learning to build a robot that retrieves the mail) or more generic (e.g. learning to contribute equitably to a team). Both are important (Wiliam, 2019). In exploring ‘signature feedback’, though, we highlight discipline-specific information. This information can come from various sources. We will consider categories within which to organise discipline-specific sources of feedback under ‘dimensions of signature feedback’. Information may also come at various times, which affects how useful it might be for students to act on it. We will discuss three different ways of conceptualising feedback timing in discipline-specific contexts in the same section.

In creating these frameworks for signature assessment and signature feedback, we do not assume that enculturation into a particular discipline is the sole aim of education. Yet traditional disciplines (e.g. English, history, mathematics, science) underpin most school and university curricula. Professions (e.g. law, social work, nursing) also typically underpin many higher education programmes. Thus, across the range of purposes of education and their associated learning outcomes, disciplines shape teaching and colour students’ learning. In proposing this taxonomy, we are also aware that disciplines are not static over time (Fuller, 1991; Finch & Willis, this issue) and vary in their boundedness from other disciplines (Bernstein, 2000) as well as their degrees of internal consensus (Quinlan, 1999; Weingart, 2010). All of these characteristics must be considered in discussions of signature assessment and feedback practices, especially when considering the extent to which a given assessment or feedback design is unique to a particular discipline.

Dimensions of signature assessments

Following our brief definition of disciplinarity above, we propose five key dimensions of disciplinarity that shape assessments (see Table 1). Attending to these dimensions of a discipline enables one to design assessments that are authentic to the discipline. That is, assessment design (and the focus of feedback, as we will discuss later) can focus on one or more of these dimensions. Every assessment need not address every dimension, but to prepare students who are well-grounded in a particular discipline, we contend that an overall programme should assess the range of dimensions. We illustrate each dimension with examples from one or more of the papers in this special issue. Often the papers illustrate multiple dimensions, but they have foregrounded one dimension or presented it as the gateway to other dimensions.

Table 1. Definitions and examples of five key dimensions of signature assessments

Conceptual

Finch and Willis (this issue) focused on how teachers used specific conceptual (cultural) tools such as essay planners to scaffold disciplinary knowledge about persuasive writing. Their article emphasised how disciplinary norms are translated into broad school syllabi and criteria that are then translated again by teachers into specific assessment and feedback-related tools. They highlighted the ‘important role of cultural tools that link culturally situated values and disciplinary assessment practices’ (Finch and Willis, this issue). Through their overview of the historical development of writing pedagogy, they reminded readers of the historical shifts in the nature of the discipline. Not only does content evolve over time, but the norms of the discipline may also shift over time or may be expressed in different ways in different local, cultural contexts. Thus, their analysis also considered other dimensions of our framework, particularly the epistemological and social dimensions of disciplinary assessment and feedback practices.

Zhao, Zhou and Dawson (this issue) also focused on a particular conceptual tool: rubrics. They co-constructed rubrics for assessing student learning in an international business course, beginning with interviews with recruiters in several different business sectors. Given the importance of graduates being able to provide synthetic and original analyses of novel business problems, they selected business case analyses and asked students to develop rubrics for assessing such analyses. Those rubrics guided students’ collaborative analysis of a novel business case, oral presentation, and written reflection.

Like Finch and Willis’s (this issue), Zhao et al.’s (this issue) conceptual tool captured several of the discipline-specific dimensions included in our proposed taxonomy. In their discussion, they highlighted students’ epistemological shifts as they came to appreciate the limits of existing theories within traditional disciplinary business curricula and the subsequent need for professionals to create original analyses. Collaboration and personal and social accountability were presented as the ethical, deep structure of the field. They discussed how the process of co-constructing rubrics and using those co-constructed rubrics in subsequent peer assessment mimicked the way in which professionals work in teams. Doing so also gave students opportunities to practice the emotional, cognitive, and moral dimensions of peer feedback they might engage in during business collaborations.

Epistemological

Disciplines have particular ways of generating and validating knowledge that can be translated into specific assessment tasks and criteria. Swanson and Midura (this issue) drew heavily on science education literature and standards that emphasise students’ engagement in authentic scientific practices. They focused specifically on developing 8th graders’ skills in theory-building through cycles of assessment and feedback that helped students iteratively develop and test their own theories. Their example also illustrated other dimensions of the framework, including experiments with physical objects (material), key concepts such as phases, limits and thresholds (conceptual), and social interactions through repeated rounds of discursive dialogue (social). However, these elements were presented as being in service to understanding the epistemological element of the discipline (i.e. what scientific theories are and how they are developed, tested, and refined).

Social

Penman, Tai, Thomson, and Thompson (this issue) used the social structure and practice architecture (Kemmis et al., 2017) of clinical placements in allied health professions as their focal point. Busy clinic settings make it difficult for clinical educators to attend to important relationships with students and the ways in which students are relating to clients. To address this structural challenge, they constructed a near-peer mentoring scheme, with second-year students mentoring first-year students. Junior students interacted with clients and were observed by senior students. The senior student mentors were charged with generating and delivering feedback information to their mentees on their performance. The authors investigated the features of these near-peer feedback encounters to better understand how feedback takes place in the allied health setting. Their research acknowledged other socio-material dimensions, including typical cultural-discursive terminology such as ‘positive’ and ‘negative’ feedback and the material sites in which these feedback events took place, such as walking down the corridor between patients. Nonetheless, their primary focus seemed to be on the social structures and demands of practice.

Material

Esterhazy and colleagues (this issue) focused on professional artefacts, exploring how radiographs were used by dental hygiene educators in three different kinds of assessment situations: seminars, exams, and clinical practice. Following Wertsch (1994), they defined ‘professional artefacts’ as ‘material objects, concepts, or processes that mediate human action in the context of professional practice’ (Esterhazy et al., this issue, p. 3). Dental radiographs offer opportunities to integrate various bodies of disciplinary knowledge such as anatomy and pathology. Hygienists must be able to generate and interpret radiographs within that accepted knowledge base. Esterhazy and colleagues also showed how working with these material objects enabled students to engage with the conceptual, practical, and moral dimensions of the practice of dental hygiene.

Moral

The moral dimension refers to the socio-cultural and psychological processes involved in evaluating and choosing desirable actions that attend to responsibilities for other beings. Although several of the authors referred to ethical or moral dimensions, the moral dimension was not the centrepiece of any of the papers reported here. This finding is, perhaps, unsurprising because the moral dimensions of disciplinary curricula are often tacit (Quinlan, 2016). The absence of explicit attention to the moral dimensions of disciplines and professions suggests a gap to be addressed in future literature. Esterhazy and colleagues (this issue) were the most explicit, showing how assessment moments focused on radiographs prepared students to navigate the ethical complexities of practice, particularly vis-à-vis negotiating their role in relation to other health care professionals.

Moral dimensions can include explicit proximal principles that are taught in a field, such as the Hippocratic Oath taken by doctors. But attention to moral dimensions of practice also means anticipating more distal implications of one’s choices, such as considering the energy requirements of an architectural design or the social justice consequences of a given economic theory or policy. Care for others and fairness are traditional moral concerns that play out in those kinds of scenarios. In addition, values such as loyalty to family/community or respecting authority and traditions are built into the practices of particular communities, though often implicitly. Different moral concerns may come into conflict to create ethical dilemmas that are particular to a given discipline or profession (Quinlan, 2019). Any of these moral concerns may be embedded in or the focus of the content or process of an assessment.

Dimensions of signature feedback

In the previous section, we focused on the assessment part of assessment and feedback processes within AfL. In this section, we consider disciplinarity in feedback. First, the information that is provided in feedback may be of any of the types we described under ‘dimensions of signature assessment’ and in Table 1. That is, information can be about conceptual, epistemological, social, material, or moral matters, just as the learning goals and the assessment tasks themselves can be about any of those dimensions. As we have elaborated those dimensions above, we do not repeat them here. Instead, we focus on two other aspects of feedback practices: the different sources of feedback information available and the different timings of feedback. Drawing particularly on socio-material theories (Fenwick, 2016; Gravett, 2020), we emphasise social and material dimensions of feedback sources and timing.

Sources of feedback

We propose that there are four main ‘actors’ who are potential sources of feedback within a discipline or profession: self, disciplinary colleagues, audiences/users, and objects (see Figure 1). We discuss each below.

Figure 1. Sources of signature feedback

Note. Evaluative feedback = solid. Consequential feedback = stippled.

Self

First, learners can engage in assessing their own work (labelled ‘self’ in Figure 1). In Zhao and colleagues’ example (this issue), students were guided in deliberately constructing assessment criteria. This process enabled students to build their own evaluative expertise or judgement (Sadler, 1989; Tai et al., 2018). That is, students were afforded opportunities to make decisions about the quality of their own and peers’ work and, thereby, build their own conceptions of quality. Authors in this special issue have discussed how giving appropriate feedback requires internalising the standards of the discipline itself – whether that involves understanding what constitutes persuasive writing (Finch and Willis, this issue), knowledge creation in a business case analysis (Zhao et al., this issue), an empirically valid and complete scientific theory (Swanson and Midura, this issue), a good client consultation (Penman et al., this issue), or a good radiograph (Esterhazy et al., this issue).

This internalising of standards to be able to judge one’s own and others’ work has been theorised further as involving ‘self-feedback’ or ‘internal feedback’, emphasising the internal cognitive process of comparison between external information and one’s own thinking (Nicol, 2020). That is, even if students have received information from external sources directly about their performance, they must actively make sense of that information, filtering it through their own knowledge, beliefs, and dispositions to translate it into improvement (Nicol, 2020). Thus, we treat internal feedback as important to all the sources of feedback we discuss here. This interpretive process by the learner is captured in Figure 1 as the curved arrow from the feedback information, through the self, towards enactment in subsequent learner attempts.

Disciplinary colleagues

Second, disciplinary or professional colleagues play a crucial role. We consider both teachers and peers to be part of this disciplinary community; thus we do not separate teachers from student peers, as is typically done (Wiliam, 2019). Several of the papers in this special issue showed how students were invited to simulate professional or disciplinary communities through processes of peer feedback. For example, Finch and Willis (this issue) engaged students in peer feedback on students’ writing and oral presentations, guided by assessment criteria that instantiated disciplinary expectations at an appropriate level. Likewise, Zhao et al. (this issue) emphasised peer feedback using co-constructed rubrics.

Disciplinary colleagues (teachers, more advanced learners, or peers) are likely to provide evaluative feedback, represented by solid circles in Figure 1. That is, when disciplinary colleagues generate feedback information, they provide comments on how well a student has performed in relation to explicit or implicit criteria. Sometimes this evaluative information is translated into advice about future performances. In Penman et al. (this issue), for example, the mentors (senior students) were conscious of cultural constructions of ‘positive’ and ‘negative’ feedback, which implies that they saw feedback as offering evaluative judgements about quality.

Knowledge users offering consequential feedback

The third group of people who can provide feedback information are the users of the knowledge or services provided by the disciplines, whether as audiences or clients. Discipline-specific audiences and clients receive less attention in the assessment and feedback literature and were not showcased in our special issue examples, though there is implicit reference to users in both Finch and Willis’s (this issue) and Zhao and colleagues’ (this issue) work. One of the examples Finch and Willis studied was the use of a planner to help scaffold Year 8 students’ preparation of a persuasive speech to be delivered at an imagined trial of Odysseus. Because users are not normally present in the classroom, they may be invoked in the imagination (as in Finch and Willis’s example) or played by students in deliberate role plays or simulations.

More typically, the classroom processes of peer review, as described in most of the papers in this special issue, focus on helping students be inducted into the disciplinary community, not a community of knowledge users. Therefore, they tend to simulate – and support the practice of – collegial and self-assessment. Yet we argue that consideration of knowledge users is vital to embedding disciplinarity into feedback practices.

With users, the feedback information is often consequential, not evaluative as it is with self and colleagues. Consequential feedback assumes a cause-effect relationship: ‘If I do X, Y happens’. That is, the student’s performance (X) has consequences (Y). For these effects to count as feedback, the student needs to make sense of the information (the consequences) and choose whether to enact this feedback to enhance their process and/or product (Henderson et al., 2019). To do so, the student must first observe the effect (Y) and appreciate that it was caused by their action or performance (X, the assessed task). Next, they need to judge whether Y was the desired effect. If not, they need to generate appropriate strategies for changing their process or product and, finally, choose to enact these strategies in subsequent opportunities in order to improve the outcome.

For example, when learning stand-up comedy, where the goal is to make an audience laugh, the real test comes in whether and when that audience laughs. The audience’s laughter (or lack thereof) is consequential feedback information. Likewise, in healthcare, when a goal may be to respectfully take sufficient medical history to be able to diagnose and treat a patient appropriately, feedback comes directly from the patient’s response, through body language and the quality, thoroughness, and relevance of the medical history elicited. In medicine, patient educators have been trained and hired to give feedback on the performance of, for example, intimate exams (Towle et al., 2010). Trained patient educators are able to provide consequential feedback (i.e. ‘that is painful’ or a sudden intake of breath) and may, sometimes, offer advice (e.g. ‘try a different angle’). Simulations and work-integrated learning provide opportunities for this kind of authentic, consequential feedback from audiences, clients and users. Sources of consequential feedback and the information they provide (block arrows) are depicted as stippled in Figure 1. Knowledge or service users are stippled darker because they may be able to offer evaluative feedback in addition to consequential feedback, particularly if they are trained or supported to do so, such as patient educators (Towle et al., 2010).

Objects offering consequential feedback

In socio-material theories, objects themselves are also seen as actors (Fenwick, 2016), constituting our fourth group of sources of feedback. Unlike people in the roles of audiences and users described above, objects can only offer consequential feedback, rather than evaluative feedback or a blend of the two. Some objects are integral to the disciplines. Esterhazy and colleagues (this issue) offered a rich example of radiographs in the learning of dental hygiene. As students make radiographs, the quality of those radiographs offers information that can guide their learning. Similarly, when engineering students design a rocket, the effectiveness of that rocket can be tested by launching it and observing its trajectory. When computing students write code, the success of that code in fulfilling its purpose can be observed.

Again, consequential feedback offered by objects, like that of authentic users of disciplinary knowledge, is researched much less than other information sources. It offers potentially rich terrain for investigating how students make sense of this kind of information, how they translate it into better products, and how educators and peers can help them to do so. Rooted in scientific practices, Swanson and Midura’s study (this issue) offered interesting insight into how this type of consequential feedback can be generated and supported in developing disciplinary knowledge. Like scientists, students generated theories that were tested in specific situations (experiments). If those experiments failed to generate the expected results, students needed to re-examine their theory. In this example, feedback information had somewhere to ‘land’ and, crucially, students had space and support in the instructional sequence to enact it by successively refining their theories on the basis of evidence (Pitt, 2019).

In sum, we have proposed four main sources of signature feedback information. The information each provides is shown as flowing via arrows into the lighter grey circle in Figure 1. These sources can be thought of as putting their feedback on a metaphorical table from which the learner can choose to pick up that information (via the curved arrow) and make sense of it. As this feedback information can focus on any of the five dimensions of signature assessments, one should imagine each of those five dimensions (see Table 1) embedded in each arrow, as well as in the student’s task or performance.

Timing

Feedback may come at various times. We propose three main ways of conceptualising feedback timing rooted in analyses of disciplines or professions. Each one operates on a different scale or level; thus we can see them as nested within one another, rather than as mutually exclusive (see Figure 2). Insofar as any of the sources of feedback information may operate on any of these timescales, one should imagine Figure 1 embedded in each of the layers depicted in Figure 2. First, we will consider the moment-to-moment, everyday rhythms that characterise a discipline or profession. Rhythms explicitly acknowledge the immediate socio-material contexts within which learning takes place. Second, we consider natural cycles, such as the stages or phases that characterise disciplinary or professional practice. Cycles suggest that there may be different tasks that build on each other to make up a meaningful sequence that is central to the work of a field. Third, we consider spirals (Carless, 2019), which reflect the longer-term journey of learning across multiple cycles and across many day-to-day moments. The concept of spirals makes explicit that learning is a gradual process that unfolds over weeks, months, years or, in the case of experts, decades.

Figure 2. Three levels of timing at which signature feedback may operate


We assume that across all three of these timescales, it is important that learners have access to information that is timely. Timeliness is not defined in terms of promptness, but rather in relation to the learner’s opportunity to act upon that feedback to enhance their performance in subsequent iterations (Pitt & Norton, 2017; Winstone et al., 2016). That is, an assessment may have taken place in early October, but a second opportunity to enact the feedback may not occur until February. Promptness emphasises returning feedback in October, while timeliness implies that feedback may be most useful in January when a learner is preparing their February attempt. Timeliness, as well as the content and the nature of feedback information, is vital to ensuring that the learner is able to make sense of and act on information available from external sources.

Rhythms

Penman et al. (this issue) premised their intervention of mentor-mentee pairs on an analysis of the constraints imposed by the everyday rhythms of allied healthcare practice in busy clinics. Interestingly, they observed how much of the feedback provided by mentors to their mentees occurred while walking down the corridor from one patient to another. These student mentors were taking advantage of the affordances of the rhythms of their day to provide timely, proximal feedback when their patient encounters were fresh in their minds and before the next opportunity to apply the feedback.

Cycles

Swanson and Midura (this issue) illustrated stages in authentic disciplinary cycles. For example, the teacher underpinned the design of her theory-building course with the process of theory building used by scientists. Students – like scientists – explored examples and then generated preliminary theories. They tested those theories against another example, revised their theories, and tested them again against further examples generated among the class. Assessment activities and feedback episodes were tied to each of those stages of the theory-building process. Thus, the timing of assessment and feedback was discipline-specific.

Spirals

None of the papers foregrounded disciplinary spirals in their analysis of feedback. Swanson and Midura’s (this issue) study, though, was clearly part of a feedback spiral. In their paper, they focused on just one unit in a year-long course that introduced students to four different patterns: threshold, equilibration, exponential growth, and oscillation. Their study focused on cycles within the threshold unit. However, it was implied that the key criteria for a theory that the authors detailed were revisited in a spiral as students practiced applying those criteria in relation to the other three patterns.

Spiral feedback is supported by particular relationships and relational structures that facilitate longer-term engagement between learners and other actors who provide feedback. Again, in Swanson’s scientific theory-building classes (Swanson & Midura, this issue), students began to act like a scientific community insofar as they engaged in peer review processes (e.g. poster presentations) that were similar to those used by scientists. The scientific community consists of longstanding relationships in which discussions take place about key concepts, theories, and studies. Particular language evolves to facilitate those discussions and particular people are identified with different positions in the debate.

Summary and future directions

Assessment and feedback literature tends to be generic, rather than focusing on how to assess and facilitate feedback about the deep structures, knowledge, aims, and values of the disciplines in which assessment and feedback take place. This special issue is intended to set an agenda for addressing that gap. We have extended Shulman’s term ‘signature pedagogies’ to propose dimensions of ‘signature assessment’ and ‘signature feedback’ practices. Drawing on definitions of disciplinarity, we have described assessment tasks, learning outcomes, and the content of feedback information as including conceptual, epistemological, social, material, and moral elements. We have illustrated these elements using the papers from this special issue.

Further research needed on signature assessment

Of our proposed dimensions of signature assessments, there has been considerable research on criteria and rubrics (conceptual dimension) and, to a lesser extent, on how criteria are translated into conceptual tools for students to use. The papers in this special issue reflect that emphasis: criteria and their translation by teachers (Finch & Willis, this issue; Swanson & Midura, this issue) and their use by students (Zhao et al., this issue) feature in many of the articles. The other four proposed dimensions of signature assessments, namely the epistemological, social, material, and moral dimensions, have received less attention in the literature. Further research that begins with analyses of those dimensions of a given discipline may yield richer assessment designs that are rooted in the deep features of those disciplines.

In particular, none of the papers centred ethical or moral issues as they relate to the discipline. In higher education, the importance of developing students’ personal and social responsibility is more often espoused than taught (Dey & Associates, 2008), which may account for the relative lack of attention to this element in the assessment and feedback literature. Yet the moral dimensions of practice are vital and one of the defining characteristics of signature pedagogies (Shulman, 2005). To advance learning and teaching of the moral bases of disciplinary practices, more attention needs to be focused on the design and investigation of assessment and feedback practices that attend explicitly to moral elements of disciplines. Because disciplines have their own values (Quinlan, 2016), research framed in terms of signature assessment and/or feedback practices may be particularly important for opening up this line of research inquiry and advancing practice.

Further research needed on signature feedback

We have also proposed that signature feedback processes can be explicitly designed based on analyses of disciplinary or professional demands and requirements. First, our overview of sources of feedback information extends beyond the common triad of teacher, peer, and learner (Wiliam, 2019) to include users and objects. It is perhaps unsurprising, then, that the new sources we propose (audiences/users and objects) have been largely overlooked in the literature.

To develop assessment and feedback practice, more attention needs to be paid to how educators set up assessment-related tasks to allow students to benefit from consequential feedback. In particular, how are students prepared to notice, interpret and learn from this feedback, and how do they use this information to inform the development of their practice (Dawson et al., 2020)? This translation process lies at the heart of recent definitions of feedback that privilege learners’ use of performance-related information (Henderson et al., 2019; Nicol, 2020). It is this translation process that is key to building expertise in a discipline.

To return to the examples provided above, reading and responding appropriately to patients’ nonverbal signals in a consultation is a key skill in healthcare settings. Likewise, debugging code in computing is a vital skill. Those skills have been presented here as consequential feedback processes. Because students are engaged in the action itself (e.g. examining a patient), it may be challenging for them to act and observe the impact of their actions simultaneously (Pitt, 2019). Indeed, one could argue that the ability to do both at once is characteristic of an expert. Thus, a process of scaffolding (especially in the early years of study), such as by video recording and then critiquing and analysing the video, may be necessary to ensure students can learn from some kinds of consequential feedback information.

Furthermore, previous discussions of formative assessment and feedback have largely left issues of timing implicit, although timeliness has been acknowledged and emphasised (Pitt & Norton, 2017; Winstone et al., 2016). Our taxonomy, rooted in careful readings of the papers in this special issue, makes explicit three different ways of understanding timing: rhythms, cycles, and spirals. Attending to these types of timing and exploring how they manifest themselves in specific professional or disciplinary contexts offers a useful way forward for intentionally designing assessment and feedback into curricula. We suggest that it will be particularly helpful in considering work-integrated learning, simulations, inquiry-based learning, and other practical activities.

Conclusion

Through discipline-sensitive implementation of assessment for learning, students have the opportunity to become apprentices in the practices of the disciplinary or professional community they are studying. Following Shulman (2005), we argue that this apprenticeship is facilitated by signature tasks, performances, and feedback practices that are particular to a given discipline or profession. Through close reading of the papers in this special issue, we have proposed a taxonomy of elements of signature assessment and signature feedback practices and described how these elements fit together. We have also suggested where gaps may lie in our present understanding, thereby setting an agenda for future research that could enrich the literature on assessment and feedback through deep exploration of what makes a discipline or profession unique.

We recognise that advancing assessment and feedback practices needs to attend to both discipline-specific and generic concepts (Wiliam, 2019). Many teachers find it difficult to translate generic concepts into their own practice, so discipline-specific examples are helpful to practitioners. The papers in this special issue offered examples from English, science, allied health professions, and business that illustrated different aspects of our proposed concepts. We also recognise that without a common vocabulary and set of concepts, discussion across disciplines is difficult.

By abstracting from discipline-specific analyses in this special issue, we have presented a taxonomy of common theoretical language that will facilitate conversations and policies that cross disciplines, even as authors focus on specific disciplines. The categories proposed in our taxonomy (i.e. the dimensions of signature assessment; dimensions of signature feedback) can serve as a catalyst for new discipline-specific assessment and feedback research. Thus, we hope this special issue will spawn further empirical research on signature assessment and signature feedback practices.

Disclosure of potential conflicts of interest

No potential conflict of interest was reported by the author(s).

Acknowledgments

We are grateful to two Executive Editors for their thoughtful review of and feedback on this commentary.

Additional information

Notes on contributors

Kathleen M Quinlan

Kathleen M Quinlan, PhD, is a Professor in Higher Education and the Director of the Centre for the Study of Higher Education at the University of Kent, UK. She is also a Principal Fellow of the Higher Education Academy. Her research interests are in discipline-specific aspects of teaching in higher education, values and ethics in teaching, and how educators can support students' interest and holistic development. https://www.kent.ac.uk/cshe/people/staff/quinlan2.html

Edd Pitt

Edd Pitt, PhD, is a Senior Lecturer in Higher Education and Academic Practice and Programme Director of the Postgraduate Certificate in Higher Education programme in the Centre for the Study of Higher Education at the University of Kent. He is also an Honorary Fellow at the Centre for Research in Assessment and Digital Learning, Deakin University, Australia. His research interests are in assessment and feedback in higher education, with a particular focus on the relational and affective domains in feedback. https://www.kent.ac.uk/cshe/people/staff/epitt2.html

References