
Why not rubrics in doctoral education?


Abstract

Research on assessment in doctoral education remains underexplored, particularly concerning the preconfirmation or probationary stage of candidature, when attrition is at its highest. We propose that the use of rubrics at this preconfirmation stage has the potential to improve feedback literacy and overall progression to completion. While rubrics have been commonplace in undergraduate education for over two decades, their application in doctoral education remains rare. Although it has been argued that the variability of doctoral assessment may not align with rubrics, customised discipline-specific rubrics could provide clarity for students and supervisors, helping them better understand expectations and standards. A review of existing studies on the use of rubrics in doctoral education highlights the need for further research in this area. We conclude by raising the overarching research question of whether rubrics have a place in doctoral education at the preconfirmation or probationary stage of candidature.

Introduction

Despite the flourishing of doctoral enrolments in Australia and worldwide since the 1990s (Lovat et al. 2015), few studies have focused on the feedback literacy of doctoral students and supervisors (Carless, Jung, and Li 2023). This is despite research from over a decade ago suggesting that feedback is one of the major issues in doctoral studies (Jones 2013).

Rubrics have been used in tertiary education for over two decades and are an expected part of the process of evaluating student performance at the undergraduate level (Boud and Dawson 2023; Carless and Winstone 2023; Nieminen and Carless 2023). However, the use of rubrics in doctoral education is rare (Lovitts 2007) and, hence, students rely on written feedback from supervisors to guide them through the process of becoming academic writers in the lead-up to thesis submission. This feedback often takes the form of substantive written comments on each extended piece of writing submitted (often discrete chapters) and/or typographical, stylistic and substantive comments within the drafts. Such feedback is often misinterpreted, not understood, rejected or not acted upon (Carter and Kumar 2017).

Completion of a higher degree by research (HDR) is a strategic objective of Australian universities because federal government funding is aligned to HDR completions. The attrition rate of HDR students in the US and Australia ranges from 33% to 70% (Ivankova and Stick 2007; Jiranek 2010; Kim and Otts 2010; Gardner and Gopaul 2012), with many students leaving in their first year (Lott, Gardner, and Powers 2010), representing a great loss of academic workload and revenue for universities. In Australia, according to a government report on higher degree by research students over a nine-year period, the completion rate was 48% after six years (Australian Government Department of Education 2019). The research funding model in Australia is based on student completions, not enrolments, and ‘maintaining a reliable supply chain of researchers is crucially important, particularly in today’s knowledge economy in which researchers are key knowledge workers actively engaged in knowledge transfer’ (Park 2007, 13).

Doctoral programs in Australia are designed for students to complete successive ‘milestones’ during their candidature. One of the most significant milestones is the confirmation milestone, also referred to as the probationary stage. Successful completion of the confirmation milestone enables the HDR student to progress their research project and embark on data collection. A similar process occurs in the UK where doctoral candidates typically go through a formal assessment process known as the ‘confirmation’ or ‘transfer’ process, which usually occurs within the first year of the doctoral program and serves to determine whether the candidate’s research project is viable and has the potential to meet the standards required for a doctorate. While this confirmation process is common in the UK, there can be variations in the specific requirements and procedures at different universities and within different academic disciplines.

The process of ‘confirmation’ or ‘comprehensive examinations’ for PhD candidates in the United States can vary significantly from one institution to another. It is not a universal requirement for all American PhD programs, and the specific requirements and timing may differ depending on the field of study, department and university. In some programs, there may be a formal confirmation or comprehensive examination that candidates must pass after their first year or at some point during their coursework. This examination typically assesses the candidate’s knowledge of the field and their preparedness for conducting independent research. However, in many American PhD programs, the confirmation process may not occur after the first year. Instead, the focus may be on completing coursework, passing qualifying examinations, and advancing to the research and dissertation proposal stage.

Reported attrition rates from PhD programs in the UK, Canada and Australia differ, ranging from 10% to approximately 85% (Bourke et al. 2004), and are subject to variables including discipline area. For example, the arts and humanities report higher attrition rates of 45–51%, while the sciences observe rates of 30–40% (Wright and Cochrane 2000; Elgar 2003). Research (Jiranek 2010, 2) suggests that there are three main factors for attrition: 1) the quality and personal situation of the student, including academic ability, financial situation, language skills, interpersonal skills and persistence versus so-called self-sabotaging behaviours (Kearns, Gardiner, and Marshall 2008), 2) the nature and quality of supervision, including frequency of meetings, support from other students and research colleagues, and 3) the resources and facilities available to the project (e.g. culture collections, analytical facilities, necessary expertise, etc.).

Statistics reported by Bourke et al. (2004) from Australia, and Lovitts and Nelson (2000) from the US, identify that HDR attrition is reduced once students complete the confirmation milestone; hence the recent initiatives from providers to induct HDR students into the academic profession through systematic training programs for both students and supervisors, designed to ensure timely completions (Nerad et al. 2022). Examples of these initiatives include mandatory training modules and university registers, emphasising meeting frequency, student feedback and outputs with measurable indicators of quality, as well as peer-activated learning communities, postgraduate workshops, expanded supervision panels, comprehensive supervisor training and student writing groups (Laurie and Grealy 2023). The preconfirmation stage of the HDR candidature is therefore highly significant for successful and timely completions, and the development of feedback literacy in this stage, as research in the undergraduate environment indicates, may be crucial to these timely completions.

Feedback literacy involves feedback provision (by supervisors) and feedback processing (by students); these skills are not easily learned. For feedback to be effective, it is essential that students understand how to use the information with which they have been provided, a capacity termed feedback literacy. Conversely, feedback literate supervisors need to be able to provide feedback that is construed by the receiver as constructive, explicit and not personal. Feedback literate individuals, therefore, can understand and appreciate the role of feedback in improving performance, make judgements about the quality of their work and the work of others, manage the affective domain when receiving feedback, and take action (Carless and Boud 2018; Winstone and Carless 2019). Professional development of these skills for both supervisors and students is rapidly becoming a major focus for universities across the sector (Nerad et al. 2022).

In the undergraduate context, traditional models of feedback, usually provided by assessors to students in the form of annotations on written drafts, have failed: students consistently complain about assessment, and it is the most criticised aspect of the student experience in tertiary education. These criticisms are often levelled at the quality of the rubrics used as the grading tool by assessors, specifically the lack of clarity about the behaviours required to meet standards (Grainger 2020). Much of the research in doctoral studies has focussed on the final examination process (thesis submission), the nature of examiner reports, student responses to examiner reports and institutional criteria used to make judgments about student theses (Dally et al. 2019). Although most tertiary institutions provide guidelines for examining theses, rubrics are rarely used in doctoral education at any stage of the candidature. This is despite the general acceptance that rubrics are not only used for summative purposes; they can also serve as an additional feedback mechanism (Grainger et al. 2017).

Some researchers (Wellington 2013; Carless 2015) have claimed that rubrics do not align with doctoral studies because of the varying nature of the doctorate across space, time and disciplines. This variation includes the way theses are structured within different methodologies.

Arguments against the use of ‘good’ rubrics are difficult to construct but include the constraining aspect of describing explicit behaviours for each criterion. Visser, Chandler, and Grainger (2017) identified the difficulty of constructing explicit rubrics when assessing creativity, because creativity can involve challenging set parameters and generating unexpected outcomes. Creative texts are often graded with holistic, rather than analytic, descriptors and scales: ‘A holistic scale measures the relative success of a text but does so through a rubric that incorporates many of the traits in analytic scoring as heuristics towards a conception of a whole rather than as a sum of autonomous components’ (Perelman 2018, 11). In this way, specifying explicit behaviours in doctoral education can be argued to be similarly problematic, given the variability of doctoral expectations.

However, despite these differences, explicit, customised and discipline-specific rubrics might provide clarity for students about what is needed to meet certain standards of performance (Lovitts 2007), alleviating, at least to some extent, the need for supervisors to be skilled feedback givers, and providing students with signposting and targets, claimed to be a major advantage of rubrics by researchers in the undergraduate environment (Grainger et al. 2017). Considering the high attrition rates of doctoral candidates, it can be argued that providing a variety of feedback mechanisms, not limited to written feedback on scripts, might be advantageous to HDR students, particularly in the pre-confirmation stage. Given the widely accepted significance of rubrics in almost all educational sectors bar doctoral education, we ask: why is doctoral assessment the least mapped frontier (Holbrook 2001), why is it still ‘shrouded in mystery’, and how can the doctoral examination process be ‘demystified’ (Golding, Sharmini, and Lazarovitch 2014)?

A database search spanning the past 20 years did not reveal any research on HDR students at the preconfirmation stage of candidature. We found just a handful of studies describing rubrics used in doctoral studies (e.g. Lovitts 2007). A review of guidelines from multiple institutions revealed that a typical confirmation document in the social sciences runs between 5,000 and 10,000 words and usually consists of a series of chapters aligned with the final thesis document: Introduction, Literature Review, Theoretical Framework and Methods. A confirmation document is viewed as a roadmap, explicitly identifying how and why the research project will be conducted. These chapters might be viewed as traditional ‘assignments’, each with its own discrete requirements in terms of content. If so, might a rubric consisting of criteria and standards descriptors for each of these discrete sections assist both students and supervisors in the pre-confirmation stage of candidature by specifying the behaviours required to meet the expected standard?

The overarching research question is:

  • Is there a place for rubrics in doctoral assessment at the probationary stage of candidature?

Literature review

The examination of research theses has been a focus for researchers for about 30 years, but only relatively recently has it attracted research interest focused on what examiners do and how consistent they are in their assessment judgments. Very little research has been carried out on assessment at the doctoral level (Tinkler and Jackson 2000), although there is increasing interest (Pereira, Assunção, and Niklasson 2016).

Specific issues have been identified, namely the use of criteria and the standards by which performance is judged, the disregard for official criteria and the preference for latent criteria when judging student work, and the variability between standards of achievement (Bourke and Holbrook 2013). It is acknowledged that this problem stems from the very nature of the PhD and the esteem in which it is held. It is the most prestigious academic degree; it is recognised and revered worldwide, a gold standard, a rite of passage into the academic community, a journey steeped in mystery, its existence ‘taken for granted’ (Williams, Bjarnason, and Loder 1995, 21). Holbrook (2001) refers to it as assessment’s least mapped frontier. There is a pressing need for more transparency about doctoral assessment (Denicolo, Duke, and Reeves 2020).

Research in relation to pre-confirmed or probationary (we use the terms interchangeably) doctoral candidates is seemingly non-existent; hence, the following review reports what has been published on the use of rubrics in the assessment of doctoral education at the final stages of candidature.

Rubrics in doctoral education

There has been extensive research into the use of rubrics in higher education at the undergraduate level, much of which has focussed on two aspects: the summative or evaluative use of rubrics to grade student work and the formative use of rubrics as a feedback mechanism to guide student work (Reddy and Andrade 2010). Research has shown that rubrics are valued by students and assessors alike because rubrics describe explicit behaviours, provide targets for students to aim for, promote transparency and consistency of teacher judgments, and reduce subjectivity, hence assisting in formal moderation processes (Grainger, Heck, and Carey 2018). However, despite the overwhelming conclusion that rubrics are generally well received by stakeholders, there remains resistance to their use, especially in the context of doctoral education. Hafner and Hafner (2003) claimed the resistance is probably due, at least in part, to the ‘overwhelming majority’ of instructors in higher education having little or no preparation as teachers and minimal access to new trends in assessment.

Little published research could be found on the formal assessment of the doctoral thesis using rubrics. Holbrook (2001) reported: ‘there has been scant attention paid to PhD outcomes, particularly the examination of the thesis, the quality or features of the research undertaken by PhD students and the effectiveness, usefulness and application of the research training received across disciplines’. Our review of the literature identified few studies reporting the use of a rubric to assess a PhD thesis. Assessment throughout PhD candidature is predominantly formative, including final examiner reports, ‘suggesting that PhD knowledge has no defined parameters; there is no specific set of learning objectives for candidates, or criteria for examination that translate across disciplines or institutions, and yet supervisors, examiners and institutions operate as if there are; there is an untested faith in the procedures of selection of examiners, and their rigour and application of standards’ (Holbrook 2001). Some researchers (Kumar and Stracke 2017; Paltridge 2017) identified significant differences between undergraduate and doctoral assessment in that doctoral assessment is more akin to journal peer review, because doctoral students are able to respond to examiner feedback in order to revise the thesis.

One of the most detailed studies identified in the literature review was that of Lovitts (2007), who developed discipline-specific rubrics for 10 academic disciplines in the hard sciences, social sciences and humanities, primarily as a guide for students, to make standards clear. A total of 276 experienced doctoral supervisors from nine universities described characteristics of quality across four standards (outstanding, very good, acceptable, unacceptable) in the disciplines of biology, physics, electrical and computer engineering, mathematics, economics, psychology, sociology, English, history and philosophy. The resulting rubrics identified common characteristics and aspects for each section of the dissertation: introduction, literature review, theory, methods, results, discussion and conclusion.

Vaccari and Thangam (2010) created rubrics in the context of engineering education. Their rubric was based on a review of practices in various countries, resulting in a set of universal criteria (not standards) as follows:

  • originality and novelty

  • advances the state of the art

  • literature survey

  • possesses practical and/or academic utility (potential impact)

  • uses new or advanced techniques

  • has elements of theory

  • has elements of experiment

A study of examiner reports on 804 PhD theses from 8 Australian universities (Holbrook et al. 2004) identified 12 indicators of quality used by doctoral examiners. Table 1 summarises the mean quality rating using the following rating scale: 1 = fundamentally flawed, 2 = low quality, 3 = moderate/low quality, 4 = moderate/high quality, 5 = high quality, 6 = exceptional quality. The rank is the order of importance examiners attributed to the indicators of quality. Interestingly, examiners placed the highest importance on presentation and the least on the substantive contribution of the thesis.

Table 1. Mean scores for the levels of importance of the 12 quality indicators for PhD thesis quality used by doctoral examiners (adapted from Bourke and Holbrook 2013).

Since 2020, there has been a gradual shift towards acceptance of the necessity to ensure standards in doctoral education, leading to the utilisation of generic criteria regardless of discipline and location (Denicolo, Duke, and Reeves 2020). Poole (2015) noted that the use of generic, identifiable criteria has become common practice across many universities to ensure quality in doctoral education. These generic criteria include: original and significant contribution to knowledge, the ability to conduct research independently, and the ability to apply this knowledge in an increasingly interconnected globalised world. Examiners are generally asked to respond to questions such as the following:

  • Does the thesis comprise a coherent investigation of the chosen topic?

  • Does the thesis deal with a topic of sufficient range and depth to meet the requirements of the degree?

  • Does the thesis make an original contribution to knowledge in its field and contain material suitable for publication in an appropriate academic journal?

  • Does the thesis meet internationally recognised standards for the conduct and presentation of research in the field?

  • Does the thesis demonstrate both a thorough knowledge of the literature relevant to its subject and general field and the candidate’s ability to exercise critical and analytical judgment of that literature?

  • Does the thesis display mastery of appropriate methodology and/or theoretical material?

The use of rubrics in doctoral education has been criticised by Wellington (2013) and Carless (2015) because standardised criteria fail to capture the distinctiveness of doctoral work and the complex achievements of doctoral graduates as they develop their own identity as researchers. Wellington (p. 1495) described 15 key phrases and expressions used across institutions worldwide to describe the qualities (i.e. criteria) of ‘doctorateness’, evidencing its variability:

  • worthy of publication either in full or abridged form;

  • presents a thesis embodying the results of the research;

  • original work which forms an addition to knowledge;

  • makes a distinct contribution to the knowledge of the subject and offers evidence of originality shown by the discovery of new facts and/or the exercise of independent critical power;

  • shows evidence of systematic study and the ability to relate the results of such study to the general body of knowledge in the subject;

  • the thesis should be a demonstrably coherent body of work;

  • shows evidence of adequate industry and application;

  • understands the relationship of the special theme of the thesis to a wider field of knowledge;

  • represents a significant contribution to learning, for example, through the discovery of new knowledge, the connection of previously unrelated facts, the development of new theory or the revision of older views;

  • provides originality and independent critical ability and must contain matter suitable for publication;

  • adequate knowledge of the field of study;

  • competence in appropriate methods of performance and recording of research;

  • ability in style and presentation;

  • the dissertation is clearly written;

  • takes account of previously published work on the subject.

Wellington problematises the concepts of originality and criticality in doctoral work, despite the identification of seven categories that might describe the nature of originality, summarised as:

  • building new knowledge, e.g. by extending previous work or ‘putting a new brick in the wall’;

  • using original processes or approaches, e.g. applying new methods or techniques to an existing area of study;

  • creating new syntheses, e.g. connecting previous studies or linking existing theories or previous thinkers;

  • exploring new implications, for either practitioners, policy makers, or theory and theorists;

  • revisiting a recurrent issue or debate, e.g. by offering new evidence, new thinking, or new theory;

  • replicating or reproducing earlier work, e.g. from a different place or time, or with a different sample;

  • presenting research in a novel way, e.g. new ways of writing, presenting, disseminating.

Some discrepancies have been identified in the way assessors interpret and prioritise some criteria over others, depending upon the individual and the discipline (Chetcuti, Cacciottolo, and Vella 2022). In order to make judgments about quality, assessors may be referred to guidelines supplied by universities; however, as Mullins and Kiley (2002, 380) noted, experienced examiners do not necessarily follow the institution-specific criteria provided. In a similar way, Delamont, Atkinson, and Parry (2000) identified the largely tacit nature of doctoral assessment and the complexities of adhering to institutional criteria which are not necessarily well articulated.

Although there has been a recent focus on preparing academics for their supervisory roles, ‘examiners of doctoral theses do not undergo formal preparation for their role and must rely on their own experience of being examined to guide their approach’ (Johnston 1997, 346). They have ‘idiosyncratic’ expectations (p. 341) of the way in which a study should be presented and draw on their own understanding of requisites (Mullins and Kiley 2002). Examiners, as gatekeepers, pass theses which correspond to their personal ideologies; therefore, it is not uncommon, as Johnston (1997, 346) points out, ‘to find inconsistencies and variation in examiners’ reports’. The little research available confirms the largely tacit nature of examiner judgement and the tendency of examiners to pay scant attention to official criteria provided by the institution.

Many of these issues mirror those experienced in the undergraduate context of assessment decision making amongst assessors. The implication is that assessors often make assessment judgements based on both explicit and implicit criteria (Grainger, Purnell, and Zipf 2008).

Feedback in undergraduate contexts

Although assessment is a key driver for student engagement in tertiary study, it is characterised by repeated criticisms from students, including the failure of feedback to improve student outcomes (Grainger 2020). Feedback literacy has been a growing research focus in recent years in undergraduate contexts, resulting in a major rethinking of feedback processes in order to improve the feedback literacy of students. There is also acknowledgement in the literature that feedback has not been done well, that assessment is time consuming for students, and that staff are frustrated by the lack of impact of feedback (Carless and Winstone 2023). According to Sadler (2010, 535), ‘for many students, feedback seems to have little or no impact, despite the considerable time and effort put into its production’. Assessors note that students do not take notice of assessment feedback for a variety of reasons, including failure to understand the feedback messages received (Sadler 2010). Although a feedback loop is commonly acknowledged as essential to effective learning, no single process has been identified that positively impacts student achievement.

Many, including Sadler (1989, 2009, 2010), question the impact of teacher-constructed feedback as a contributor to learning improvement. Others (Dawson et al. 2019; Winstone and Boud 2020; Carless and Winstone 2023) have called for new directions away from unilateral, teacher input-focussed feedback methods and a renewed focus on the student as an active learner with added responsibility in the process; in short, the development of student agency in the feedback process and a focus on the teacher as a facilitator of the feedback environment (Carless and Boud 2018; Winstone and Carless 2019; Carless and Winstone 2023). In this way, recent researchers, focussing on student actions in response to assessment information, conceptualise feedback ‘as a process in which learners make sense of comments about the quality of their work to inform the development of future performance or learning strategies’ (Carless and Boud 2018). This conceptualisation reflects acknowledgement of the importance of closing the feedback loop (Sadler 1989), where assessment information is acted upon by students.

Although this reconceptualisation has been restricted to undergraduate contexts, it has direct connections to the pre-confirmation stage of doctoral study, when students are engaged in drafting thesis chapters and redrafting based on feedback given by supervisors. Each redraft, whether of a discrete chapter or of the document as a whole, can be viewed as an ‘assignment’ that can be evaluated according to criteria and standards. In the doctoral environment, characterised by the absence of formal rubrics, this has typically been enacted through formative and informal processes, most often regular supervisor meetings, with comments by supervisors, usually in written form, discussed and returned to students for redrafting.

Feedback in doctoral studies

Completing a PhD in Australia essentially involves production and submission of a single, extended piece of academic writing, normally 80,000 words in length. This thesis consists of a series of chapters, commonly introduction, literature review, theoretical framework, methods, results, discussion and conclusion: each can be viewed as an ‘assignment’. The submission of chapters entails participation in a process of learning academic writing through enculturation into the values and behaviours of an academic community. This enculturation is predominantly characterised and influenced by feedback processes, most often the feedback provided by doctoral supervisors to doctoral students, although this is not the only source of feedback students receive. There is a dearth of research investigating feedback on graduate writing and how it is processed by students, and what exists tends to be focussed on the final dissertation stage of the PhD rather than the processes leading up to thesis submission. Research suggests that adequate support for writing is even more important during the first years of a doctoral program (Kim 2018). This sentiment is supported by Carter and Kumar (2017), who noted that supervisory support of writing is under-researched and in need of discussion.

According to Kumar and Stracke (2007, 462), feedback ‘lies at the heart of the learning experience of a PhD student… it is through written feedback that the supervisor communicates and provides advanced academic training, particularly in writing, to the supervisee’. Similarly, written feedback by supervisors is a fundamental source of input for academic writing such as thesis writing (Bitchener, Basturkmen, and East 2010).

Most of the studies on feedback in doctoral education have examined what supervisors focus on. Bitchener, Basturkmen, and East (2010) identified the major foci of feedback as gaps in the literature reviewed and deficiencies in content. More recently, Kim (2018) reported a range of feedback categories: interaction with content; academic writing; positive/negative comments, approval, or appraisal; and linguistic accuracy. Analysis revealed that almost half of the feedback (interaction with content (29%) and academic writing (17%)) covered the content of the text, including discipline-specific knowledge and broad characteristics of academic writing.

Few studies have focussed on students’ expressed needs in relation to the focus of the feedback they receive. According to Manjet (2016), students preferred content to be the focus, rather than formatting, language issues or referencing. Students wanted all of these to be commented on, but not to the point that appropriateness of methodology and presentation, for example, were ignored. Researchers have identified a major disconnect between the expectations of the thesis committee and those of the PhD candidate, and hence the use of a rubric has been considered essential for evaluating the outcomes of all doctoral theses (Vaccari and Thangam 2010).

Recently, and probably as a direct result of the problematic nature of feedback as perceived from the different perspectives of student and supervisor, Stracke and Kumar (2020) developed the Feedback Expectation Tool (FET) for use by students and supervisors. The FET is a series of 13 contradictory or opposing statements formulated to provide stimuli for conversations about the issue addressed in each statement. According to the authors, it is ‘through feedback that the supervisee is able to understand [that] writing is a form of learning, as revising drafts after feedback can lead to a process of discovery, [and this is] an integral part of PhD education’ (Kumar and Stracke 2007, 462).

Despite supervisors having written a thesis themselves, research (Woodward-Kron 2007; Bazerman 2009; Paré 2011) suggests that providing explicit, comprehensible, timely and useful feedback is not necessarily a skill they possess; supervisors ‘may often be inarticulate when providing feedback; this in turn reduces the potential of the advice they provide students’ (McAlpine and Amundsen 2011, 10). Supervisor comments about the challenges of providing feedback to doctoral students echo the research in the undergraduate context: students ignore feedback, submit poor quality work, feel emotionally impacted by what is perceived as negative feedback from supervisors, and are unable to distinguish between critique and enablement, while supervisors spend too much time correcting academic literacies, including language issues such as imprecise grammar (Carter and Kumar 2017). More recently, there has been a proliferation of research into feedback received by doctoral candidates from sources other than supervisors, including peer writing groups (Jeyaraj, Too, and Lasito 2022). This evidences the value of a variety of feedback mechanisms rather than reliance on traditional forms of written feedback.

Implications and further directions

Feedback in undergraduate education is commonplace, and rubrics are standard practice. In the doctoral context, however, feedback is provided predominantly through written annotations on student work, without rubrics. Recent feedback literacy research in undergraduate education points to students as active agents, and we know that students value rubrics as guides (Grainger et al. 2017); this is not the case in doctoral education, where there is an assumption that students and supervisors are already feedback literate. Due to the secrecy surrounding the examination process in doctoral education, it is difficult to calibrate examiner judgments to ensure that the quality of PhD work meets a universally accepted standard.

Consequently, we argue, there is a pressing need to consider the use of rubrics in doctoral education, not as summative evaluative tools but as formative and self-regulating tools for students, especially in the pre-confirmation stages of candidature when attrition is at its highest. Guidelines provided by institutions to examiners are generally not explicit enough, and examiners tend to ignore them, using their own tacit or implicit criteria when making judgments.

We know from the published research that there are common characteristics of quality across many disciplines and common structures for organising and presenting a thesis. We acknowledge that there are alternative forms for presenting a thesis; for example, a grounded theory thesis does not have a conventional literature review, and a post-qualitative methodology does not require a conclusion. We also acknowledge the arguments in the literature about the misalignment of rubrics and doctoral assessment. However, we make the assumption that these constraints can be overcome: each of the organisational elements of a doctoral thesis can be treated as an assignment, and each could be assessed discretely using a ‘good’ rubric with criteria, standards and standards descriptors that explicitly describe behaviours that can be evaluated. We suggest that an explicit rubric would be an additional source of feedback and would assist students in drafting their confirmation chapters.

Building on the pioneering work of Lovitts (2007), more than 15 years ago, we contend that the development of methodology-specific, discipline-specific, even task-specific rubrics, initially targeted at the preconfirmation stage of doctoral candidature, would benefit doctoral candidates and, hence, could be a priority for researchers in doctoral education. We suggest research into the development of a generic rubric framework that can be customised and/or applied to various contexts and further adapted, consisting of criteria, standards and performance-based standards descriptors that describe the behaviours at each of the standards. In doing so, we hope that such a framework will provide pre-confirmation students with an additional feedback mechanism and, importantly, explicit guidance on what is expected in the various chapters of a confirmation document.

Our research suggests that a series of generic rubrics could be developed, commencing with the introductory section. We propose the following criteria, developed from the existing cross-disciplinary literature: significance of research; knowledge of sources; validity of research questions; rationale/justification for the research; contribution to knowledge; writing according to genre; and progress as a researcher. Further draft rubrics have been developed for subsequent sections, such as the literature review and methods. We are currently piloting these rubrics and will report results in due course.

Disclosure statement of generative AI and AI-assisted technologies in the writing process

During the preparation of this work the author(s) used Elicit in order to generate a reference list of key authors. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.

Additional information

Notes on contributors

Peter Grainger

Dr Peter Grainger is a Senior Lecturer in the School of Education and Tertiary Access at the University of the Sunshine Coast. He is the School’s Higher Degree by Research (HDR) Coordinator. Peter’s research interests relate to educational assessment, particularly feedback and rubrics. He has published over 30 peer reviewed research outputs over the past 5 years.

Michael Carey

Dr Michael Carey is an Associate Professor in Education, School of Education and Tertiary Access (SETA), University of the Sunshine Coast. His research interests include language testing and assessment validation, particularly in the field of English language education. He is an experienced HDR supervisor and co-coordinates the capstone MEd Research course that is a pre-requisite for all pre-confirmation PhD students in SETA.

Craig Johnston

Dr Craig Johnston is a Lecturer in Education at the University of the Sunshine Coast, an historian by training and inclination, and an early career researcher. Drawing on the breadth and depth of his subject knowledge and recency of both high school and university teaching practice, he is interested in the uses and shortcomings of rubrics in providing actionable feedback to students.

References

  • Australian Government Department of Education. 2019. Higher Degrees by Research Cohort Analysis, 2007-2017.
  • Bazerman, C. 2009. “Genre and Cognitive Development: Beyond Writing to Learn.” In Genre in a Changing World, edited by Bazerman, C., Figueiredo, D., and Bonini, A., 279–294. West Lafayette: The WAC Clearing house.
  • Bitchener, J., H. Basturkmen, and M. East. 2010. “The Focus of Supervisor Written Feedback to Thesis/Dissertation Students.” International Journal of English Studies 10 (2): 79–97. doi:10.6018/ijes/2010/2/119201.
  • Boud, D., and P. Dawson. 2023. “What Feedback Literate Teachers Do: An Empirically-Derived Competency Framework.” Assessment & Evaluation in Higher Education 48 (2): 158–171. doi:10.1080/02602938.2021.1910928.
  • Bourke, S., A. Holbrook, T. Lovat, and P. Farley. 2004. “Attrition, Completion and Completion Times of PhD Candidates.” Paper presented at the AARE Annual Conference, Melbourne, November 28–December 2.
  • Bourke, S., and A. Holbrook. 2013. “Examining PhD and Research Masters Theses.” Assessment & Evaluation in Higher Education 38 (4): 407–416. doi:10.1080/02602938.2011.638738.
  • Carless, D. 2015. Excellence in University Assessment: Learning from Award-Winning Practice. London: Routledge.
  • Carless, D., and D. Boud. 2018. “The Development of Student Feedback Literacy: Enabling Uptake of Feedback.” Assessment & Evaluation in Higher Education 43 (8): 1315–1325. doi:10.1080/02602938.2018.1463354.
  • Carless, D., and N. Winstone. 2023. “Teacher Feedback Literacy and Its Interplay with Student Feedback Literacy.” Teaching in Higher Education 28 (1): 150–163. doi:10.1080/13562517.2020.1782372.
  • Carless, D., J. Jung, and Y. Li. 2023. “Feedback as Socialization in Doctoral Education: Towards the Enactment of Authentic Feedback.” Studies in Higher Education 49 (3): 534–545. doi:10.1080/03075079.2023.2242888.
  • Carter, S., and V. Kumar. 2017. “‘Ignoring Me is Part of Learning’: Supervisory Feedback on Doctoral Writing.” Innovations in Education and Teaching International 54 (1): 68–75. doi:10.1080/14703297.2015.1123104.
  • Chetcuti, D., J. Cacciottolo, and N. Vella. 2022. “What Do Examiners Look for in a PhD Thesis? Explicit and Implicit Criteria Used by Examiners across Disciplines.” Assessment & Evaluation in Higher Education 47 (8): 1358–1373. doi:10.1080/02602938.2022.2048293.
  • Dally, K., A. Holbrook, T. Lovat, and J. Budd. 2019. “Examiner Feedback and Australian Doctoral Examination Processes.” Australian Universities Review 61 (2): 31–41.
  • Dawson, P., M. Henderson, P. Mahoney, M. Phillips, T. Ryan, D. Boud, and E. Molloy. 2019. “What Makes for Effective Feedback: Staff and Student Perspectives.” Assessment & Evaluation in Higher Education 44 (1): 25–36. doi:10.1080/02602938.2018.1467877.
  • Delamont, S., P. Atkinson, and O. Parry. 2000. The Doctoral Experience. London: Falmer. doi:10.4324/9780203451403.
  • Denicolo, P., D. Duke, and J. Reeves. 2020. Delivering Inspiring Doctoral Assessment. London: Sage.
  • Elgar, F. J. 2003. PhD Degree Completion in Canadian Universities. Nova Scotia, Canada: Dalhousie University, 1–31.
  • Gardner, S. K., and B. Gopaul. 2012. “The Part-Time Doctoral Student Experience.” International Journal of Doctoral Studies 7: 063–078. http://ijds.org/Volume7/IJDSv7p063-078Gardner352.pdf. doi:10.28945/1561.
  • Golding, C., S. Sharmini, and A. Lazarovitch. 2014. “What Examiners Do: What Thesis Students Should Know.” Assessment & Evaluation in Higher Education 39 (5): 563–576. doi:10.1080/02602938.2013.859230.
  • Grainger, P. 2020. “How Do Pre-Service Teacher Education Students Respond to Assessment Feedback?” Assessment & Evaluation in Higher Education 45 (7): 913–925. doi:10.1080/02602938.2015.1096322.
  • Grainger, P. R., D. Heck, and M. D. Carey. 2018. “Are Assessment Exemplars Perceived to Support Self-Regulated Learning in Teacher Education?” Frontiers in Education 3: 60. doi:10.3389/feduc.2018.00060.
  • Grainger, P., K. Purnell, and R. Zipf. 2008. “Judging Quality through Substantive Conversations between Markers.” Assessment & Evaluation in Higher Education 33 (2): 133–142. doi:10.1080/02602930601125681.
  • Grainger, P., M. Christie, G. Thomas, S. Dole, D. Heck, M. Marshman, and M. Carey. 2017. “Improving the Quality of Assessment by Using a Community of Practice to Explore the Optimal Construction of Assessment Rubrics.” Reflective Practice 18 (3): 410–422. doi:10.1080/14623943.2017.1295931.
  • Hafner, J. C., and P. M. Hafner. 2003. “Quantitative Analysis of the Rubric as an Assessment Tool: An Empirical Study of Student Peer-Group Rating.” International Journal of Science Education 25 (12): 1509–1528. doi:10.1080/0950069022000038268.
  • Holbrook, A. 2001. PhD Examination – Assessment’s Least Mapped Frontier. Paper presented at the AARE Conference, Fremantle, December 2001.
  • Holbrook, A., S. Bourke, T. Lovat, and K. Dally. 2004. “Investigating Ph.D. Thesis Examination Reports.” International Journal of Educational Research 41 (2): 98–120. doi:10.1016/j.ijer.2005.04.008.
  • Ivankova, N. V., and S. L. Stick. 2007. “Students’ Persistence in a Distributed Doctoral Program in Educational Leadership in Higher Education: A Mixed Methods Study.” Research in Higher Education 48 (1): 93–135. doi:10.1007/s11162-006-9025-4.
  • Jeyaraj, J. J., W. K. Too, and E. E. Lasito. 2022. “A Framework for Supporting Postgraduate Research Writing: Insights from Students’ Writing Experiences.” Higher Education Research & Development 41 (2): 405–419. doi:10.1080/07294360.2020.1849037.
  • Jiranek, V. 2010. “Potential Predictors of Timely Completion among Dissertation Research Students at an Australian Faculty of Sciences.” International Journal of Doctoral Studies 5: 001–013. http://ijds.org/Volume5/IJDSv5p001-013Jiranek273.pdf. doi:10.28945/709.
  • Johnston, S. 1997. “Examining the Examiners: An Analysis of Examiners’ Reports on Doctoral Theses.” Studies in Higher Education 22 (3): 333–347. doi:10.1080/03075079712331380936.
  • Jones, M. 2013. “Issues in Doctoral Studies – Forty Years of Journal Discussion: Where Have we Been and Where Are we Going?” International Journal of Doctoral Studies 8: 083–104. doi:10.28945/1871.
  • Kearns, H., M. Gardiner, and K. Marshall. 2008. “Innovation in PhD Completion: The Hardy Shall Succeed (and Be Happy!).” Higher Education Research & Development 27 (1): 77–89. doi:10.1080/07294360701658781.
  • Kim, D., and C. Otts. 2010. “The Effect of Loans on Time to Doctorate Degree: Differences by Race/Ethnicity, Field of Study, and Institutional Characteristics.” Journal of Higher Education 81 (1): 1–32. doi:10.1080/00221546.2010.11778968.
  • Kim, K. M. 2018. “Academic Socialization of Doctoral Students through Feedback Networks: A Qualitative Understanding of the Graduate Feedback Landscape.” Teaching in Higher Education 23 (8): 963–980. doi:10.1080/13562517.2018.1449741.
  • Kumar, V., and E. Stracke. 2007. “An Analysis of Written Feedback on a PhD Thesis.” Teaching in Higher Education 12 (4): 461–470. doi:10.1080/13562510701415433.
  • Kumar, V., and E. Stracke. 2017. “Reframing Doctoral Examination as Teaching.” Innovations in Education and Teaching International 55 (2): 219–227. doi:10.1080/14703297.2017.1285715.
  • Laurie, T., and L. Grealy. 2023. “Curious Care: Tacit Knowledge and Self-Trust in Doctoral Training.” Pedagogy, Culture & Society. doi:10.1080/14681366.2023.2255220.
  • Lott, J. L., S. Gardner, and D. A. Powers. 2010. “Doctoral Student Attrition in the STEM Fields: An Exploratory Event History Analysis.” Journal of College Student Retention: Research, Theory & Practice 11 (2): 247–266. doi:10.2190/CS.11.2.e.
  • Lovat, T., A. Holbrook, S. Bourke, H. Fairbairn, M. Kiley, B. Paltridge, and S. Starfield. 2015. “Examining Doctoral Examination and the Question of the Viva.” Higher Education Review 47 (3): 5–23.
  • Lovitts, B. E. 2007. Making the Implicit Explicit: Creating Performance Expectations for the Dissertation. Sterling, VA: Stylus Publishing.
  • Lovitts, B. E., and C. Nelson. 2000. “The Hidden Crisis in Graduate Education: Attrition from Ph.D. Programs.” Academe 86 (6): 44. doi:10.2307/40251951.
  • Manjet, K. 2016. “Graduate Students’ Needs and Preferences for Written Feedback on Academic Writing.” English Language Teaching 9 (12): 79–88.
  • McAlpine, L., and C. Amundsen. 2011. “To Be or Not to Be? The Challenges of Learning Academic Work.” In Doctoral Education: Research-Based Strategies for Doctoral Students, Supervisors and Administrators, edited by L. McAlpine and C. Amundsen, 1–13. Dordrecht: Springer.
  • Mullins, G., and M. Kiley. 2002. “‘It’s a PhD, Not a Nobel Prize’: How Experienced Examiners Assess Research Theses.” Studies in Higher Education 27 (4): 369–386. doi:10.1080/0307507022000011507.
  • Nerad, M., D. Bogle, U. Kohl, C. O’Carroll, C. Peters, and B. Scholz, eds. 2022. Towards a Global Core Value System in Doctoral Education. London: UCL Press.
  • Nieminen, J. H., and D. Carless. 2023. “Feedback Literacy: A Critical Review of an Emerging Concept.” Higher Education 85 (6): 1381–1400. doi:10.1007/s10734-022-00895-9.
  • Paltridge, B. 2017. “Peer Review in Academic Settings.” In The Discourse of Peer Review. London: Palgrave Macmillan. doi:10.1057/978-1-137-48736-0_1.
  • Paré, A. 2011. “Speaking of Writing: Supervisory Feedback and the Dissertation.” In Doctoral Education: Research-Based Strategies for Doctoral Students, Supervisors and Administrators, edited by L. McAlpine and C. Amundsen, 59–74. Dordrecht: Springer.
  • Park, C. 2007. Redefining the Doctorate. York: The Higher Education Academy. http://eprints.lancs.ac.uk/435/1/RedefiningTheDoctorate.pdf
  • Pereira, D., M. Assunção, and L. Niklasson. 2016. “Assessment Revisited: A Review of Research in Assessment and Evaluation in Higher Education.” Assessment & Evaluation in Higher Education 41 (7): 1008–1032. doi:10.1080/02602938.2015.1055233.
  • Perelman, L. 2018. “Towards a New NAPLAN: Testing to the Teaching.” Journal of Professional Learning 2: 1–52. https://cpl.asn.au/journal/semester-2-2018/towards-a-new-naplan-testing-to-the-teaching.
  • Poole, B. 2015. “The Rather Elusive Concept of ‘Doctorateness’: A Reaction to Wellington.” Studies in Higher Education 40 (9): 1507–1522. doi:10.1080/03075079.2013.873026.
  • Reddy, Y. M., and H. Andrade. 2010. “A Review of Rubric Use in Higher Education.” Assessment & Evaluation in Higher Education 35 (4): 435–448. doi:10.1080/02602930902862859.
  • Sadler, D. R. 1989. “Formative Assessment and the Design of Instructional Systems.” Instructional Science 18 (2): 119–144. doi:10.1007/BF00117714.
  • Sadler, D. R. 2009. “Indeterminacy in the Use of Preset Criteria for Assessment and Grading in Higher Education.” Assessment & Evaluation in Higher Education 34 (2): 159–179. doi:10.1080/02602930801956059.
  • Sadler, D. R. 2010. “Beyond Feedback: Developing Student Capability in Complex Appraisal.” Assessment & Evaluation in Higher Education 35 (5): 535–550. doi:10.1080/02602930903541015.
  • Stracke, E., and V. Kumar. 2020. “Encouraging Dialogue in Doctoral Supervision: The Development of the Feedback Expectation Tool.” International Journal of Doctoral Studies 15: 265–284. doi:10.28945/4568.
  • Tinkler, P., and C. Jackson. 2000. “Examining the Doctorate: Institutional Policy and the PhD Examination Process in Britain.” Studies in Higher Education 25 (2): 167–180. doi:10.1080/713696136.
  • Vaccari, D. A., and S. Thangam. 2010. A Proposed Doctoral Assessment Procedure and Rubric for Science and Engineering. Paper presented at 2010 Annual Conference & Exposition, Louisville, Kentucky.
  • Visser, I., L. Chandler, and P. Grainger. 2017. “Engaging Creativity: Employing Assessment Feedback Strategies to Support Confidence and Creativity in Graphic Design Practice.” Art, Design & Communication in Higher Education 16 (1): 53–67. doi:10.1386/adch.16.1.53_1.
  • Wellington, J. 2013. “Searching for Doctorateness.” Studies in Higher Education 38 (10): 1490–1503. doi:10.1080/03075079.2011.634901.
  • Williams, G., S. Bjarnason, and C. Loder. 1995. Postgraduate Education in England: Report of a ‘Dipstick’ Survey. London: Centre for Higher Education Studies, Institute of Education, University of London. Reproduced in Review of Postgraduate Education: Evidence Volume (May 1996), Section 1.
  • Winstone, N. E., and D. Boud. 2020. “The Need to Disentangle Assessment and Feedback in Higher Education.” Studies in Higher Education 47 (3): 656–667. doi:10.1080/03075079.2020.1779687.
  • Winstone, N., and D. Carless. 2019. Designing Effective Feedback Processes in Higher Education: A Learning-Focused Approach. Abingdon: Routledge.
  • Woodward-Kron, R. 2007. “Negotiating Meanings and Scaffolding Learning: Writing Support for non-English Speaking Background Postgraduate Students.” Higher Education Research & Development 26 (3): 253–268. doi:10.1080/07294360701494286.
  • Wright, T., and R. Cochrane. 2000. “Factors Influencing Successful Submission of PhD Theses.” Studies in Higher Education 25 (2): 181–195. doi:10.1080/713696139.