
Polyvalent Practices and Heteropraxis as Heuristic: A Survey of Doctoral Examination Processes in Rhetoric and Composition


Abstract

While scholarship in rhetoric and composition has deliberated its disciplinary identity, we do not yet have a current account of how pluralistic approaches to curriculum at the doctoral level professionalize graduate students as teachers, researchers, and future faculty. This article describes a survey distributed to program directors in the Consortium of Doctoral Programs in Rhetoric and Composition in September 2020. Doctoral examinations in the field differ widely in format and function to fit local needs, a heteropraxis of graduate curriculum that this article frames as a set of heuristic questions for PhD program administrators to use.

Doctoral examinations differ widely in format and can play a variety of roles in doctoral curriculum, from testing competency in core knowledge covered in coursework to assessing a student’s aptitude for self-directed research.Footnote1 In addition to these stated purposes, doctoral exams appear to serve other ends as well, including preparing graduate students to eventually become career academics. Two decades ago, CitationHeidi Estrem and Brad Lucas recounted the historical development of comprehensive examinations in rhetoric and composition PhD programs, largely drawing attention to the ambiguity in how the exams were administered and assessed. Importantly, their report catalogued the range of exam formats, practices, and purposes, revealing a wide array of test structures.

In this essay I describe the results of an IRB-reviewed survey reflecting responses from 40 PhD programs in rhetoric and composition, offering a more current account of the formats and purposes of doctoral examinations. Here I use the term “doctoral examinations” (abbreviated doctoral exams) to refer to exams taken between coursework and dissertation (e.g., preliminary exams, qualifying exams, comprehensive exams, field exams, core exams, content exams, and concentration exams). By asking how doctoral exams come to be the way they are, this study also investigates discourses and practices of disciplinarity in rhetoric and composition itself. Where CitationEstrem and Lucas’ critical interrogation of doctoral exam practices points to some of the pitfalls that come with curricular pluralism, I work from their study and the findings of this follow-up survey to argue that the curricular pluralism of doctoral exams is heteropraxial and can legitimately serve different purposes and functions in different programs. Further, drawing on this new round of programmatic survey findings, I share in the discussion section of this essay a heuristic model designed for PhD programs to use reflectively to self-assess and reconsider their exam practices.

CitationEstrem and Lucas established a benchmark for the systematic survey of comprehensive exams in rhetoric and composition. Their work identifies doctoral examinations as a focus of disciplinary and curricular consideration. While previous studies link core coursework and dissertation topics to professionalization, CitationEstrem and Lucas inquire about the purposes of doctoral exams, identifying “Critical Thinking, Expert Knowledge, Research/Teaching Ability” as their major categories (405). CitationEstrem and Lucas’ study of doctoral exams has been featured in several dissertations. From professionalization and the integration of technology in exams (CitationHurley), to the exam preparation processes for online students (CitationThomas), to doctoral processes in other disciplines such as counselor education (CitationKostohryz), criminal justice (CitationSchafer and Giblin), and nursing (CitationMawn and Goldberg), CitationEstrem and Lucas’ survey of doctoral exams has received scholarly attention across disciplines. Further, their survey opens the door to a variety of research methods for studying doctoral exams, including “microethnographic” studies of writers across disciplines in a single institution (CitationGonzález) or institutional ethnographic methods enabling analysis of students and faculty across several institutions (CitationLaFrance).

The plurality of formats and programmatic approaches to doctoral exams calls for a research methodology capable of accounting for variation and difference while at the same time recognizing trends and patterns. For situations involving polyvalent work, like doctoral exams, a boundary object framework provides a vocabulary for recognizing “how work becomes visible or invisible and then how negotiations about this status are structured” (CitationStar and Strauss 354). Central to my analysis of current exam practices, heteropraxis is a term rooted in Star’s vocabulary of boundary objects and can be understood as differences in action or practice that come about intentionally, possibly due to differences in theory or circumstance. In this essay I describe doctoral exams as heteropraxial to better understand the connections between intents, contexts, and exam practices as they appear in different forms. As CitationSusan Leigh Star describes, an important dimension of systems of standardization and measurement is their implication for variation, especially in function or use, according to region, local constraints, and beliefs (385).

In asking about doctoral exam formats, possible changes, the relationship between exams and disciplinarity, and trends in the historical development of exams in particular programs, this study identifies several key findings. While various exam formats might serve a wide variety of purposes, several survey respondents identified accessibility barriers and the unique challenges that exam formats present for students whose first written language is not English. Rather than presenting a list of specific action items PhD programs should take to correct their exam structures, I advance the findings in a heuristic format designed to invite readers to consider their exam structures and the alternatives that might be available to achieve the same goals. While CitationEstrem and Lucas’ pivotal work on doctoral exams identifies many of the most significant challenges and limitations that come with heteropraxial exam practices, this study points to the variety of ways programs can fit exam formats to the local needs of their students and program in ways that are accessible and equitable.

Methods

The underlying research methodology for this survey is institutional ethnography, a theory of investigation first developed in sociology by Dorothy CitationSmith as a person-centered methodology in contrast to strictly structural methodologies. In rhetoric and composition, it has been used most recently to examine perspectives of administrators and staff in a writing center (CitationLaFrance and Nicolas), as well as the origin of undergraduate curriculum and the role of institutional documentation in recording labor and expertise among faculty and administrative staff (CitationLaFrance). Institutional ethnography is well suited for the study of doctoral exams because these exams occur within institutional contexts in higher education. Further, because institutional ethnography developed as a practice for understanding the interdependencies of institutional work processes, it affords both a holistic and granular perspective of doctoral exams, which involve a substantial amount of work from faculty, students, and staff.

Importantly, rather than providing answers to problems, institutional ethnography leads to more complete and robust understandings of research problematics since “the problematic is generated from the data” (CitationRankin 3). The interplay between posing research questions and developing problematics exposes complexity in ways that point to broader questions of institutional rhetoric while mapping a plurality of curricular practices.

Research Questions and Problematics

This investigation began with three research questions:

  1. What are the purposes of doctoral exams in rhetoric and composition?

  2. How do doctoral examinations in rhetoric and composition assess and promote disciplinary competence or expertise in the graduate students who take them?

  3. How do doctoral examinations reflect and reinforce programmatic and departmental views of rhetoric and composition as a discipline?

While the results of the study point toward provisional answers to these questions, a pilot study for this project suggested that while research motivated by these questions could be productive, it was more likely to identify unforeseen complexity that would generate further questions. As a result, while the survey was designed to gather data that could lead to informed responses to the questions, it was also designed with open-ended response options that would allow the complexity of doctoral exam practices (including those unforeseen by the research questions) to be identified. Institutional ethnography recognizes that responses to research questions, especially those that complicate and unsettle preliminary expectations, are more valuable than tidy answers. Research questions are a productive beginning for problematics, but rather than turning to research to identify a defensible answer to the questions, problematics use research questions to identify points in institutional practice, like doctoral examinations, that seem commonplace and interrogate them. As I started thinking about the wide range of responses from program directors, I realized the data were opening a valuable perspective on heteropraxis, varieties of practical pedagogical application, which I frame heuristically in this article.

Within the methodology of institutional ethnography, a complex problem like the variety of doctoral exam formats can be framed as a research problematic or a “problematic of an inquiry” (CitationCampbell & Gregor 47), which is less a problem to be solved and more a productive tension that helps researchers notice “questions that may not have been posed or a set of puzzles that do not yet exist in the form of puzzles but are ‘latent’ in the actualities of the experienced world” (D. E. CitationSmith 91). In most programs, doctoral exam processes surface on a regular interval, either annually or as students advance to that stage of the program—they are perennial as an institutional practice, but the meaning(s) of this practice across the discipline are seemingly latent. Rather than ending with a solution to a problem, this approach productively complicates research questions. CitationMarjorie DeVault and Liza McCoy describe research problematics as “grabbing a ball of string, finding a thread, and then pulling it out” (755). In this case, identifying the preliminary research questions was akin to finding the thread, and using the survey to invite respondent feedback was analogous to pulling out the research problematics. While the survey I describe in this article provides a basis for preliminary responses to these questions, I argue that the plurality of exam approaches identified through the survey is better understood through a research problematics perspective, and the survey question formats, as methods, were designed to advance those problematics.

That all PhD programs who responded to this survey administer some kind of doctoral exam is evidence of their curricular stability. That each exam includes different tasks and sequences of events and seems to serve different disciplinary purposes suggests they are adaptable. This survey, following in a tradition of programmatic and disciplinary study in rhetoric and composition, functions as disciplinary autoethnography, reflectively seeking to better understand the changing complexity of the field while simultaneously modeling the use of empirical research methods in writing studies. As CitationBrown et al. write in their 1999 survey, one purpose of the study was to “provide the discipline itself with a tool for reflection, assessment, and purposeful goal-setting” (223). One overarching narrative developed by these studies is a historical account of “growth” and “consolidation” as programs across the field begin to gain recognition and settle into maturity, while at the same time pointing to the centrality of professionalization as both a theme and an outcome of the exams themselves.

The survey instrument and distribution methods were reviewed by the IRB at my institution (#2020-780). Questions were formatted as multiple choice selections, most including an option for custom text entry, a design that sought to allow respondents either to quickly click through the survey or to provide longer qualitative responses. No questions were required, and respondents could end the survey at any time. I clustered questions into four thematic blocks:

  1. Exam Format (and Changes)

  2. Historical and Long-Term Projected Changes to Exam

  3. Exam Purposes and View of Discipline

  4. How Exam Structures are Developed

The survey included 17 questions in total. Since this survey was directed to PhD programs in rhetoric and composition, I used the publicly facing directory list of the Consortium of Doctoral Programs in Rhetoric and Composition (“CitationCCCC Doctoral Consortium”) to distribute a private link to the survey. Rather than sending one mass email to all 81 contacts on the list, I prepared a standardized invitation but emailed each contact individually with a personalized salutation line to solicit greater participation. The survey was open between September 14, 2020 and October 1, 2020, a window chosen to avoid adding inconvenience for program administrators during the first weeks of the academic year. Of the 81 programs contacted in the survey distribution, 40 responded (a 49.38% response rate).
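The response rate reported above is simple arithmetic; a short Python sketch reproduces the figure (the counts come from the survey description, while the two-decimal rounding convention is my assumption):

```python
# Counts reported in the survey description above
contacted = 81  # Consortium directory contacts emailed individually
responded = 40  # programs that returned the survey

# Percentage response rate, rounded to two decimal places (assumed convention)
response_rate = round(responded / contacted * 100, 2)
print(f"{response_rate}%")  # → 49.38%
```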

Results and Analysis

Exam Formats, Goals, and (Possible) Changes

Two decades ago, CitationEstrem and Lucas’ observational survey found that not only were exams administered using a wide range of formats across programs, but the means of administering the exams and evaluating their success were equally varied and often nebulous. They critique ambiguity and unevenness in assessment, holding that “the comprehensive exam most often posits a student self as one that has sense of freedom and agency, but ultimately remains bound by disciplinary hegemony” (407), and for a discipline as fluid as rhetoric and composition, this hegemony is hardly static or consistent. While I agree with the critical points CitationEstrem and Lucas observe about doctoral exams in general, the survey I present seeks to understand the variations in exam formats and evaluation methods as a pluralistic expression of disciplinary practice, standards that could be described as “heteropraxial,” appearing as different practices according to local constraints and serving different purposes, but nonetheless representing a comparable standard (385). Furthermore, in addition to asking program directors about exam practices in actual use in their programs, this survey sought to create a listening space to learn about possible and potential changes to exams being considered, affording just as much attention to possible future practices as to those currently established.

A primary goal of this survey was to gain a better sense of the actual formats of doctoral exams currently being used in PhD programs in rhetoric and composition. Since the survey was distributed in September 2020, amid the COVID-19 pandemic, this block of questions also asked respondents about short-term or temporary changes to exam structures in response to the pandemic. Surprisingly, a majority (21 of 37) indicated there were no plans for changes to exam processes, with only 5 indicating there would be changes and 11 responding “maybe” (see Table 1). These responses suggest most of the programs represented in this survey planned to continue using their existing formats for doctoral exams.

Table 1. Number of PhD Programs considering modifications to their doctoral exam process within 12 months of October 2020.

The first question of the survey was designed to capture a snapshot of all current formats for doctoral examinations. In asking “What format(s) does the doctoral exam in your program follow?,” the question allowed respondents to select all applicable formats from a list or to write in responses in a field labeled “other.” The complete count of responses is displayed in Table 2. The formats listed as selectable options were developed from exam formats described in CitationEstrem and Lucas’ study (403) as well as preliminary observations of program websites.

Table 2. Formats of Doctoral Exams in Rhetoric and Composition PhD Programs.

Most respondents selected more than one exam format, with the combined responses of a written exam (either prepared by the student or the program) and an oral exam/defense appearing in the majority (29 of 40) of responses. Institutional ethnography uses the term “social coordination” to refer to work processes in institutional contexts that “lend value to particular modes of doing, knowing, and being” (CitationLaFrance 39). When doctoral exams in specific programs adopt multiple constituent parts, the organizational structures and processes that mediate those parts play just as much a part in the curriculum as the exam exercises themselves. Consider, for example, the dynamics between oral and written elements of exams. When students first write and then orally defend their exam content, social coordination between exam parts situates the writing as primary, as rhetoric that the candidate must be able to defend verbally. Arranged differently, a PhD program might ask students to develop a reading list in coordination with a faculty member or committee, followed by a written exam, thus positioning the oral negotiation of the reading list as the primary activity. Across all cases that include more than one exam format, a further question remains: how are the exam components ordered, and what assumptions about competence or performance are tacitly asserted through the ordering of exam parts? In most cases, programmatic use of multiple constituent exam parts seems to lend itself to an interdependent narrative of understanding, but these broader goals are not always stated.

The custom text entry field labeled “other” was also used frequently by respondents, with 17 custom text entries (see Table 3). Among a variety of clarifications about exam formats, three specific themes surfaced repeatedly. Respondents mentioned the exam as 1) scaffolding toward the dissertation; 2) offering various options for students to complete the exam; and 3) being formed in consultation between faculty and students. Responses to follow-up questions about planned changes in the upcoming academic year were few because few programs reported actual changes planned for the following 12 months. Reports of hypothetical changes and changes in the longer term, as posed by the following question, elicited far more follow-up responses.

Table 3. Open-Ended Text Entries Responding to “What format does the doctoral exam in your program follow?”

The relationship of the exam to other parts of the graduate curriculum came further into focus through the text entry responses outlined in Table 3. Most notably, some programs directly include the dissertation prospectus while others specifically describe the exam as work that is distinct from, but perhaps leading to, the dissertation prospectus. One respondent writes about this ambiguous distinction between curricular components, being “not sure whether to check prospectus, as that is required for candidacy but is also considered separate from the exam.” While exams on their own do not usually advance a student to candidacy, they can be part of the dissertation prospectus process, which usually does. In advancing a heteropraxial view of doctoral exams, I contend these differences in practice can reasonably serve different institutional needs.

The arrangement and interplay of various doctoral exam processes also point to the role of faculty as mentors in the process. Faculty might mentor students directly through the exam process by working with them to develop a reading list or offering advice about how to navigate an oral defense. Further, faculty could view the exam process itself as an opportunity to provide integrated professional mentoring to prepare students for writing dissertations and entering the academic job market (CitationMoeggenberg), a view reflected in some programs’ exam approaches in either oral defense (preparation for interviews) or through the development of scholarship and a focused research agenda.

Historical and Long-Term Projected Changes to Exam

The following set of questions expanded the temporal scope by asking about any changes made to exam formats in the last ten years and any changes that might be considered for the future. While institutional ethnography seeks to answer the question of “how things come to be,” the future-oriented questions I included in the survey were designed to extend this analysis to include “how things could come to be,” gaining a better sense of the histories and trajectories of exam format development. Two themes, accessibility barriers and the unique challenges faced by non-native writers of English, were identified through custom text-entry responses to survey questions about potential changes to exam processes. In considering the ways these cases problematize conventional exam structures, I suggest not only ways that programs can make exams more accessible, but also how these responses point to structural presuppositions about ability and language, as well as to my own privilege, which became highlighted through these research methods.

Accessible Exam Design

One question asked: “Is your program considering any future changes to the exam at this time?” By emphasizing “considering” rather than “planning,” this question stepped back from the certainties of institutional commitment in favor of gleaning ideas and aspirations. While only a few respondents indicated they were definitely considering changes (3 of 37) or probably considering changes (6 of 37), 17 respondents included descriptions of changes being considered, several of which acknowledged challenges for creating fair exam formats and processes. I used a first-cycle “descriptive coding” approach (CitationSaldaña 102) to thematically categorize qualitative responses to this question along the following eight thematic categories: major changes in exam form, reading list, time, accessibility, non-native writers of English and structural inequality, collaboration between graduate students and faculty, oral defense/exam, and assessment. Textual examples and frequency counts of these themes are displayed in Table 4.

Table 4. Code Book for Qualitative Text Responses Describing Changes to Exams Being Considered by Programs.
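The frequency counts behind a code book like this amount to tagging each open-ended response with one or more thematic codes and then tallying the tags. A minimal Python sketch of that tallying step follows; the code labels are the eight categories named above, but the tagged responses here are invented placeholders, not the survey data:

```python
from collections import Counter

# Hypothetical coded responses: each open-ended answer is tagged with
# one or more of the eight descriptive codes from the code book above.
coded_responses = [
    ["major changes in exam form", "reading list"],
    ["accessibility", "time"],
    ["oral defense/exam"],
    ["non-native writers of English and structural inequality", "time"],
    ["collaboration between graduate students and faculty"],
    ["assessment", "reading list"],
]

# Frequency count per theme, as would appear in the code book table
theme_counts = Counter(code for response in coded_responses for code in response)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Because responses can carry multiple codes, the theme counts sum to more than the number of responses, which is why code books report frequencies per theme rather than per respondent.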

Some responses explained the trajectory of possible change by weighing the considerations and values that brought about existing formats and might lead to future changes. As one respondent explained, weighing these changes is hardly easy, and notions of fairness become complicated: “With the large range of abilities and learning styles, it's a challenge to develop an exam that is equally accessible and fair to everyone.” This respondent points to a dilemma: designing a more accessible exam format is challenging because a change that seems to alleviate one accessibility pinch point could cause a different one. At the same time that exam processes must be accessible to the graduate students who take them, the process of designing and evaluating the exams also needs to be accessible for the faculty members who administer them. To the extent that exams are a form of educational technology, this study responds to CitationSushil K. Oswal’s call for ethnographic work documenting the use and retrofit of technology in learning and working environments (55). The heuristic questions I pose in the concluding discussion of this essay include prompts that directly ask programs to reflect on the accessibility of their exam process, and my hope is that through reflection, discussion, and documentation, accessibility will become a more central consideration for exam design.

This response regarding accessible exam design points to a number of suggestions exam designers and administrators could consider. Accessibility in exam design can be developed both as a general guiding practice and as specific needs arise. Accessible document interfaces are a starting point, but exam processes should also be considered. While most exams are stressful, access barriers should not be a source of stress. Timed exams might motivate some students or appear as a manageable stress factor for others, but for some students, timed exams can become an access barrier. From the standpoint of assessment, access barriers are problematic because they render the exam useless. If an exam is inaccessible, there is no way of knowing whether a student’s poor performance is due to not knowing the material or to an inaccessible assessment format preventing them from demonstrating their knowledge.

Enacting accessible design is not always straightforward or easy, and significant ambiguity around who is responsible for developing accessible accommodations can present challenges. Many disabilities are not visible and might not be recognized unless a person discloses them. Accommodation letters are a minimal threshold for accessibility; however, accessible curriculum is served best through proactive work that “troubleshoots before the trouble” (CitationRamler 2). If an accessibility accommodation has been made in the past, perhaps multiple times, would it not make sense to incorporate those changes into the regular format of the exam process? If an accessibility accommodation can be reasonably normalized as part of the standard exam format without losing any of the essential dimensions, why not make the change across the board so exams are more accessible for everyone?

Accessibility barriers to exams can develop for many reasons, but for non-native English writers, a specific set of challenges was identified through the survey. Timed test-taking constraints are a particular challenge, as non-native English writers might understand concepts and be able to analyze and synthesize them fully but might need additional time to do so.

Non-native English Writers

Another striking response to the survey question about prospective changes starts by identifying additional test time as an accommodation for non-native English writers before posing a series of probing and open-ended questions aimed at accommodation and equity more broadly: “Should that accommodation be standardized? Are there better ways of handling exams for our increasing number of international graduate students and international scholars?” This response points to two common challenges faced by programs. First, in the spirit of fairness, how should exam structures consider the reality that many PhD students are non-native writers of English? Implicit in this question is the understanding that when exams test for outcomes, their design involves linguistic bias that is not an indicator of a student’s scholarly ability. Second, this response asks a challenging question about standardization. It seems the respondent is unsure about the individualized discretion the current format grants to advisors and asks whether standardization would be an appropriate accommodation or whether there is a more equitable approach.

While the challenges with doctoral exams faced by non-native English writers appear similar to the previously mentioned set of accessibility concerns, I would like to assert that multilingualism is certainly not a disability (if anything, it is the opposite). Writing studies scholarship has, for decades, grappled with questions of English pedagogy and linguistic diversity, punctuated by pivotal works including CCCC’s position statement “Students’ Right to Their Own Language” (CitationConference on College Composition and Communication), A. Suresh Canagarajah’s ethnographic work with multilingual writers, CitationAsao Inoue’s 2019 CCCC Chair’s Address on White Language Supremacy, and April CitationBaker-Bell’s analysis of Anti-Black Linguistic Racism. Doctoral examination processes are a flashpoint in graduate curriculum where tensions between multilingualism and traditionalist notions of academic rigor and language norms come to a head.

Related to the pressures of timed exams for non-native English writers, survey respondents also wrote about the importance of incorporating revision into their exam structures. One respondent articulated a view that the revision process itself should be a more integral part of the exam for students: “I would like to see the opportunity to revise. We do give students a chance to revise, if necessary, but I'd like to see it as a part of the process, so the pressure is lessened.” Of course, revision is widely considered an important and integral part of any other form of writing, so why should the opportunity to revise be omitted from a doctoral exam process?

As a form of enacted reflection, revision can also be a means for better understanding the relationship between social positions, research methods, and privileged perspectives. As a multiracial, (temporarily) able-bodied man whose first language is English, I constructed the survey questions without specific regard for the experiences of graduate students with disabilities and non-native writers of English, and these experiences were only made apparent through the text entry options in survey questions. A research paradigm originating from a privileged and partial perspective, no matter how sophisticated or carefully crafted, is predisposed to miss or overlook marginalized experiences. This is an important reason why I present my suggestions for improving exam processes in the form of an open-ended heuristic framed by discussion rather than as a list of best practices or normative assertions.

Maintaining relevance and currency necessitates that exam processes undergo some periodic change or review. Do these changes take place in response to perceived needs or job market demands? Do these changes precipitate naturally based on what each program recognizes it can realistically manage? Are changes developed proactively to offer nudges and new directions to developing scholars (CitationThaler and Sunstein)? Ultimately, this question invites prospective responses about change and points to the liminal space between theoretical and conceptual aspirations on one hand and substantive commitment to action on the other.

Exam Purposes and View of Discipline

Connections between doctoral exams and notions of disciplinarity in rhetoric and composition were the focus of another set of survey questions that specifically asked respondents to identify the extent to which the exam reflects or promotes particular disciplinary understandings. From a list of six possible purposes and a text entry field allowing custom entry of non-specified responses, survey respondents resoundingly indicated “preparation for independent scholarship” as a purpose of the exam (34 of 37), which is reflected elsewhere in the survey as respondents wrote about using exams to scaffold graduate student progress toward dissertation research or publishable articles.

This question permitted respondents to select as many purposes as they wished; however, the selection was not designed to identify hierarchies or ranked orders of exam purposes. Most respondents selected multiple purposes from the established list, while four respondents selected all listed purposes and one respondent selected all listed purposes and included further elaboration in the custom text entry. Responses to this question indicate doctoral exams in rhetoric and composition serve a plurality of purposes within programs and across the field. On its face, this multitude of purposes can be dizzying, and we might ask, “If we do not know why we are administering comprehensive exams, or whether they actually prove a candidate’s understanding of the field or their ability to produce quality scholarship, then what is their function?” (A. W. CitationSmith 1154). This connection between function (what exams do) and purpose (what exams aim to achieve) is not always transparent, and while the results enumerated in this survey are not complete, they do describe a wide range of possible purposes.

In addition to preparation for independent scholarship, “certification of disciplinary competence or expertise” and “ability to construct a focused reading list” were also identified as common purposes for exams. Of the custom text entries, several indicated a direct link between the doctoral exam and dissertation, for example:

“Our exams are keyed to the dissertation project explicitly,” and “Our exams are focused on the student's dissertation project.”

One important, and perhaps commonly underestimated, dimension of preparation for future work is the sense of empowerment students develop as a result of the exam process:

It's empowering. Students see that they are making connections and creating those connections (rather than being handed them by a professor). Done well, the process of reading for the exam is empowering and proves to the candidate that they really do know a lot!

Meanwhile, a few other custom entries make explicit connections to professionalization, linking doctoral examinations to “preparation for the academic job market.” While curricular and scholarly progress often surface as the stated purposes of examinations, the undercurrent of professionalization (as noted by Peirce and Enos 206) is ubiquitous. Taken together, exams can allow multiple views of the discipline, prepare students to be scholars, build confidence, and serve as preparation for employment. That exams serve different purposes in different programs points to the different roles doctoral exams might play in curriculum.

This question block also advanced Estrem and Lucas’ original list of theorized purposes for exams by asking respondents to provide a ranked order for “critical thinking,” “expert knowledge,” “research ability,” and “teaching ability.” While “expert knowledge” surfaced as the most commonly prioritized purpose for exams, “critical thinking” and “research ability” followed closely; these three purposes were closely clustered as roughly equal in priority. By contrast, “teaching ability” was overwhelmingly and consistently marked as the lowest priority for the doctoral exam (see Table 5). Despite “teaching ability” being marked as least important, a growing number of PhD programs include pedagogical components in exams, a phenomenon that would benefit from further research. Beyond academic applications, the PhD degree is applicable in many non-academic employment tracks, and students who enter PhD programs in rhetoric and composition with the intent of pursuing non-academic work could be better served by exam processes that take non-academic career trajectories into consideration.

Table 5. Four Goals of Doctoral Exams.

Two closed-ended questions asked about views regarding disciplinarity and the institutional contexts within which exams are developed and administered. Twenty-nine of 37 respondents expressed some degree of agreement that “the doctoral exam in our program reflects a programmatic or departmental view of rhetoric and composition as a discipline.” Moreover, 32 of the 37 respondents expressed some degree of agreement that “the doctoral exam in our program is designed to allow students to develop and articulate their own view of rhetoric and composition as a discipline.” Taken together, responses to these two survey questions suggest respondents value the polyvalence, or multiplicity of potential meanings and relationships, that arises between doctoral exams and disciplinary views.

Responses to the questions about departmental views of the discipline and students forming their own views suggest programs generally believe exams can simultaneously reflect a departmental view of the discipline while also allowing students to articulate their own. This point is made even more clearly when responses from individual respondents are compared across the two questions: in the vast majority of cases, respondents who expressed some agreement with the first statement also expressed agreement with the second (25 of 28). Understood together, responses to these questions express the polyvalent role doctoral exams play in advancing disciplinary views of rhetoric and composition, and this institutional accommodation of plurality is an example of how boundary objects afford flexibility within a recognizable framework.

Rather than resolving this tension, recognition of the polyvalent relationship between exams and disciplinarity renders the tension and stakes more transparent, thus enabling administrators to make more informed decisions. In the following discussion, I present the key findings of this survey in the form of open-ended questions posed heuristically as a tool that program administrators can use to inform their decision-making.

What are Doctoral Exams For? Temporal Boundary Objects and a Heuristic for Navigating Heteropraxis

Estrem and Lucas understand the plurality of exams as an entryway into critique, and they make many excellent points, especially regarding the ambiguity of exam assessment. Here I posit that pluralities of practice could be an advantage to the discipline, to programs, and to students, provided there are reasons behind them. The following set of questions is framed heuristically to help readers articulate these reasons for themselves. Drawing from Jones and Walton’s use of heuristics as design guidelines intended to serve as both a tool for evaluation and a pedagogical application (259), I pose these questions to spur programmatic reflection and recognition that exam processes can often be changed if they are not achieving their intended goals for students, faculty, or programs. Rather than prescribing a specific set of exam practices, these questions, derived from the analysis of results in this study, are intended to support reasoned difference: heteropraxis. In the same way that Jones and Walton’s heuristic for instructors using narrative to enact social justice in technical and professional classrooms is “purposely conceptual and brief, leaving room for instructors to adapt the exercises for their own use” (259), the questions that comprise the following heuristic are intentionally open-ended, seeking to invite narrative description rather than to assert specific normative stances.

  1. What is the current exam practice? This is a simple description of the process. How would you describe exams to a prospective student?

  2. Where are the details of the exam process recorded in writing? This question focuses on informational access. The specification for “in writing” ensures that information is equally available to everyone.

  3. Are all details about the process available to current and prospective PhD students? This question is related to questions 1 and 2 but is designed to draw out any points of reliance on a hidden curriculum (Acker). This question can also be taken to apply to exam questions themselves—are students allowed to read any exam questions before the exam or are they secret?

  4. How did our current exam format start? Historical genesis can matter more in some programs and less in others. If the history of the exam is unknown or incomplete, this might indicate that it needs to be carefully considered (see question 7).

  5. Has the exam format ever been revised or modified? Changes generally happen for a reason. If there have been changes, when did they happen and why? Were they temporary adjustments or were they more permanent? Have there ever been any exceptions to the exam format?

  6. What role does the exam play in the program’s current doctoral curriculum? This question interrogates function and aims to identify goals that are documented overtly as well as unintended consequences and serendipitous side effects. If the program is situated in a department, college, or school with constraints on exams, this would be a good place to note them.

  7. Why is the program interested in continuing forward with the status quo or why is the program considering changes? This question is designed to draw out reasons, rationalizations, and justifications for practice. Could these reasons be comfortably shared with graduate students? Could they be shared on the program website? If there are reasons for a justification not being public (private reasons), what are those private reasons?

  8. What kinds of accessibility challenges does our exam present to students? All exam formats present accessibility barriers. What kinds of accessibility concerns have been raised or accommodated in the past? How do students make their accessibility needs known?

  9. How does the exam process work for multilingual students? Do multilingual students work through the exam process without any more difficulty than monolingual peers? Do multilingual students have a more difficult time passing?

  10. Recognizing that exams invariably cause stress and exacerbate vulnerability, what support systems are in place for students? What support systems does the program or faculty in the program explicitly provide? What support systems does the institution provide? Do these systems work?

The main question underlying all the questions in this heuristic can be summarized as “what are doctoral exams for?” Hinging on the notion of heteropraxis, this study frames the multitude of exam practices as serving many different possible purposes. For a smaller PhD program with few faculty and graduate students, the exam process could primarily serve as a structured form of mentoring through collaborative reading list construction and a conversational oral defense. A program that situates its exam as a culminating experience of coursework might use the exam primarily as a marker indicating that students are ready to progress to more self-directed research. While each reader and program is likely to answer these questions differently, the heuristic is also designed to elicit narratives and descriptions in such a way that readers recognize their agency to shift the context of doctoral exams, to alter their formats or purposes, or to consider ways in which the same intended educational outcomes can be achieved through more humane and accessible means.

Disclosure Statement

No potential conflict of interest was reported by the authors.

Additional information

Notes on contributors

Ryan Michael Murphy

Ryan Michael Murphy is an assistant professor of business communication in the Martin V. Smith School of Business and Economics at California State University Channel Islands. He earned his Ph.D. in Rhetoric and Composition from Purdue University. His research interests include institutional analysis, business communication, and cultural and critical rhetorics.

Notes

1 I would like to thank Heidi McKee and Duane Roen for providing thoughtful and constructive feedback during the review process, and to the editorial team at RR for guiding this article to publication. Patricia Sullivan, Irwin Weiser, and Michael Salvo helped shape this article from the beginning, and I am especially grateful for the program directors who responded to the survey.

Works Cited

  • Acker, Sandra. “The Hidden Curriculum of Dissertation Advising.” The Hidden Curriculum in Higher Education, edited by Eric Margolis, Routledge, 2001, pp. 61–77.
  • Baker-Bell, April. Linguistic Justice: Black Language, Literacy, Identity, and Pedagogy. Routledge, 2020.
  • Brown, Stuart C., et al. “The Arrival of Rhetoric in the Twenty‐First Century: The 1999 Survey of Doctoral Programs in Rhetoric.” Rhetoric Review, vol. 18, no. 2, Mar. 2000, pp. 233–42.
  • Campbell, Marie, and Frances Gregor. Mapping Social Relations: A Primer in Doing Institutional Ethnography. Altamira Press, 2004.
  • Canagarajah, A. Suresh. “Autoethnography in the Study of Multilingual Writers.” Writing Studies Research in Practice: Methods and Methodologies, edited by Lee Nickoson and Mary P. Sheridan, Southern Illinois University Press, 2012, pp. 113–24.
  • “CCCC Doctoral Consortium.” The Consortium of Doctoral Programs in Rhetoric and Composition, 19 Oct. 2014. Web.
  • Conference on College Composition and Communication. Students’ Right to Their Own Language. Conference on College Composition and Communication, Fall 1974. Web.
  • Devault, Marjorie L., and Liza McCoy. “Institutional Ethnography: Using Interviews to Investigate Ruling Relations.” Institutional Ethnography as Practice, edited by Dorothy E. Smith, Rowman & Littlefield Publishers, Inc., 2006, pp. 15–44.
  • Estrem, Heidi, and Brad E. Lucas. “Embedded Traditions, Uneven Reform: The Place of the Comprehensive Exam in Composition and Rhetoric PhD Programs.” Rhetoric Review, vol. 22, no. 4, Oct. 2003, pp. 396–416. Crossref, Web.
  • González, Angela Marta. Shaping the Thesis and Dissertation: Case Studies of Writers Across the Curriculum. Dissertation, Texas Christian University, Aug. 2007.
  • Hurley, Meredith Graupner. Remediating the Professionalization of Doctoral Students in Rhetoric and Composition. Dissertation, Bowling Green State University, Dec. 2010.
  • Inoue, Asao B. 2019 CCCC Chair’s Address: How Do We Language So People Stop Killing Each Other, or What Do We Do about White Language Supremacy? Pittsburgh, PA.
  • Jones, Natasha N., and Rebecca Walton. “Using Narratives to Foster Critical Thinking About Diversity and Social Justice.” Key Theoretical Frameworks: Teaching Technical Communication in the Twenty-First Century, edited by Angela M. Haas and Michelle F. Eble, Utah State University Press, 2018, pp. 241–67.
  • Kostohryz, Katie. The Comprehensive Examination in Counsellor Education Doctoral Programs: A Study of Faculty’s Perceived Purposes. Dissertation, The Ohio State University, Aug. 2011.
  • LaFrance, Michelle. Institutional Ethnography: A Theory of Practice for Writing Studies Researchers. Utah State University Press, 2019.
  • LaFrance, Michelle, and Melissa Nicolas. “Institutional Ethnography as Materialist Framework for Writing Program Research and the Faculty-Staff Work Standpoints Project.” College Composition and Communication, vol. 66, no. 1, 2012, p. 21.
  • Mawn, Barbara E., and Shari Goldberg. “Trends in the Nursing Doctoral Comprehensive Examination Process: A National Survey.” Journal of Professional Nursing, vol. 28, no. 3, May 2012, pp. 156–62. ScienceDirect, Web.
  • Moeggenberg, Zarah C. “Job Market Mentoring in Rhetoric and Composition and Technical Communication.” Rhetoric Review, vol. 41, no. 4, Oct. 2022, pp. 297–315. Web.
  • Oswal, Sushil K. “Exploring Accessibility as a Potential Area of Research for Technical Communication: A Modest Proposal.” Communication Design Quarterly, vol. 1, no. 4, Aug. 2013, pp. 50–60. Web.
  • Peirce, Karen P., and Theresa Jarnagin Enos. “How Seriously Are We Taking Professionalization? A Report on Graduate Curricula in Rhetoric and Composition.” Rhetoric Review, vol. 25, no. 2, Apr. 2006, pp. 204–10.
  • Ramler, Mari E. “Queer Usability.” Technical Communication Quarterly, vol. 30, no. 4, Oct. 2020, pp. 1–14. Taylor and Francis, Web.
  • Rankin, Janet. “Conducting Analysis in Institutional Ethnography: Guidance and Cautions.” International Journal of Qualitative Methods, Oct. 2017. SAGE Journals, Web.
  • Saldaña, Johnny. The Coding Manual for Qualitative Researchers. 3rd ed., SAGE, 2016.
  • Schafer, Joseph A., and Matthew J. Giblin. “Doctoral Comprehensive Exams: Standardization, Customization, and Everywhere in Between.” Journal of Criminal Justice Education, vol. 19, no. 2, July 2008, pp. 275–89. Taylor and Francis + NEJM, Web.
  • Smith, Allegra W. “Language and Access: World Englishes, Universal Design for Learning, and Writing Pedagogy.” Journal of Global Literacies, Technologies, and Emerging Pedagogies, vol. V, no. II, Oct. 2020, pp. 1144–61.
  • Smith, Dorothy E. Writing the Social: Critique, Theory, and Investigations. University of Toronto Press, 1999.
  • Star, Susan Leigh. “The Ethnography of Infrastructure.” American Behavioral Scientist, vol. 43, no. 3, Nov. 1999, pp. 377–91. SAGE Journals, Web.
  • Star, Susan Leigh, and Anselm Strauss. “Layers of Silence, Arenas of Voice: The Ecology of Visible and Invisible Work.” Boundary Objects and Beyond, edited by Geoffrey C Bowker et al., MIT Press, 2015, pp. 351–73.
  • Thaler, Richard H., and Cass R. Sunstein. Nudge: Improving Decisions about Health, Wealth, and Happiness. Rev. and expanded ed., Penguin Books, 2009.
  • Thomas, Sonya C. Self-Efficacy and Preparation of Scholarly Writing: Online Doctoral Coursework to Comprehensive Examination - A Mixed Method Study. Dissertation, Capella University, Mar. 2013.