
Conference Rubric Development for STEM Librarians’ Publications


ABSTRACT

Librarians within the Engineering Libraries Division (ELD) annually publish conference papers for the American Society for Engineering Education (ASEE). The existing ASEE rubric was not sufficient for our members, so, as a committee charged with this task, we developed a new rubric. We briefly discuss the sparse literature in this area, focusing on the use of rubrics and the rationale behind them. Because of this lack of literature, our committee relied primarily on additional sources such as rubrics from other professional organizations in STEM and library fields. Our rubric is designed to encourage substantive feedback and growth of authors during the process, while clarifying the expectations for submissions. It consists of overall guidance and specific requirements, with flexibility for the different research methods and applications expected (e.g., work-in-progress versus completed research, quantitative versus qualitative approaches). We implemented this rubric successfully for the 2021 conference cycle, but will further refine it as needed based on feedback following future conferences. Given the scarcity of literature on conference peer review, we hope that by sharing our work, others may also consider and improve their organizations’ processes.

Introduction

The Engineering Libraries Division (ELD) Publications Committee reviews all abstracts and papers submitted to ELD within the American Society for Engineering Education (ASEE) for presentation at our annual conferences. In a typical year, close to forty papers may be submitted from among more than 200 ELD members and the overall ASEE membership. Each submitted paper is reviewed by three people from a pool of about thirty ELD volunteers. Papers may be traditional research, case studies, works-in-progress, and more, using quantitative or qualitative approaches. Co-chairs review the abstracts and committee members review the papers, using general guidance from ELD and ASEE’s rubric for “Best Paper” eligibility to evaluate them (see Appendix A: ASEE Draft Conference Paper Rubric). This ASEE rubric has traditionally been used to focus on originality and quality for all submissions, with both reviewers and authors applying it to promote a fair and consistent evaluation of engineering education research.

However, in early 2020, the Co-chairs of the ELD Publications Committee determined, after a virtual meeting with reviewers, that ASEE’s rubric was insufficient for evaluating library research. Past conferences had seen issues such as inconsistent scoring across reviewers and a wide range of quality in author submissions. In evaluating post-conference reviewer feedback, the ELD Publications Committee found that reviewers perceived significant variation in the quality of submitted papers. Authors similarly found reviewer comments to be inconsistent and of little help in revising their papers. Furthermore, the scope, structure, and language of the standard ASEE rubric were unclear, and librarians could not easily apply it to their typical research foci, which differ from those of traditional engineering education research. A new rubric could supplement and strengthen the assessment already in place rather than replace it. Hence, in 2020 the ELD Publications Committee solicited interested committee members and formed a rubric development subcommittee.

In working through this update to the peer review process, one of the largest considerations, besides designing for librarian research, was capturing the diversity of engineering librarians’ backgrounds and experience with publishing. These diverse backgrounds make it more difficult to establish a cohesive and clear rubric or other processes. For example, a STEM librarian with a PhD in a technical field could approach a review differently than a STEM librarian with a humanities undergraduate degree and the traditional terminal library master’s degree (MLIS or MLS). ELD also frequently has first-time authors and reviewers participating in this process, with inexperienced authors in particular being intimidated by the process of paper submission. Overall, this means more care must be taken in developing the peer review process so that it is effective as a learning ground for authors and reviewers alike.

In this paper, we discuss how we updated the peer review process for the Engineering Libraries Division of ASEE, specifically through the creation of a new rubric that replaces ELD’s general guidance and supplements ASEE’s rubric. A brief literature review of the peer review process is also presented, with emphasis on its relation to conference publications in different fields. We highlight the committee discussions that led to ensuring fairness and transparency for authors in the rubric development process. Finally, we present the outcome of this work, our new rubric, which focuses on specific criteria and assessment to ensure the submission and acceptance of consistently high-quality papers.

Background

Peer review dominates the review and evaluation of scientific and academic research (Schröter, Coryn, and Montrosse Citation2008). While in many fields graduate students are given ample experience and mentored through the process of peer review, librarians receive little or no formal training on how to review a document (Akers Citation2017). Librarians may also be familiar with peer review only in the context of evaluating manuscripts for journal publication, without exposure to its application at academic conferences. Though similarities exist between the two, one significant difference is the time constraint inherent in conferences. This limits the opportunity for improvement, revision, and resubmission, so proposals are often simply accepted or rejected (Schröter, Coryn, and Montrosse Citation2008). Additionally, the publications that come out of conferences can sometimes be considered “grey literature,” which does not always follow the same type of review process.

One frequent criticism of the peer review process for conference submissions is the lack of inter-rater reliability and the varying quality of accepted proposals (Anderson Citation2009; Deveugele and Silverman Citation2017). The use of rubrics has been suggested and promoted as a solution to this issue. Rubrics enhance the objectivity of reviewers by providing uniform criteria against which proposals can be measured (Orozco, Barreras, and Hicks Citation2021). They also benefit authors; Howard, Abel, and Madigan (Citation2021) developed a rubric for a nursing conference and provided it to both reviewers and authors. They report that authors received higher scores after implementation of the rubric, while reviewers’ scores showed less spread, suggesting increased agreement among reviewers.

In addition to general support for the use of rubrics, other commentators advocate for rubrics specifically tailored to the types of submissions expected. For example, articles related to education and pedagogy should be judged differently than articles on pure or applied research, as should articles presenting qualitative versus quantitative research (Jordan et al. Citation2021). Specialized rubrics have also been developed for improving faculty presentations (Hayne and McDaniel Citation2013), assessing clinical vignettes (Newsom et al. Citation2012), and evaluating student presentations (Larkin Citation2014).

Commentators in the literature have also offered ample advice on how conference reviewers should act, and the best practices they should follow, when reviewing proposed conference papers (Beckers, Fossum, and Kaefer Citation2018; Bernstein Citation2008; Lubienski Citation2020). Beckers, Fossum, and Kaefer (Citation2018) provide a series of questions that reviewers should consider when reviewing; similarly, although not applied specifically to conferences, Audunson (Citation2004) presents a series of questions for considering the rigor of research in library and information sciences. In both cases, the questions were detailed enough that they could form the basis of an evaluation rubric.

Despite these attempts to categorize the best practices of conference reviewers, peer reviewers’ comments have been observed to be generally unhelpful and even inconsistent (Dobele Citation2015). Gardner et al. (Citation2012) believe that reviewers’ comments should be part of an academic conversation that leads to the inclusion of better presentations and papers. Even where rubrics are presented, reviewers do not refer back to them as frequently as expected (Jolly et al. Citation2011). For rubrics to be successful in the conference setting, reviewers should critically read the rubrics and provide comments and feedback within the framework of the rubric (Orozco, Barreras, and Hicks Citation2021).

Considering the literature, a rubric will likely provide the most utility in a conference setting where it is tailored to the specific subjects of submissions, it is equally available to authors and reviewers, and reviewers understand it well enough to make constructive comments and suggestions within its framework.

Rubric development

Overall design considerations

As discussed by Dawson (Citation2017), rubrics can take on a variety of formats, but generally include: a) criteria to evaluate a particular work, b) a gradation of quality levels with definitions for each, and c) a strategy for scoring. Selection of these and other details that comprise a rubric depends on the purpose and intended use of the instrument. Therefore, developing the ELD rubric required several decisions about its scope and purpose, which in turn informed its overall format. In the following sections we detail our considerations and discussions for the rubric development; the rubric itself can be found in Appendix B: ELD Conference Papers Rubric.

Role of rubric

As previously stated, in the context of peer review, rubrics are developed to establish standards by which papers are judged. Articulation of these standards is intended to assist reviewers in maintaining more consistency in judging submissions, and thereby improve the quality of the paper and/or the information presented at the conference, while reducing frustration experienced by authors relating to inconsistent reviews. In other words, a rubric should clearly reflect what is expected of an author.

A rubric can be designed to provide minimum pass-fail criteria, or can further identify levels of excellence that can be used to distinguish between higher or lower quality work. In our application, the ASEE conference is a “publish to present” event, so all presentations, including podium and poster presentations, are accompanied by a paper that is deposited into ASEE’s document repository, which is discoverable on the open web. Thus, a primary objective of our peer review committee was to identify work that was of suitable quality for publication. With this primary objective in mind, further distinctions of quality were judged to be unimportant. While papers at our conference can be further nominated for special distinctions, including “best paper” and “best diversity paper,” such nominations followed a separate process and did not justify adding complexity to the rubric.

This decision impacted both the structure of the rubric and its scoring strategy. Since the rubric was not intended to “grade” any submission, no scoring mechanism was integrated or implied. Structurally, the single-point rubric, as highlighted in a Cult of Pedagogy blog post (Gonzales Citation2015), garnered a lot of interest from the committee. This type of rubric, with only a single column of criteria, was appealing to members since it does not present several performance levels to choose from, something common to many other rubrics. We concluded that this format would be more helpful for reviewers because it gives them a concise list of expectations to evaluate against rather than graded levels of excellence to choose from. The format also provides an option for them to identify and comment on key problem areas, as opposed to requiring them to come up with their own list of expectations or forcing them to choose from a very long list of descriptions.
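As a rough, concrete illustration of this format, the minimal Python sketch below models a single-point rubric as one stated expectation per row with open comment fields rather than a scale of levels; the criterion wording and field names are our own placeholders, not the ELD rubric’s text.

```python
from dataclasses import dataclass


@dataclass
class SinglePointCriterion:
    """One row of a single-point rubric: a single stated expectation,
    with free-text comment fields instead of graded performance levels."""
    expectation: str                  # the minimum standard, stated once
    concerns: str = ""                # reviewer notes on areas needing work
    evidence_of_excellence: str = ""  # reviewer notes on areas exceeding the standard


# Hypothetical criteria for illustration only (not the ELD rubric's wording)
rubric = [
    SinglePointCriterion("Conclusions are supported by the data and methods described."),
    SinglePointCriterion("Relevance to engineering librarianship is clearly explained."),
]

# A reviewer responds to each expectation with comments rather than picking a level
rubric[0].concerns = "Claims in the discussion go beyond the survey results reported."
for row in rubric:
    print(f"- {row.expectation}")
    if row.concerns:
        print(f"    Needs attention: {row.concerns}")
    if row.evidence_of_excellence:
        print(f"    Excellent: {row.evidence_of_excellence}")
```

The design point the sketch is meant to capture is that the reviewer writes against the expectation itself, rather than selecting among predefined performance descriptions.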

Role of reviewer

While an important aim of the reviewer is to help refine research outputs prior to presentation and publication, equally important, in our estimation, is the role of helping develop research and publication talent from within the ranks of the association membership. This is particularly true in the context of association-sponsored conferences. In our “publish to present” environment, authors are given an opportunity to go back and improve their work, which is not necessarily the case elsewhere, as not all conferences incorporate a peer-review process.

Authors expect reviewers to provide substantive comments that can be used to improve their writing, not just terse statements of acceptance or shortcomings. In other words, while summative decisions are ultimately made regarding what will be accepted, authors expect reviewers to provide formative information to guide improvement prior to final acceptance decisions. In the work of building talent, this is useful both for papers that are accepted and for those that are rejected. In some cases a rejection can force the author(s) to look elsewhere for publication. Here, however, one desired aim of the committee was for reviewers to provide thoughtful feedback that would encourage rejected authors to resubmit their work the following year.

Usability

Returning to the primary purpose of the rubric, the reviewer is first tasked with assessing whether a submitted paper meets a well-defined and consistent standard. To keep this primary role clear, and not confuse it with alternate examples, the committee adopted a two-part format: the first part comprises clear statements of the components of the standard, and the second introduces examples, for reviewers interested in further detail, to help prompt more useful review comments.

Additionally, the committee discussed whether to develop a rubric in the form of a simple checklist, or to develop one that offers more guidance, or even instruction. Though both options offer a systematic way to evaluate submissions, it was noted that a checklist may encourage a more perfunctory evaluation approach that offers no meaningful insight, especially in instances where an author’s work is rejected. The objective of encouraging reviewers to add comments that authors would find useful prompted the decision to avoid a simple checklist.

We therefore expanded the single-point rubric to include descriptive examples rather than gradations of quality levels. This design prompts reviewers to look for, and highlight, elements within the reviewed papers that exemplify different categories of quality, pushing them to provide meaningful feedback to authors. We did this by creating examples of paper elements that “Need Improvement” and that are “Excellent.” The result is an overall rubric structure of pass-fail criteria with concrete examples on either side of the minimum standard.
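Sketched the same way, the expanded structure pairs each pass-fail criterion with a “Needs Improvement” example and an “Excellent” example on either side of the standard; again, the criterion and example texts below are invented placeholders rather than the ELD rubric’s own wording.

```python
from dataclasses import dataclass


@dataclass
class RubricCriterion:
    """A pass-fail criterion with concrete examples on either side of the minimum standard."""
    standard: str                   # the minimum acceptable expectation
    needs_improvement_example: str  # illustrates falling short of the standard
    excellent_example: str          # illustrates exceeding the standard


@dataclass
class CriterionReview:
    """A reviewer's judgment and comment against one criterion."""
    criterion: RubricCriterion
    meets_standard: bool
    comment: str = ""


# Placeholder criterion for illustration only
methodology = RubricCriterion(
    standard="The methodology is described in enough detail to support the conclusions.",
    needs_improvement_example="Survey results are reported without describing how participants were recruited.",
    excellent_example="Instrument, sampling, and analysis steps are described, and limitations are acknowledged.",
)

review = CriterionReview(
    criterion=methodology,
    meets_standard=False,
    comment="Please describe the survey population and response rate.",
)
print("Pass" if review.meets_standard else "Needs revision", "-", review.comment)
```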

Rubric content

Multiple sources were consulted to inform which criteria should be included in the rubric. The committee started with documents already available to ASEE/ELD members, including “Guidance for peer reviewers,” “Annual conference reviewer requirements,” and the “Author Guidelines” updated and provided yearly by ASEE on the annual conference webpages. Committee members also consulted other rubrics, author guidelines, and criteria for assessing the quality of manuscripts from reputable publishers and other professional associations, highlighting the weaknesses and strengths of each.

We then discussed potential criteria considering the objectives of ASEE and the particular needs of ELD. One important consideration was that, because of ASEE’s “publish to present” model, the papers would ultimately be published as-is in ASEE’s repository. The breadth of possible submission types in a conference setting was a similarly important factor. In addition to traditional research articles and case studies (the implicit purview of the ASEE rubric), other formats of interest to practitioners in the field, including works-in-progress and proofs-of-concept, are often submitted. The current ASEE rubric also specifically includes “engineering education” relevance as a key content section, which needs to be adapted to engineering librarianship. This requires a rubric broad enough to address differences in content. Because of this variety, the committee decided to exclude elements that may not be present in all types of papers and to focus on those common to all types.

Criteria for paper evaluation

Sufficient criteria were included to support evaluation of the paper’s key content elements as well as its writing style, although it was agreed that content was more important than language. To emphasize this, the rubric was divided into two main sections, the “Paper Content Areas” and the “Structure & Language Areas,” with the content section presented first.

Categories of focus in the “Paper Content Areas” included originality, methodological rigor, integrity between data and conclusions, and an adequate explanation of relevance to the field. For example, reviewers are prompted by the rubric to consider biases and whether statements are supported by methodology, data analysis, or reported outcomes. Also significant is the inclusion of a statement that “reporting of inconclusive and/or negative outcomes can be instructive,” which seeks to promote sharing of both positive and negative results from which society members may benefit.

The Structure and Language section was guided by a recognition of diversity in writing style and language ability of authors. Thus, writing standards include acceptance of various styles, while at the same time providing criteria for ensuring comprehensibility of the papers. These criteria draw on principles associated with clarity and logic, with a primary consideration of effectively leading the reader to the message of the paper.

In setting the standards for the rubric, it was imperative that they be set at the minimum acceptable level, broad enough to accommodate the variety of submissions commonly received, and clear enough to mitigate any misunderstanding of the expectations. To help reviewers (and authors) in their task, examples were given of paper elements that met or did not meet the expectations laid out in the rubric.

Visibility/transparency of rubric

ASEE consists of over 40 Divisions, each of which may or may not make reviewer and/or author guidelines available to the wider ASEE audience or the nonmember community; where available, these are generally found on division conference webpages, such as ELD’s (Engineering Libraries Division Citation2022). These guidelines can be extremely general, often in the form of a prompt to consider whether the topic falls into a relevant area, with a bulleted list of recommended areas of interest, or a general prompt to evaluate quality, such as this one from the Liberal Education/Engineering & Society (LEES) Division in 2021:

Does the draft hold promise for a quality paper, including locating the paper’s topic in the relevant literature? If it does not hold promise for a quality paper, please indicate why (briefly). … If you have suggestions that would help the author(s) produce a quality paper, please make them. These might include suggestions about sources to consult or the kinds of evidence that would be relevant to supporting the paper’s claims.

Alternatively, reviewer guidance may take the form of short bullet points in categories to be considered, such as originality, relevance, and structure, as in the Engineering Ethics Division. Divisions may also provide different information from year to year, as can be seen from the 2019 LEES Call for Papers, which provides little more author guidance than an explanation of the “publish to present” ASEE conference model and the statement that, “Papers published through LEES in the conference proceedings are typically 10–15 pages long and include a substantial literature review” (Liberal Education/Engineering & Society Division Citation2019).

ELD’s dual aims of refining research outputs and developing talent thus give rise to another design consideration: how to make the rubric transparent and most useful to authors. As stated above, Howard, Abel, and Madigan (Citation2021) found benefits in paper quality when authors were able to see the rubric. We therefore determined to make the rubric visible to authors before they submitted their conference paper drafts, so that they could be guided regarding expectations, best practices, and writing standards. This decision dictated a format that could be easily read and applied; a bulleted list, organized by key writing considerations, was chosen to satisfy this requirement. It also influenced the tone and completeness of the writing, in that the audience for the rubric included both reviewer and author. For example, the opening instructions in the document speak to both author and reviewer.

Rubric application

Like other divisions’ guidance, the rubric was designed to be used only for the initial draft review. Following the rubric, reviewers give formative feedback in the online system, referring to the rubric guidance, and decide among three possible outcomes for a paper: accept, accept with revisions, or reject. If a paper is accepted with revisions, the rubric can be applied again for the second review, at which point the reviewer makes only an accept/reject decision. Once a paper is accepted with no revisions needed, ASEE-wide guidance is then considered if the paper is to be nominated for an award. Otherwise, the review ends after application of the ELD rubric through the maximum of two review cycles.
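This decision flow might be sketched as follows; the outcome labels mirror the three options named above, while the function and variable names are our own illustration and not part of ASEE’s online review system.

```python
from typing import Optional

# Outcomes available to a reviewer at the draft review stage
DRAFT_OUTCOMES = {"accept", "accept_with_revisions", "reject"}


def review_flow(draft_outcome: str, second_outcome: Optional[str] = None) -> str:
    """Sketch of the two-cycle flow: the rubric guides the draft review, and a
    second review (accept/reject only) follows if revisions were requested."""
    if draft_outcome not in DRAFT_OUTCOMES:
        raise ValueError(f"Unknown draft outcome: {draft_outcome}")
    if draft_outcome == "accept":
        return "accepted"  # award nominations then follow separate ASEE-wide guidance
    if draft_outcome == "reject":
        return "rejected"
    # accept_with_revisions: rubric applied again; reviewer now only accepts or rejects
    if second_outcome not in {"accept", "reject"}:
        raise ValueError("Second review must be 'accept' or 'reject'")
    return "accepted" if second_outcome == "accept" else "rejected"


print(review_flow("accept_with_revisions", "accept"))  # -> accepted
```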

Implementation

The Rubric Sub-Committee developed the “ELD Rubric for Conference Papers” over a few months in autumn 2020. The final version was submitted to the Co-Chairs of the ELD Publications Committee in November 2020, in plenty of time for integration into the paper reviewing process at both the draft and final version stages for the summer 2021 conference (i.e., for use after the abstract review stage). It was suggested that a training or orientation session be offered for reviewers to introduce them to the rubric and its intended purpose and application. This hour-long remote session took place in late January 2021. Authors were reminded about the new division-level rubric prior to the paper submission deadline and pointed toward the statement: “This rubric is intended to assist authors and reviewers in ELD throughout the process of preparing submissions for the ASEE Annual Meeting, but is not exhaustive. This rubric supplements existing guidance from ASEE & ELD” (ELD 2021 Author Guidelines).

The roll-out of the new division-level paper evaluation rubric was not as robust as would have been preferred, for two primary reasons: 1) the Covid-19 pandemic, and 2) the early stages of ASEE’s migration from one paper management system to another, to be “turned on” between the summer 2021 conference and a new conference paper cycle beginning in autumn 2021 with the Call for Abstracts for 2022. While it was hoped that pandemic social distancing precautions would no longer be necessary by the original June 2021 conference date, by early spring the event had been postponed to July and then shifted to all-virtual for a second year; paper final version submission and reviewer recommendation dates were delayed accordingly. Amidst the uncertainty about the conference date and modality, reminders to authors and reviewers to apply the new rubric tended to receive less attention, although many did still utilize it. Uncertainty about the features and functionality of the new paper management system, and whether those would affect how, or even whether, a division-level rubric could be applied to the evaluation and feedback process going into the 2022 conference cycle, further reduced the emphasis on requiring the rubric’s use during spring 2021.

Following the summer 2021 conference, anecdotal feedback about experiences using the rubric in either capacity was gathered from authors and reviewers interested in collaborating on this paper. All feedback was positive: respondents appreciated having the additional information in the form of a new rubric, which made the peer review process smoother for authors and reviewers alike. No specific improvements were identified from this anecdotal feedback. Over the next few cycles, it will be possible to gather feedback more systematically to determine whether improvements to the rubric could help achieve the desired outcomes.

Conclusion

The development of the ELD rubric for our peer review process presented in this paper was prompted by our division’s leadership and by dissatisfaction among authors and reviewers alike. With a diverse population to serve, whose members have various backgrounds and levels of experience, we needed more guidance to facilitate an effective peer review process. ELD especially seeks to be supportive of new authors and reviewers, something the old process did not fully accomplish. Our new rubric makes priorities for conference papers clear, setting expectations and giving everyone clear directions to follow for the variety of papers we receive each year.

In reflecting on this process and searching the literature, we found little on updating peer review processes. This may be due in part to peer review being “closed” and not discussed widely, and/or to assumptions that graduate school training equips authors (and reviewers) to know the “standard” in their field. The lack of literature in this area may also be due to organizations not changing their processes or evaluating them at regular intervals to see if they still serve their community.

By sharing our process and new rubric, we seek to alleviate this scarcity and bring more discussion of peer review for conferences into the literature. We encourage readers to consider their organizations and whether members’ needs are being met by the peer review process: Has your peer review process been evaluated recently? Does it serve all populations in your community (graduate students and professionals, practitioners and researchers)? Improving peer review processes reduces review-cycle hurdles and, more importantly, encourages more diverse submissions for every conference while alleviating confusion and indecision for newer writers.


Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed online at https://doi.org/10.1080/0194262X.2022.2067931

References

  • Akers, K. G. 2017. Being critical and constructive: A guide to peer reviewing for librarians. Journal of the Medical Library Association 105 (1):1–3. doi:10.5195/jmla.2017.100.
  • Anderson, T. 2009. Conference reviewing considered harmful. ACM SIGOPS Operating Systems Review 43 (2):108–16. doi:10.1145/1531793.1531815.
  • Audunson, R. 2004. Is that really so? Some guidelines when evaluating research. In IFLA Conference Proceedings, 1–11. Buenos Aires, Argentina.
  • Beckers, G. M. A., M. Fossum, and M. Kaefer. 2018. How to review an abstract for a scientific meeting. Journal of Pediatric Urology 14 (1):71–72. doi:10.1016/j.jpurol.2017.11.007.
  • Bernstein, M. 2008. Reviewing conference papers. Eastgate Systems Inc. Accessed October 13, 2021. https://www.markbernstein.org/elements/Reviewing.pdf
  • Dawson, P. 2017. Assessment rubrics: Towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education 42 (3):347–60. doi:10.1080/02602938.2015.1111294.
  • Deveugele, M., and J. Silverman. 2017. Peer-Review for selection of oral presentations for conferences: Are we reliable? Patient Education and Counseling 100 (11):2147–50. doi:10.1016/j.pec.2017.06.007.
  • Dobele, A. R. 2015. Assessing the quality of feedback in the peer-review process. Higher Education Research & Development 34 (5):853–68. doi:10.1080/07294360.2015.1011086.
  • Engineering Libraries Division. 2022. Conference Information. Engineering Library Division, ASEE. Accessed February 3, 2022. https://sites.asee.org/eld/conference-info/
  • Gardner, A., K. Willey, L. Jolly, and G. Tibbits. 2012. Peering at the peer review process for conference submissions. In 2012 Frontiers in Education Conference Proceedings, 1–6. Seattle, Washington. doi:10.1109/FIE.2012.6462393.
  • Gonzales, J. 2015. Meet the single point rubric. Cult of Pedagogy. Accessed March 8, 2022. https://www.cultofpedagogy.com/single-point-rubric/
  • Hayne, A. N., and G. S. McDaniel. 2013. Presentation rubric: Improving faculty professional presentations: Presentation rubric. Nursing Forum 48 (4):289–94. doi:10.1111/nuf.12043.
  • Howard, M. S., S. E. Abel, and E. A. Madigan. 2021. Communicating expectations: Developing a rubric for peer reviewers. The Journal of Continuing Education in Nursing 52 (2):64–66. doi:10.3928/00220124-20210114-04.
  • Jolly, L., K. Willey, G. Tibbits, and A. Gardner. 2011. Conference, reviews and conversations about improving engineering education. In Proceedings of the Research in Engineering Education Symposium 2011, 834–40. Madrid, Spain. https://opus.lib.uts.edu.au/handle/10453/19220
  • Jordan, J., L. R. Hopson, C. Molins, S. K. Bentley, N. M. Deiorio, S. A. Santen, L. M. Yarris, W. C. Coates, and M. A. Gisondi. 2021. Leveling the field: Development of reliable scoring rubrics for quantitative and qualitative medical education research abstracts. AEM Education and Training 5 (4):e10654. doi:10.1002/aet2.10654.
  • Larkin, T. L. 2014. The student conference: A model of authentic assessment. International Journal of Engineering Pedagogy (IJEP) 4 (2):36–46. doi:10.3991/ijep.v4i2.3445.
  • Liberal Education/Engineering & Society Division. 2019. Liberal Education/Engineering & Society Division (LEES) division call for papers 2019. Accessed February 3, 2022. https://sites.asee.org/lees/annual-conference/2019-conference/
  • Lubienski, S. T. 2020. How to review conference proposals (and why you should bother). Educational Researcher 49 (1):64–67. doi:10.3102/0013189X19890332.
  • Newsom, J., C. A. Estrada, D. Panisko, and L. Willett. 2012. Selecting the best clinical vignettes for academic meetings: Should the scoring tool criteria be modified? Journal of General Internal Medicine 27 (2):202–06. doi:10.1007/s11606-011-1879-2.
  • Orozco, G. S., R. R. Barreras, and R. W. Hicks. 2021. Addressing the gap, advancing the knowledge: Guidance for the abstract reviewer. AORN Journal 114 (4):319–26. doi:10.1002/aorn.13497.
  • Schröter, D. C., C. L. S. Coryn, and B. Montrosse. 2008. Peer review of submissions to the Annual American Evaluation Association conference by the graduate student & New Evaluators Topical Interest Group. Journal of MultiDisciplinary Evaluation 5 (9):25–40.