Editorial

Isn’t it about time to meet DORA?

Given that DORA is rarely mentioned in scholarly journals devoted to the information systems (IS) field, it is likely safe to say that most contributors to, and users of, this and other IS journals are unfamiliar with DORA. Although the same holds for other functional areas of business, the situation is quite different in scientific and medical disciplines. Just maybe, these disciplines are onto something – something that could benefit ongoing development and recognition of IS as a discipline. So, what is DORA? Should we care? Why?

The answers are important for shaping the development/future of IS as a scholarly discipline, encouraging/facilitating innovation in IS research, allowing/fostering research liberty for young IS scholars, and spurring a productive academic career that avoids disillusionment.

Before addressing the questions, think about the state of approaches to evaluating researchers for merit, promotion, tenure, or funding purposes. Disregarding intrusions of organizational politics and personal relationships (which can be quite vexing to those on the short end of power), it seems typical to focus on where a researcher has published his or her work. It is most common for evaluation to focus on placement of articles in journals. This is usually the case for IS, other business disciplines, physical sciences, life sciences, and medical fields.

Simply put, the value of a publication is considered to depend primarily on the journal in which it appears. In other words, the value of a gift is considered to depend on judgments about the wrapping that contains it, rather than the nature or utility of the gift itself (Zhang, Rousseau, and Sivertsen 2017). If it has a particular wrapping, then it must certainly be of the highest value. It follows that, if packaged in a different type of wrapping, that very same gift must be less valuable. Adopting such a method for evaluating a researcher’s work means that an article is judged to have less/more merit if it appears in one journal rather than another. Being in one journal versus another somehow diminishes/improves an article’s merit. The article is perceived as becoming imbued with a halo that a journal exudes, and that halo is seen as defining the article’s value.

Adopting the halo method of research evaluation transforms the problem from one of assessing the merit of individual articles to the task of assessing the merit of individual journals (i.e., the strengths of their halos; the shininess/aesthetics/approval signaled by the wrapping papers). To support the efforts of evaluators, various approaches have been advanced/adopted for settling on the relative merits of journals in a field. Ultimately, it is the evaluator (or evaluator’s superiors) who selects what journal evaluation approach will be used in a halo exercise. Approaches range from largely subjective to largely objective. Comparative examples of various evaluation approaches can be found in a series of quizzes that are applied to the context of IS journals (Chen and Holsapple 2013).

In the subjective case, the degree of strength attributed to a journal’s halo is determined by the evaluator’s (superior’s) vantage point, perspective, interpretations, preconceptions, training, biases, values, and so forth. Examples include relying on tradition, accepting pronouncements by others, or tailoring evaluations to favor journals that specialize in topics of particular emphasis for an institution or funding agency.

In an effort to mitigate drawbacks of subjective methods, more data-driven methods to assess journal merit have been devised, applying various techniques to various kinds of data sets in order to produce:

  • a numeric rating for each journal of interest, reflecting the strength of its halo

  • a classification of journals into tiers, where the journals in a tier have comparable halos, but differ notably from those in other tiers.

Among the various techniques for assigning a number, or relative position, to the members of some set of journals, perhaps the most prominently used is the so-called “journal impact factor” (JIF). Introduced over 50 years ago by Eugene Garfield as an input for librarians’ decisions about journal subscriptions (especially in life science fields), the JIF has since morphed into something quite different. In many quarters, a journal’s JIF has come to be assigned as the value/merit of each and every individual article appearing within a journal’s pages. Despite Garfield’s repeated objections to using JIFs as halo measures to define the value of individual research (e.g., Garfield 1996), the practice has become quite widespread. The IS field has not been immune.
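For readers who have never looked behind the label, the standard two-year JIF is nothing more than a journal-level average of recent citations; its usual formulation is sketched below, where “citable items” is the indexer’s count of a journal’s articles and reviews:

\[
\mathrm{JIF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

Because it averages over a journal’s entire recent output, the figure carries no information about the citations, let alone the quality, of any single article within that output.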

As IS researchers, we aspire to rigor and relevance in what we produce. Ironically, this seems not to be the case when it comes to evaluating IS researchers or their products. Using JIFs to operationalize halo assessments of the merit of an individual’s research products is neither rigorous nor relevant. Yet, researchers are pressured to publish in journals with so-called “impact factors” that exceed some prescribed level. The presumptions are that a JIF is a meaningful indicator of a journal’s “impact” or “quality” and that the journal’s impact factor is somehow a useful gauge of the “impact” or “quality” of an article within its pages. As explained in a wealth of scientometric research literature (e.g., Seglen 1998), the application of JIF in a halo method is highly problematic. Nevertheless, in many quarters, such literature seems to be unknown or, at least, ignored. An evaluator (or superiors) finds it easier and faster to blindly go with a JIF-based halo method, without regard to its flaws or serious consideration of its deleterious effects on individual researchers and their disciplines.

Jeremy Berg, Editor-in-Chief of the Science journals, points out that, despite a:

… lack of discriminating power, JIFs are sometimes (ab)used to judge individual papers or scientists in some institutions around the world. The presumption is that comparison of the JIF of the journal in which a given paper appears with that from other journals or against some other standard provides substantial insight about the impact of that paper. Given the analysis of the citation distributions, this presumption is clearly invalid as a matter of mathematical fact. Thus, JIFs should not be a component of key decision processes such as faculty recruitment or promotion. This concern is independent of other criticisms regarding the robustness of JIFs…. (Berg 2016)

Further, we can ask, if:

… using JIFs to assess faculty is excluded, how should one’s publications be judged? If a numerical metric is desired, the number of citations for a paper can be useful. This is a more direct measure of impact. More subjective measures can also be very important. Opinions about the impact of particular papers or a body of work rendered by qualified scientists in the same or similar fields have traditionally played an important role, and this should continue, ideally with appropriate consideration of potential sources of conscious or unconscious bias from these referees. Who knows? Perhaps those charged with making these important decisions should read the papers themselves, assuming that their area of expertise is close enough to the field under consideration. (Berg 2016)

It is this state of affairs across many disciplines in the physical and life sciences that led to the introduction of DORA: the 2012 San Francisco Declaration on Research Assessment (see http://www.ascb.org/dora/). Its core position is “Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.” DORA is an initiative of, and endorsed by, hundreds of scientific organizations around the world. For further detail about DORA, see the fascinating blog of Professor Stephen Curry (Imperial College, London), who chairs the DORA Steering Committee (Curry 2012).

Among the DORA signatories rejecting journal halo methods in research/researcher evaluation processes is a wide range of organizations, including the:

American Association for the Advancement of Science, European Association of Science Editors, American Society for Cell Biology, European Mathematical Society, Australian Academy of Science, Gordon and Betty Moore Foundation, Wellcome Trust, Howard Hughes Medical Institute, King’s College London, European Association for Cancer Research, European Association of Social Anthropologists, Japanese Biochemical Society, German Life Sciences Association, French Sociological Association, Italian Association of Psychology, Royal Norwegian Society of Sciences and Letters, Open Access Scholarly Publishers Association, and over 400 more.

Plus over 10,000 individuals affiliated with such universities as:

MIT, Oxford, Yale, Michigan, Cambridge, Stanford, Duke, Helsinki, Graz, Pennsylvania, California-Berkeley, Johns Hopkins, Jyväskylä, Harvard, Bern, Queensland, Illinois, Oslo, Utrecht, Max Planck Institute, Geneva, Edinburgh, Salzburg, Wisconsin, Penn State, Dartmouth, Copenhagen, Heidelberg, North Carolina, Vienna, British Columbia, Florida, Einstein Institute, Princeton, Pierre et Marie Curie, Erasmus, Washington, Georgia, Cornell, Kepler, Politecnica de Valencia, Technion, Sydney, Groningen, Lausanne, Stockholm, Lisbon, Toronto, Columbia, Politecnico di Milano, Pittsburgh, Virginia, Emory, Minnesota.

In announcing its endorsement of DORA, Nature (2017) explains that “… skewed distribution of a journal’s citation statistics (by a few very highly cited papers) undermines any fundamental usefulness of the impact factor, and the belief that a researcher’s strengths can be measured by such a statistic is self-evidently absurd.”
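To make the skewness point concrete, here is a minimal sketch using entirely hypothetical citation counts (not data from any actual journal): a handful of heavily cited papers pulls the journal-level average well above what a typical article in the same journal receives.

# Illustrative sketch only: the citation counts below are hypothetical, chosen to
# mimic the skewed distributions described by Berg (2016) and Nature (2017).
from statistics import mean, median

# Hypothetical two-year citation counts for 20 articles in one journal:
# a few highly cited papers and a long tail of rarely cited ones.
citations = [95, 60, 22, 9, 7, 5, 4, 3, 3, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0, 0]

jif_style_average = mean(citations)   # what a JIF-style calculation reports: 10.8
typical_article = median(citations)   # what the typical article receives: 2.0

print(f"Mean (JIF-style) citations per article: {jif_style_average:.1f}")
print(f"Median citations per article:           {typical_article}")
print(f"Articles cited at or above the mean:    "
      f"{sum(c >= jif_style_average for c in citations)} of {len(citations)}")

With these made-up numbers, the journal’s average is 10.8, yet the median article receives 2 citations and only 3 of the 20 articles reach the average; a journal-level mean simply cannot stand in for the merit of any individual paper or its authors.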

According to the giant global publisher, Springer (2018):

The journal impact factor… has frequently been misused. Impact factors are used as a proxy for the quality of a researcher, which can, therefore, influence how a researcher is promoted or whether or not they receive a grant. This is outside the scope of the impact factor—a researcher can publish quality work anywhere, not just in a journal with a high impact factor… Recently, SpringerOpen and BioMed Central signed the DORA….they, too, will do their part to reduce the reliance on impact factor as a promotional tool… the process will be gradual as signing the DORA affects more than 300 journals.

Aside from identifying what to avoid, the Declaration also calls for improved (or best) practices for evaluating scholarly research outputs, and promotion of such practices. Schmid (2017) contends that, by raising awareness of the problem, DORA has stimulated important discussions and:

changes in policy and the design and application of best practices to more accurately assess the quality and impact of our research…. researchers, funding organizations, and academic institutions are developing new, more effective means of assessing the quality of an individual’s research contributions and more rapid and efficient ways to communicate our findings. Whether directly or indirectly attributable to this heightened awareness, several positive changes have occurred over the past 5 years.

She provides a list of specific examples and advises that “….we must continue to question our methods, advocate for sound decision making, protest malpractice, and commit our efforts toward ensuring the integrity and accuracy of research assessment.”

Interestingly, IS-specific organizations and business schools (which inevitably evaluate IS research/researchers) are absent from the long list of DORA endorsers. If discussions of DORA’s positions exist within these communities, they are not found in our journals or conferences. The central theme of this brief editorial is that it is time for the IS community to meet DORA, have a dialog about endorsement, and respond to its call for better ways to evaluate scholarly research outputs.

