Editorial Essay

Bibliometrics and the study of religion\sFootnote1

Scholars in the humanities are increasingly exposed to the use of bibliometrics for evaluating and ranking scholars, publications, and publication venues. Those working in the social sciences, and more so in the natural sciences, have been actively involved with such issues for some time. The first section of this essay suggests that bibliometric measures are inherently biased against work in the study of religion\s, and the humanities and social sciences more generally. In part this reflects a basic tension between the objects of bibliometric methods and the goals of academic scholarship. At the same time, I make a case for the limited value of bibliometrics in making quantitative comparisons within and across clearly delimited disciplinary contexts. In that light, the second section of the essay offers some quantitative data on journals in our corner of academia. These data offer some interesting observations, and, at the same time, they reinforce the need for scholars of religion\s to pay more attention to the value and limitations – the benefits and risks – of bibliometrics.

Bibliometrics is the comparative quantitative analysis of (primarily academic) publications. A wide range of things can be measured: e.g., numbers of documents, pages, or citations; types of publications; numbers and national origin of (co-) authors; number and type of items checked out by library patrons; length of titles in articles; number and size of footnotes in texts; number of downloads from a journal website; number of relevant ‘likes’ on Facebook or mentions on Twitter, etc. Bibliometrics has many applications: e.g., assisting libraries to prioritize acquisitions; providing data for historiography and other studies of the history of scholarship; helping researchers to obtain more (potentially) relevant information in their areas of specialization; and offering limited measures of the relative impact or influence of publications and scholars for the purpose of assessing or ranking these.Footnote2 The latter function has become dominant over the last decades, and the citation count – the number of times that a given publication is cited in others – has become the dominant metric in this process.

The increasing prominence of quantitative assessment of scholarship reflects economic and institutional pressures, especially attempts to rationalize the funding of research and of appointment, tenure, and promotion processes. For example, a recent article in the field of social work proposes ‘a new tool for constructing cases for tenure, promotion, and other professional decisions,’ based on prioritizing high-impact journals, as measured by citation counts (Hodge and Lacasse Citation2011).

My argument is not that bibliometrics is flawed but rather that ideology shapes its evaluative uses in a manner biased against the humanities and, to a lesser extent, the social sciences. Bibliometrics is a fascinating, complex, nuanced, and mature academic field. If, from the early origins of that field, bibliometric specialists had focused primarily on assessing scholarship in the humanities – if the metrics used as putative correlates for scholarly ‘quality’ were derived from more than a century of research and discussion of the best ways to assess what, say, scholars of religion\s do as researchers – then natural scientists would be right to find today's dominant metrics problematic for assessing their work. Given the nature of relations between academia and certain economic, political, and administrative ideologies, the reverse happens to be the case.

A brief overview of bibliometrics

Pioneering works in bibliometrics began to appear in the late 19th and early 20th centuries. Shepard's Citations, published since 1873, listed citations of US legal cases (Garfield Citation1955, 108). Alphonse de Candolle's Histoire des sciences et des savants depuis deux siècles … , published in 1885, used a number of quantitative indicators to compare scientists in 14 European countries and the United States (Szabó Citation1985). Psychologist James McKeen Cattell's biographical directory, American Men of Science, first published in 1906, allowed him to publish statistics on the number and productivity of scientists (Godin Citation2006, 109–111). In 1917, Francis J. Cole and Nellie B. Eales published a rich quantitative analysis of the literature on comparative anatomy from 1543 to 1860, analyzing the growth of the field, national contributions, and correlations with such factors as the influence of key figures and the rise of scientific societies (Cole and Eales Citation1917; see also De Bellis Citation2009, 6). Other early studies aimed to help librarians provide better information and collections of resources (for early examples see Allen Citation1929; Brodman Citation1944; Gross and Gross Citation1927; Gross and Woodford Citation1931; Hackh Citation1936; Henkle Citation1938). In 1934, Paul Otlet – co-founder of the International Institute of Bibliography (1895) – envisioned a distinct field of ‘bibliology’ that would collect and analyze quantitative measures of published output and impact (De Bellis Citation2009, 9–10).

Bibliometrics became more explicitly evaluative in the post-WWII period, reflecting the growing scope and importance of science as well as the perceived need to manage and direct it in light of policy goals (De Bellis Citation2009, 10–17). The Royal Society of Great Britain held a Scientific Information Conference in London in 1948, ‘to examine the possibility of improvement in existing methods of collection, indexing, and distribution of scientific literature, and for the extension of existing abstracting services’ (McNinch Citation1949, 136). Two important Soviet scientometric programs were founded in the late 1950s; contemporary attempts by the US to increase research and development were initially concerned more with funding than with assessing outcomes (De Bellis Citation2009, 12–14).

Eugene Garfield, a New Yorker trained in structural linguistics, was a key figure in the shift to evaluative bibliometrics (De Bellis Citation2009, 32–39). He began working in 1951 on the possibility of using computers to digitize medical indices. He soon published a seminal article that laid the foundations for important later developments and coined the term ‘impact factor’ (Garfield Citation1955; see Garfield Citation2006, 90). Garfield's initial motivation reflected values internal to scholarship: ‘I propose a bibliographic system for science literature that can eliminate the uncritical citation of fraudulent, incomplete, or obsolete data by making it possible for the conscientious scholar to be aware of criticisms of earlier papers’ (Garfield Citation1955, 108). However, he soon turned his attention to the evaluative possibilities of bibliometrics. He founded the Institute for Scientific Information (ISI) in 1960 (renaming and reorienting an earlier organization), and he began a series of pilot projects, including a citation analysis of the Old Testament. ISI began publishing the Science Citation Index (SCI) in 1963, the Social Sciences Citation Index (SSCI) in 1972, and the Arts & Humanities Citation Index (A&HCI) in 1978. ISI was acquired by Thomson in 1992, and Thomson Reuters' Web of Science™ (WoS, formerly Web of Knowledge) is a direct descendant of Garfield's projects.

Garfield and Irving Sher invented the journal impact factor (JIF) in the 1960s as a means of choosing journals for inclusion in the SCI, and it was made commercially available in 1975 (De Bellis Citation2009, 185–196). The JIF is a number calculated for some journals (it does not apply to individual articles or scholars): it is the average number of citations received in a given year by the items that the journal published over the previous two years.Footnote3 As a result of these sorts of developments, the JIF and other bibliometric measures have proven less important for doing scholarship than for assessing and micromanaging it through linking such measures to funding and professional advancement.
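To make the two-year window concrete, the standard JIF of a journal J for a given year y can be written as follows (a formalization of the definition just given, following the conventional Thomson Reuters construction rather than a formula taken from this essay's sources):

\[
\mathrm{JIF}_J(y) \;=\; \frac{\text{citations received in year } y \text{ by items published in } y-1 \text{ and } y-2}{\text{number of citable items published in } y-1 \text{ and } y-2}
\]

A journal that published 50 citable items across 2012 and 2013, and whose items from those two years were cited 100 times during 2014, would thus have a 2014 JIF of 2.0.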

Granted the value of assessing scholarship as well as individual scholars and their publications, the methodological challenge is to select and operationalize relevant criteria. There is much to be said for the traditional view that years of work in a specialized area lead to a certain practical wisdom, a well-honed sense of sources, figures, arguments, rhetoric, and other factors that allow experienced scholars to distinguish between top-notch and mediocre work. However, leaving aside the fact that even senior scholars sometimes disagree in making such judgments, such assessment is not the sort of thing that university administrators and other interested onlookers are likely to understand, trust, or perhaps even respect. For better or for worse, bibliometrics promises a more transparent and universal method of assessing individual researchers, publications, academic units, institutions, publication venues, academic disciplines, and even national research production (Leydesdorff Citation1998, 7–8).

Citation counts are the most prominent tool for attempting to transcend subjective bases for assessing scholarship. To a large extent, this is because citation counts are easier to obtain and use than the alternatives, given that large citation indices are readily available. However, the prominence of citation counts also reflects the fact that other theoretically relevant aspects of the research process are much harder to operationalize.

Broadly, we can distinguish four different aspects of the research process: inputs; activity; outputs; and outcomes (Adams Citation2009). Measuring input (in the sense of the material and institutional factors that make scholarship possible and productive) is difficult and unreliable. For example, department size has been taken as a strong correlate of a range of factors (e.g., funding, staff, and resources) that supposedly result in quantity and quality of research. However, evidence from both the sciences and the humanities shows no correlation between department size and research output (Hemlin and Gustafsson Citation1996, 430; Hicks and Skea Citation1989). Research funding is another input allegedly correlated with the quality and quantity of scholarship. In something of a circular manner, it is often seen as both a reward for and a predictor of high-quality scholarly outcomes. However, the correlation between funding and outcomes is also questionable: ‘Grant acquisition is based on a Matthew effectFootnote4 by rewarding the richly funded researchers and hindering entry or continuous funding for others. For these reasons it must also be doubted that external funding per se is a useful performance indicator’ (Laudel Citation2006, 375). Moreover, the relationship between available funding and top-notch work varies enormously from field to field: philosophers do not require access to particle accelerators to produce groundbreaking research. Funding support for scholars of religion\s also varies widely depending on the type of research that is performed: e.g., work with texts available in one's office vs. fieldwork on another continent.

The second dimension, research activity, is also difficult, if not impossible, to measure. This reflects the variety and nature of activities that constitute academic research. Scholars in different disciplines and sub-disciplines study different things in different ways; the commonalities that are allegedly represented by citation counts do not outweigh the differences. It is similarly difficult to measure outputs and outcomes, because they take a variety of forms, because their appearance and impact are often delayed for many years, because a given outcome might be the result of many different research processes, and because all these factors and others vary greatly across and even within disciplines.

Citation counts have come to stand in, through a sort of methodological reductionism, for the myriad factors that distinguish especially respected, valuable, seminal, groundbreaking, original, innovative, and influential scholarship from work that is less so. It might seem obvious that the peer-review process itself should be taken as the primary measure of quality: the simple fact that a book, a chapter, or an article has been accepted for publication by a reputable academic publisher or peer-reviewed academic journal can count as an indicator of quality (see Engler and Stausberg Citation2010; Stausberg and Engler Citation2013). However, additional criteria are often emphasized, including volume and diversity of output (by type of publication, venue, or subject), total or average number of citations or other references (including, increasingly, social-media mentions), number of uncited publications, quality of publication venues (thus replicating methodological challenges at another level), and, most commonly, JIF and other metrics. Assessing volume of output is problematic because the criteria of quality and quantity are, at best, unrelated and, at worst, inversely related. Diversity is also difficult to assess, and its valuation varies greatly by field and by generation, not to mention running against the ongoing trend toward increasing specialization. The remaining criteria tend to break down to citation counts. Not surprisingly, especially given the wealth of data available from the main citation indices, the citation count has become the most important bibliometric measure for assessing scholarship.

Situating the academic citation itself in its broader context is a useful preliminary to discussing its value for assessing scholarship and scholars. Eugene Garfield's insight that citation analysis has historiographic value was an important one. Bibliometrics can serve not only to assess research output but also to inform research itself. This includes studies of the historical development of fields of knowledge and of the growth of networks of scholars, as well as a variety of more specific claims. The following list (ordered alphabetically by author) offers a sample of claims made in articles published in Scientometrics, the foundational specialist journal in the field, begun in 1978 (De Bellis Citation2009, 15):

  • with the exception of astronomy and physics, there is little support for the common view that multi-authored articles are cited more than single-authored articles (Bridgstock Citation1991);

  • creationists and defenders of evolutionary theory form, predictably, independent networks, with very comparable structures centered on the publications and citations of opinion leaders, but with an interesting phenomenon of timing: the peak number of creationist publications occurred at the same time as the minimum number of evolutionist publications (Cantú and Ausloos Citation2009);

  • articles in the fields of complementary and alternative medicine increased dramatically between 1996 and 2005, falling off slightly after that point, primarily in journals within those areas but with a significant increase of representation in mainstream medical journals (Danell and Danell Citation2009);

  • the example of chemistry in Marseilles, France, in the 1980s illustrates that bibliometrics can track local/regional developments in a scholarly field, correlating these with policy developments (Dou, Quoniam, and Hassanaly Citation1991);

  • bibliometrics offers a window on political developments, e.g., on ‘ethnic cleansing,’ in the case of a marked decline in Croat scientists publishing in Serbia between 1985 and 1996 (Lewison and Igic Citation1999);

  • the rise of Islamic fundamentalism and creationism has had no discernible effect on scientific output in eight Islamic countries (Pouris Citation2007);

  • on average, article titles in the humanities are less ‘informative’ than those in the natural and social sciences (based on the number of ‘substantive’ words, i.e., excluding articles, prepositions, conjunctions, pronouns, and auxiliary verbs) (Yitzhaki Citation1997).

Citation analysis generally takes for granted that the citation is an index, in a semiotic sense. That is, the citation is held to signify the quality of scholarly outcomes, by virtue of a causal relation to the impact or influence of publications: publications cited more often are presumed to be more influential, because a citation itself is taken as a sign that the original publication was read by and influenced the author of the later publication that cited it. However, the assumption that citations are directly related to impact is not a simple one, especially in the case of the humanities. In the first place, although there is some empirical evidence for a causal relation between citations and impact in the natural sciences, little research has been done on this issue in the humanities (Finkenstaedt Citation1990, 414). More fundamentally, the citation is explanandum as well as explanans (Leydesdorff Citation1998). The significance of citations is unclear: cited works are used in a wide variety of ways, impacting the research of others trivially or significantly depending on the case; a citation can index a single fact, a complex claim, an argument, or a broad theoretical framework, among other things (Amsterdamska and Leydesdorff Citation1989). Perhaps most dramatically, one author can cite another explicitly to disagree with a questionable interpretation, to distance new research from older work, or, more generally, to negate the value of a given publication. Self-citations and pro forma citations in literature reviews also introduce complicating factors, as these correlate less obviously with actual influence. In general, a given citation can represent an affirmation of influence and impact, or a mechanical reference that has no correlation with these, or an explicit denial of these. Of course, specialists in bibliometrics attempt to correct for such factors in order to maximize the value of citation counts (Garfield Citation1979). However, even granted more nuanced metrics, this divergence in the significance of citations points to a fundamental disjunction between traditional values of scholarship – prioritizing ‘good’ scholarship – and the citation count as a marker of attention of any sort, whether positive, negative, or neutral.

Citations are not objective, value-neutral, ahistorical signs. The history of citation practices shows great variability: 18th-century science relied heavily upon personal communications; 19th-century references were mainly to authors, not works; citations by article began to increase dramatically after 1910 (Leydesdorff Citation1998, 8–12). The emergence of citation analysis – as a reflexive practice in the generation of knowledge – presupposes a degree of transparency and linearity that reflects a narrow, even naive, conception of the nature of academic knowledge. Most obviously, the basic assumption behind citation counts is that scholarly knowledge progresses in an evolutionary manner, with later work grounded on and emerging from earlier work, minimizing the role of innovation and originality:

Citation analysis reconstructs scientific development in an evolutionary mode with reference to scientific developments which are themselves being continuously reconstructed. All the empirical sciences reflexively rewrite their histories in the light of new evidence. The implied evolutionary understanding of science is reinforced by citation analysis. (Leydesdorff Citation1998, 15)

More fundamentally, evaluative bibliometrics is rooted in a statistical approach to quantifying productivity that is itself a creature of history, having gone through a fundamental shift toward economic thinking in the last 150 years:

The concept [of productivity] came from (social) scientists and their efforts to promote the progress of civilization and the advancement of science. With time, the concept of productivity moved from a conception centered on the science system itself, or the reproduction of men of science and their outputs, to a conception where economic considerations external to the system took pre-eminence. … First used in a purely academic context, the concept of scientific productivity made its way into institutions and politics, owing to the demand of ‘organizations’ for efficiency. (Godin Citation2009, 548, 573)

The historical dimension of citation counts is inseparably linked to certain forms of economic and organizational ideology.

Once we consider the citation as something to be explained, both historically and ideologically, the problematic relation between the study of religion\s and bibliometrics is cast in a starker light. Citation counts and other such measures are of questionable value in assessing research in the field, at least as generally used. They generally stem from the analysis of work in the natural and medical sciences and are fine-tuned to capture the characteristics of those research cultures, which are quite different from those of the social sciences and, even more so, the humanities. As a result, they embody discipline-specific assumptions foreign to the study of religion\s and many other fields. For example, a basic premise is that knowledge progresses primarily on the basis of previous work. This is less the case in the humanities, where innovation, novelty, and individualism of research are valued more highly (Garfield Citation1980, 43). Normative uses of bibliometric measures presume certain views of the relation between knowledge, political processes inside and outside the university system, and economic structures and ideological systems more generally.

Different academic disciplines have different cultures of publication: the number of citations in natural-science and many social-science publications (e.g., psychology and sociology) is much higher than that in humanities publications (see Table 6). Scholars of religion\s, like most researchers in the humanities, publish almost exclusively single-author works; in the natural sciences, and to a lesser extent the social sciences, multi-author works are the norm (Garfield Citation1980, 56). Patterns of collaboration in the natural and some social sciences – including the prominence of supervisors receiving author credit on the work of their graduate students – can lead to higher citation counts for individual scholars. Of course, these biases are unevenly distributed even among humanities and social-science disciplines, which vary in the forms of their research processes and output (Nederhof et al. Citation1989, 433). As scholars of religion\s know well, a similar variation often occurs even within a single discipline. Another complicating factor is the dramatic growth of explicitly inter- and multi-disciplinary work (Braun and Schubert Citation2007).

The increasing prominence of open-access publication in some disciplines introduces another source of variation: articles published in open-access journals have a citation advantage with respect to those published in traditional journals (Norris, Oppenheim, and Rowland Citation2008); ‘There is a clear correlation between the number of times an article is cited and the probability that the article is online’ (Lawrence Citation2001, 521; see De Bellis Citation2009, 291–300). Citations of web-based resources constitute a complex and increasingly important aspect of bibliometrics (Yang, Qiu, and Xiong Citation2010). This is associated with the growing prominence of ‘altmetrics’: e.g., downloads, article views, mentions in social media, etc.Footnote5 To bring this point closer to home, many more readers download specific articles published in Religion when the publisher happens to provide free access to those articles on the journal website. Those articles appear higher on the list of ‘most read’ articles than they would if they had not been made freely available: does this translate into greater influence or not, and how could that be assessed independently?

Citation counts are dependent on the quality of citation indices, a fact that introduces additional limitations and biases. A significant source of bias stems from the fact that, in the humanities, monographs are often considered more important than articles (Garfield Citation1980, 43; Heinzkill Citation1980; Nederhof et al. Citation1989). Research suggests that the share of book publications in the humanities may even be increasing, counting both monographs and collections (Engels, Ossenblok, and Spruyt Citation2012). The three dominant bibliometric tools are very limited in this way. Google Scholar has growing coverage of monographs, making it by far the best of the three, though a still very-far-from-perfect tool for comparison across disciplines. Elsevier's Scopus relies exclusively on journals.Footnote6 WoS is also limited to journals, with even more limited coverage outside the natural and medical sciences. The most striking development in coverage by these citation indices has been the phenomenal retroactive expansion of Google Scholar, though that service still lacks many of the features of the more established databases: e.g., range of metadata; types of search functions; filtering of self-citations; and processes for allowing third-party access to the data (de Winter, Zadpoor, and Dodou Citation2014).Footnote7

In all three of the major citation indices, scholarship in English is vastly overrepresented, counting against scholars who publish and who are cited in non-English languages. A 2006 study found a 20–25 percent over-representation of English-language journals in the Thomson Scientific databases, when compared to the Ulrich global serials directory (Archambault et al. Citation2006). Given its generally greater local and regional focus, scholarship in the humanities and social sciences is more likely to be multi-lingual than scholarship in the natural sciences (Archambault and Vignola-Gagné Citation2004; Nederhof et al. Citation1989).Footnote8 Issues of language aside, researchers in developing countries face a serious disadvantage, because their national journals are often not represented in citation indices (Arunachalam and Manorama Citation1989). Of course, this is changing as the ISI's indices and other databases continue a trend, begun in the 1990s, to slowly shift their focus from American and European journals (Shelton, Foland, and Gorelskyy Citation2009).

Citation counts generally specify a time frame, outside of which citations are simply ignored. Scopus, for example, counts only citations from 1996 onward when calculating certain metrics. One of the more egregious sources of distortion in bibliometrics stems from the unjustified emphasis often placed on the journal impact factor (JIF). Specifically, the JIF counts only citations to the articles that a given journal published in the previous two years. The JIF's two-year window for citations is clearly more appropriate for some disciplines than others. Even in the natural sciences, the peak of citation in some disciplines occurs after the two-year mark. The humanities in particular rely less on recently published journal articles (Heinzkill Citation1980). More generally, humanities disciplines have much greater recourse to older, classic, and even ancient works (Garfield Citation1980, 42).

Another problem with the JIF is the prominence of the Matthew effect (see n. 4 above). Certain journals receive more citations for apparently no other reason than the fact that they receive more citations. A recent study looked at over 4500 pairs of ‘duplicate’ articles published in journals with different impact factors: ‘By construction, these pairs of papers are of the same “quality” and any significant difference in citations to duplicates must be attributed to the journal itself’ (Larivière and Gingras Citation2010, 425).Footnote9 On average, the versions published in the journals with a higher JIF were cited twice as often. Though, ultimately, the JIF of a given journal is a function of the most cited of its individual articles (Garfield Citation1973), there is clearly a feedback loop in the opposite direction.Footnote10

Experts in bibliometrics attempt to correct, of course, for differences in factors such as discipline and scholarly trajectories. Various forms of statistical normalization are used in an attempt to create common ground for comparison. Equally obviously, these approaches face challenges, not least the difficulty of choosing an appropriate level of normalization (e.g., should one prioritize categories of publication venue, broad academic areas, or narrow fields of specialization?) (Adams Citation2009, 24, 29–30). In addition, the quality of work of an individual researcher must be measured relative to some larger group, yet the choice of population will have a strong impact on the resulting assessment: department, faculty, or institutional colleagues; members of a sub-field, field, or broad academic area; scholars in all fields at a national, regional, or global level, etc. Bibliometric measures can be used to make comparisons at all these and other levels. However, the mathematical task of making comparisons across time and across academic disciplines requires that we privilege some level of comparison in order to normalize the data. The humanities are often disadvantaged by the choices made in this process.
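One common form of such normalization divides an item's citation count by the mean count for a chosen reference class, e.g., items published in the same field and year. The following minimal Python sketch illustrates the idea only; the fields, years, and citation counts are invented, and the choice of reference class is precisely the contested decision discussed above.

```python
from statistics import mean
from collections import defaultdict

def normalized_impact(papers):
    """Divide each paper's citations by the mean citations of all papers
    in the same (field, year) reference class."""
    baseline = defaultdict(list)
    for p in papers:
        baseline[(p["field"], p["year"])].append(p["citations"])
    means = {key: mean(vals) for key, vals in baseline.items()}
    return [
        dict(p, normalized=p["citations"] / means[(p["field"], p["year"])])
        for p in papers
    ]

# Invented example: the same raw count means different things in
# different citation cultures.
papers = [
    {"id": "a", "field": "cell biology", "year": 2012, "citations": 40},
    {"id": "b", "field": "cell biology", "year": 2012, "citations": 80},
    {"id": "c", "field": "religious studies", "year": 2012, "citations": 4},
    {"id": "d", "field": "religious studies", "year": 2012, "citations": 8},
]
for p in normalized_impact(papers):
    print(p["id"], round(p["normalized"], 2))  # 'b' and 'd' both score 1.33
```

On this construction, a humanities article cited eight times can score the same as a natural-science article cited eighty times; everything hangs on how the reference classes are drawn.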

There are, of course, a wide variety of bibliometric measures, many of these being proprietary (and worth substantial sums of money to the companies who own popular modes of quantifying the intangible values of scholarship). A partial list of other measures, apart from a straightforward summative citation count and ISI's JIF, includes the following:

  • h-index (the highest number of publications – by a scholar, in a journal, in a department, in a discipline, etc. – that have been cited at least that same number of times; see the sketch following this list)Footnote11;

  • eigenfactor (which excludes self-citations and uses network theory to weight citations by the prestige of the journals in which citations appear);

  • immediacy index (another ISI metric which focuses on citations in the year of publication);

  • cited half-life (another ISI metric, of more descriptive than evaluative value: the median age of the articles cited in a given year, i.e., a value of 5 would indicate that, of this year's citations to all articles ever published in a given journal, half are to articles less than five years old and half to articles more than five years old);

  • SCImago Journal Rank (SJR) (based on Scopus data and weighting journals like the eigenfactor);

  • Scopus' Source Normalized Impact per Paper (SNIP) (weighting citations relative to the total number of citations in a subject field, which is problematic where a field, like the study of religion\s, is not recognized by Scopus).
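As a concrete illustration of the first of these metrics, here is a minimal Python sketch of the h-index and of the five-year variant (the h5-index used by Google Scholar and cited in the tables below); the publication years and citation counts are invented for illustration.

```python
def h_index(citation_counts):
    """Largest h such that h items have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def h5_index(items, current_year):
    """h-index restricted to items published in the five complete years
    before current_year (the window used for Google Scholar's h5-index)."""
    window = range(current_year - 5, current_year)
    return h_index([cites for year, cites in items if year in window])

# Invented (year, citations) pairs for the articles of one journal.
articles = [(2013, 12), (2013, 3), (2012, 7), (2011, 5), (2010, 1), (2009, 9), (2004, 30)]
print(h_index([cites for _, cites in articles]))  # 5: five items have >= 5 citations
print(h5_index(articles, current_year=2014))      # 4: only items from 2009-2013 count
```

Computed over all of a journal's articles, this yields the journal-level figures discussed below; computed over one scholar's publications, it yields the individual h-index.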

The expert literature on bibliometrics is replete with arguments over the advantages and disadvantages, strength and weaknesses, of these and other metrics.

In sum, citation counts and other bibliometric measures address only one dimension of scholarly work – outcomes – and they offer only a limited, imprecise, and in some respects distorted measure of quality in that limited area. In addition, as generally used, they are biased in favor of the natural sciences and against the humanities, with a related tension in the social sciences. Those who consider bibliometrics to offer a reliable, transparent, objective, or accurate measure of the quality of research and scholarship are simply mistaken. Discussions of bibliometric measures are abundant, yet research has so far failed to address a key issue: ‘Whereas all ranking systems implicitly assume that rankings-based competition motivates academics to produce more and better scholarship, no one knows if such narrowly defined competition actually fosters or inhibits good scholarship’ (Adler and Harzing Citation2009, 92). One key reason why scholars of religion\s should learn more about bibliometrics and its uses is simply because administrators and policy makers are both attracted to these sorts of quantitative measures and apparently little interested in the knowledge that would be required to use them properly, i.e., with an eye to the distortions and limitations of bibliometric measures as they are used to assess the research productivity of university faculty in so many different fields.

Scholars of religion\s should be more aware of the extent to which our professional lives are increasingly assessed using bibliometrics, and we should ask whether the resulting measures are accurate and fair as generally used. There are strong reasons to answer ‘no,’ insofar as metrics optimized for assessing work in the natural sciences are applied to work in the social sciences and humanities (see van Leeuwen Citation2006, 151–152). Some argue that the use of bibliometric measures to assess scholarship has gone too far, even in the natural sciences:

Measurement of scientific productivity is difficult. The measures used … are crude. But these measures are now so universally adopted that they determine most things that matter: tenure or unemployment, a postdoctoral grant or none, success or failure. As a result, scientists have been forced to downgrade their primary aim from making discoveries to publishing as many papers as possible – and trying to work them into high impact-factor journals. Consequently, scientific behaviour has become distorted and the utility, quality, and objectivity of articles have deteriorated. Changes … are urgently needed … (Lawrence Citation2008, 1; see Adler and Harzing Citation2009)

Empirical research supports the claim that, when subject to bibliometric investigations, scholars do in fact alter their publication practices (Michels and Schmoch Citation2014). When quantitative metrics are over-emphasized as a measure of quality, important aspects of scholarly work can suffer: e.g., small academic conferences, where important networking takes place, can come under pressure as participants prioritize the quest to publish in high-impact journals or, at least, attendance at higher-profile conferences (Henderson, Shurville, and Fernstrom Citation2009).

Several distinctive characteristics of the humanities call into question the application to them of a model designed for assessing academic production in the sciences. In an article specifically addressing the place of evaluative bibliometrics in the humanities, Anton Nederhof (Citation2006) lists several of these characteristics (some already noted above):

  • Greater national and regional distinctiveness (resulting in more citations of work closer to home, though this applies to some disciplines more than others [see Nederhof et al. Citation1989, 433]);

  • Greater emphasis on books (de-emphasizing citation counts limited to journals);

  • Slower pace of theoretical development (emphasizing the value of citing older, ‘classic,’ works);

  • Less reliance on team research and multi-author publications (resulting in fewer publications per scholar and less distribution of citations of a single publication to many scholars);

  • Greater proportion of publications for a general audience (and so not indexed in the main databases and not cited as scholarship) (see Nederhof et al. Citation1989, 427–428).

In broad terms, there are two main problems with bibliometric measures. They do not measure as much as many people think: their correlation with quality of scholarship is not strict. And they have limited value for comparing scholarship across disciplinary boundaries, especially across broader meta-field boundaries such as that between the natural sciences and the humanities. Great care is required when generalizing.

Bibliometrics is probably the most useful of a number of variables that could feasibly be used to create a metric of some aspect of research performance. … These data have characteristics (particularly in terms of the publication and citation cultures of different fields), which means that they must be interpreted and analyzed with caution. They need to be normalized to account for year and discipline and the fact that their distribution is skewed. (Adams Citation2009, 22)

The key question is whether scholarship in the humanities – and at the ‘softer’ end of the social-science spectrum – will be stretched on a procrustean bed of inappropriate metrics or whether more nuanced and appropriate measures will be used (Finkenstaedt Citation1990, 410). In light of the increasing recognition that standard citation metrics are unsatisfactory for the humanities, more nuanced measures are being proposed: e.g., widening the net beyond the usual sources of citations, looking at lifetime citation data, taking account of library holdings, and including productivity indicators such as pages published per year (Linmans Citation2010). A key aspect of developing better metrics for the humanities and social sciences is the development of more comprehensive bibliographic coverage in existing or new databases, and a prerequisite to this is a fuller understanding of the publication channels used in the relevant fields (Sivertsen and Larsen Citation2012).

Given these sorts of weaknesses and limitations, why would anyone give any weight at all to bibliometric measures in assessing scholarship, especially in the humanities? Nederhof's conclusion (2006) is that many existing bibliometric measures can be useful for assessing scholarship in the humanities and social sciences, but that these must be extended and used with greater sensitivity to the specific characteristics of research in distinct disciplines. Despite their limitations and biases, citation counts and other quantitative measures of scholarship have their value and their place, for certain purposes, within narrowly delimited areas of comparison. More specifically, using quantitative data to make comparisons within the study of religion\s itself can offer limited support for carefully qualified claims. In this light, the following section of the paper looks at selective quantitative data in order to draw out some general points about journals in our discipline.

Bibliometrics and journals in the study of religion\s

Quantitative indicators underline certain significant features of the academic study of religion\s. The story they tell is partial, limited, and not necessarily reliable. They are dependent upon the quality of the underlying citation indices, and they embody distorting assumptions when applied without due attention to distinct disciplinary cultures. However, the numbers make some important points, or at least offer empirical evidence for general impressions that many of us may already have.

As noted above, the three most prominent tools for providing bibliometric data – Google Scholar, Scopus, and the WoS – embody certain assumptions that lead to biases against work in the humanities. More fundamentally, their coverage of publications directly relevant to the study of religion\s is limited. This is primarily due to their focus on the natural and medical sciences. However, it also reflects, in part, the smaller-scale and less professional nature of many humanities journals. For example, journal impact factors are not calculated for journals that publish their issues late. With little risk of overstating the case, industrial measures of productivity are not well suited for artisan work.

In addition, ‘religion’ or ‘religious studies’ does not constitute a classification in Scopus or WoS. This makes it difficult to normalize data with reference to our discipline. Google Scholar has ‘Religion’ as a category, but the list mixes the study of religion\s and theology. This is not a trivial issue of intra-disciplinary boundary policing. Including theology drags the numbers down for the study of religion\s. If we are going to use bibliometrics to assess journals and scholars by field, we need to get the fields right. An inspection of the 20 ‘Religion’ journals with the highest citation counts shows that the top social-scientific journal (JSSR) has an h5-index of 24; the highest general study of religion\s journal (Religion) has an h5-index of 12; the highest theology journals, both with social-scientific leanings (Journal of Psychology and Theology and Pastoral Psychology), have an h5-index of 9; no straight-ahead theology journals make the list at all.Footnote12 One of my main claims in this essay is that bibliometrics is valuable, but only when the metrics used are sensitive to disciplinary differences. This presupposes the accurate identification of disciplines. The big citation indices would apparently benefit from consulting more with disciplinary experts, before proceeding to lump apples and oranges together.

The three main databases differ widely among themselves.Footnote13 Google Scholar has the broadest coverage. Scopus is also very usefully inclusive and offers a much wider variety of tools for comparing journals and visualizing the results. WoS, on the other hand, has limited data on journals in the study of religion\s. As a database, WoS includes a large number of journals in the humanities and social sciences, including as it does ISI's three large citation indices, the SCI, SSCI, and A&HCI. However, only journals that are assigned impact factors (JIFs) return results in the Journal Citation Reports (JCR). That is, useful quantitative data for comparing journals is limited to that shorter list of journals. The following is the complete list of relevant journals that I have been able to find in JCR: the British Journal of Religious Education; International Journal for the Psychology of Religion; Journal of Psychology and Theology; Journal for the Scientific Study of Religion; Review of Religious Research; Social Compass; Sociology of Religion; and Zygon. This severely limits the value of WoS for comparative analyses of journals in our discipline. Taken as a group, the three big databases have significant gaps. Some journals are ignored by all three: e.g., Journal for Cultural and Religious Theory; Journal of Religion & Film;Footnote14 Journal of Religion & Society; Journal of Ritual Studies (notwithstanding a single article in Scopus); Marburg Journal of Religion; Revue d'histoire et de philosophie religieuses (notwithstanding four articles in Scopus); Studi e materiali di storia delle religioni; and Zeitschrift für Religionswissenschaft. Depending on the database that is used, various, sometimes very important, study-of-religion\s journals may simply be left out of bibliometric analyses.

The limited inclusion of non-English-language journals exacerbates this problem. In part, this reflects the fact that the citation databases prioritize, as a matter of policy, the inclusion of journals with high citation rates, and these are primarily English-language journals (see Table 2). To give an example, a search of the main Brazilian journals in the study of religion\s finds only two, both covered exclusively by Google Scholar: Estudos de Religião and Horizonte. The following are ignored by all three of Google Scholar, Scopus, and JCR: Ciencias Sociales y Religión/Ciências Sociais e Religião; Comunicações do ISER; Debates do NER; Numen [Juiz de Fora]; PLURA: Revista de Estudos de Religião; Religião e Sociedade; and Revista de Estudos da Religião (Rever). At the same time, several broader Brazilian social-science journals are covered by both Google Scholar and Scopus: e.g., Estudos Avançados; Novos Estudos Cebrap; and the Revista Brasileira de Ciências Sociais. The limitation of the main citation indices to largely English-language journals has led to some efforts to create alternative, regionally oriented indices (Nederhof Citation2006, 91). That said, national coverage of journals in Scopus and WoS can be more complete, e.g., in Slovenia (Bartol et al. Citation2014). More specifically, recent policy changes at the large databases have resulted in greater inclusion of journals from Latin America and the Caribbean, though primarily in the natural sciences: e.g., WoS increased its coverage from 69 journals to 248 between 2006 and 2009 (Collazo-Reyes Citation2014). Coverage in Scopus remains much better than in WoS (Santa and Herrero-Solana Citation2010). (See Table 2 for data on citation counts by language of journal.)

Despite these limitations, bibliometrics can still provide very useful information on different aspects of journals in and directly related to the study of religion\s. Comparing the national origins of authors published in a selection of journals offers a useful overview of trends in internationalization. Table 1 provides percentage figures for contributions from a range of countries for 15 journals between 1996 and 2013. (See the Appendix for title abbreviations.) These journals represent an informal sample only. To make the table less cluttered, only countries providing at least 1 percent of the (presumably corresponding) authors for these journals are included (the same holds for the means by country). Authors from a wide range of other countries published in these journals, but those countries did not make the 1 percent cut-off. In general, journals without complete data for that period were omitted in order to preserve a more equitable basis for comparison. Data is incomplete for different reasons. Some journals have significant gaps in their coverage (e.g., Scopus has no data for Numen from 1996–2006, nor for Zeitschrift für Religions- und Geistesgeschichte (ZRGG) from 2000–5); these were omitted (country data for ZRGG was additionally incomplete). Some journals lack earlier data (e.g., Scopus data begins in 2000 for Archives de sciences sociales des religions, in 2001 for Revue d'Histoire Ecclesiastique (ending in 2012), in 2002 for The Journal of Feminist Studies in Religion (missing 2009 and with sparse country information), and in 2009 for Nova Religio); the first two were included in Table 1 so as to include French journals (acceptable given the presentation of percentages, not absolute values); the latter two were omitted. I omitted journals founded more recently than 1996, e.g., Material Religion; The Journal of Religion in Europe; and Culture & Religion. Of course, these decisions necessarily omit interesting details, as for example the notably international, primarily European, nature of Numen and the parochially German nature of ZRGG.

Table 1. Country of origin of published documents (by corresponding author) in selected journals by percentage (showing only country and mean values of 1 percent or over).
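As a minimal sketch of the kind of aggregation that underlies Table 1: the records below are invented stand-ins for a database export (Scopus was the actual source), and the cut-off mirrors the 1 percent threshold just described.

```python
from collections import Counter

def country_percentages(records, journal, cutoff=1.0):
    """Percentage of documents per corresponding-author country for one
    journal, keeping only countries at or above the cutoff percentage."""
    counts = Counter(r["country"] for r in records if r["journal"] == journal)
    total = sum(counts.values())
    return {
        country: round(100.0 * n / total, 1)
        for country, n in counts.most_common()
        if 100.0 * n / total >= cutoff
    }

# Invented stand-ins for one journal's records in a citation database.
records = [
    {"journal": "Religion", "country": "USA"},
    {"journal": "Religion", "country": "USA"},
    {"journal": "Religion", "country": "UK"},
    {"journal": "Religion", "country": "Israel"},
]
print(country_percentages(records, "Religion"))
# {'USA': 50.0, 'UK': 25.0, 'Israel': 25.0}
```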

Three interesting points emerge from the data in Table 1, keeping in mind that the findings have little value in absolute terms (the figures are completely dependent upon the particular sample of journals; only the relative prominence of different countries has some suggestive significance). First, given the disproportionate presence of English-language journals, it is not surprising that the top three countries – averaged across this particular sample of journals – were the USA (50%), Canada (10%), and the UK (9.7%). Canada and the USA are the only two countries represented in all these journals (in absolute terms, not just at the 1 percent level or higher). (This reflects more general patterns: in 2000, for example, the USA was responsible for 55.5% of scholarly articles globally, followed by the UK at 13.8% and Canada at 5.8% [Godin Citation2002, 5].) France was the fourth most prominent country in our sample (7.2%), reflecting in large part the inclusion of two French journals.

Second, the two ‘national’ North American journals are notably parochial, with a high percentage of scholars from their own region. A full 82.3% of documents published in the Journal of the American Academy of Religion (JAAR) were by Americans and 6.4% by Canadians, with only two other countries represented above the 1% level. In the Canadian journal, Studies in Religion/Sciences religieuses (SR), 79.3% of contributions were by Canadians and 9.8% by Americans, with only two other countries represented above the 1% level. This parochialism appears much less in European journals: e.g., Archives de sciences sociales des religions (from 2000–13) had 57% French authors, but at least 1% of the journal's authors submitted from each of ten other countries; Revue d'Histoire Ecclesiastique has 26% from France, with nine other countries represented at the 1% level or higher. Method and Theory in the Study of Religion (MTSR) is markedly more international than JAAR or SR, despite its ‘North American’ status as the unofficial publication venue of the North American Association for the Study of Religion (NAASR): 59% Americans; 10.9% Canadians; and ten other countries represented above the 1% level.

Third, different journals have different ‘personalities’ in terms of the countries and regions from which they tend to draw authors. A number of countries are represented in only one or two journals (something also true in the fuller data, including more countries with numbers under 1 percent). This sort of idiosyncratic connection between journals and scholars from different countries reflects a variety of factors. Sometimes a clear causal connection is present: e.g., the presence of Brazilian scholars in Archives de sciences sociales des religions and Social Compass reflects in part the central role that French scholars, especially Roger Bastide, played in the establishment and consolidation of the social-scientific study of religion at the Universidade de São Paulo. The national origin of the journal editors is an important factor. In other cases, the reasons are less easy to pin down: e.g., in recent years this journal, Religion, has been honoured to publish the work of a number of Israeli scholars studying religion in Israel.

As a corollary of this emphasis on the inclusion of English-language journals in the main citation indices, citation counts for journals published in other languages are markedly lower than for those published in English (Table 2). In part this reflects the status of English as the lingua franca of global academia.Footnote15 Scholars from non-English-speaking countries often seek to publish their work in English-language journals. Of course, they do so in part because of the higher citation counts for these journals, which could result in a feedback effect, further dampening citation counts for journals published in other languages. This is especially the case in the sciences: all of the top 20 English-language journals are in the natural or medical sciences. This is not the case for the lists in other languages.

Table 2. Range and mean h5-indices for the 20 ‘top publications’ in selected languages, in descending order by mean.

Scopus also provides information on the types of documents published in a given journal. We performed a direct count of documents in five journals over a three-year period (Table 3).Footnote16 The total document counts for the three-year period in Scopus were lower than our manual counts, sometimes significantly. In three of the five cases, the agreement was good (lower by only 1.2%, 1.5%, and 4.8%). In two cases, there was a significant divergence: over the three-year period, the Scopus figures were lower by 23 documents (25.3%) for MTSR and 17 documents (16.0%) for Religion. The extent to which this sort of divergence might extend beyond our small sample remains an open question. Some of the difference in figures in Table 3 can be accounted for by different ways of categorizing documents. (Scopus does not index book reviews; many of the 86 documents categorized as ‘reviews’ in Scopus are likely accounted for by review essays and contributions to review symposia, which we counted as articles.) The two counts are the same in both number and categorization in some years, e.g., for 2011 in Numen, and the same in number, though different in categorization, in others, e.g., for 2011 in History of Religions. Here, the issue is clearly one of categorization and nothing more. However, categorization differences are not sufficient to explain the divergence in absolute counts, especially in the cases of MTSR and Religion. The ‘TOT’ figures given for the Scopus data in Table 3 are not the sum of the three categories of documents listed (‘Articles,’ ‘Editorials,’ and ‘Reviews’); they are the total ‘Document results’ returned for the year, including other types of documents. These totals should thus include all citable document types, regardless of whether the Scopus categorization matches ours. The divergence thus remains mysterious and somewhat troubling. It is one thing to have the same total number of items distributed differently by category; it is something else to omit a significant number of documents. If these missing items, all very citable publications, were simply omitted from Scopus, this would be a significant problem, raising the possibility that Scopus citation counts for affected journals could be significantly lower than the real values. If the missing items were included in the citation count and omitted only in the document-type data, this would be of relatively little significance. Even if there were an outright error in total document counts, this would have no effect on most, but not all, citation metrics.Footnote17

Table 3. Publications in selected SoR journals by type (2011–2013). Columns of Scopus data indicated by ‘(Sco)’.

Table 4 provides Scopus data on absolute numbers and percentages of articles, reviews, and editorials for all the journals in Table 1 for which complete data was available from 1996–2013 (this criterion was required to maintain comparability between absolute numbers of documents published). These figures must be taken with a grain of salt, as they may be low to varying extents if the results of our manual count were to hold more widely (Table 3). However, their general distribution allows for some useful observations all the same.

Table 4. Documents by type in selected journals 1996–2013, arranged by total number of items in descending order.

Four points emerge from Table 4, even granted this qualification. First, journals vary enormously in the number of documents that they publish (which will impact the numbers of pages published).Footnote18 In part, this clearly reflects the number of submissions received and the number of issues published annually. Interestingly, there is no straightforward correlation between the number of items published and the impact of journals as measured by citation counts (see Table 5), nor between items published and the age of journals. Second, the top journals, by number of articles published, are not generalist journals in the study of religion\s, but are in the social sciences, or philosophy/science studies (Zygon) or, at least partly, in theology (JAAR). As we will see below, the same general pattern is visible when ranking journals by citation counts (Table 5). Third, articles constitute the majority of items published. This must be qualified in light of the fact that Scopus simply ignores book reviews. What it categorizes as ‘reviews’ would appear to be review essays and contributions to review symposia. Accepting that categorization and adding back in the actual book reviews, our three-year count of documents in five journals shows roughly equal numbers of articles and reviews (see n. 17). Fourth, journals vary considerably in the number of editorials that they publish.

Table 5. Citation rankings of selected journals.

The heart of bibliometrics, as used to assess scholarship, is the citation count. In order to illustrate the value of such numbers for making limited comparisons within a discipline, Table 5 supplies data comparing a set of journals in and related to the study of religion\s.Footnote19 The metric used here is the h-index, a prominent measure of impact that goes beyond simply adding up citations to give a sense of how many influential, i.e., often-cited, articles have been published in a journal.Footnote20 The top 100 journals across all disciplines each have an h5-index of 102 or higher, as ranked by Google Scholar.Footnote21 All but three are in the natural sciences and medicine.Footnote22 The top-ranked journal in the world is Nature, with an h5-index of 349. PLoS One, the major open-access journal in the natural sciences, has an h5-index of 131.

Four points emerge from Table 5. First, as a general rule, articles published in journals dedicated to the study of religion\s are not cited as much as are those in some other disciplines (see Table 6). The highest h5-index for any journal devoted to the study of religion is 24. The vast majority have an h5-index below 10. Second, the journals with the highest h5- and h-indices are all psychology/sociology of religion journals. This reflects differences between the social sciences and humanities, e.g., citation cultures, patterns of co-authorship, etc. Third, Religion has the largest number of often-cited articles among general journals in the study of religion\s. Fourth, this sort of citation data can reveal patterns that raise questions worth further investigation. For example, the much higher h- than h5-indices in the cases of the Journal of Contemporary Religion, the Journal of Religion in Africa, and History of Religions suggest that the impact of these journals has diminished, when we compare the previous five years to the full range of data extending back beyond that period. That is, it appears to be the case that these journals used to get more citations than they have been getting in the last five years. At the same time, the fact that the Google Scholar citation base is larger than that of Scopus, e.g., including citations from books, which represent an especially relevant source in the humanities, leaves any such conclusions tentative.

Table 6. Range and mean h5-indices for the 20 ‘top publications’ in selected disciplines, in descending order by mean.

Google Scholar publishes lists of the 20 ‘top publications,’ as measured by h5-index, by field (Table 6). This allows us to compare the study of religion\s, in very broad terms, to other disciplines. The h5-index of journals in the study of religion\s is quite comparable to that of journals in history and English literatureFootnote23; numbers for journals in the social sciences are higher, and those in the natural sciences higher again. The mean h5-index for the top 20 publications in ‘Social Sciences (General)’ is 38, and for those in ‘Humanities, Literature & Arts (General)’ it is 17.5. This general trend might also explain more localized differences: e.g., why journals in the study of religion\s (a discipline that straddles to some extent the humanities/social sciences divide) tend to have a higher h5-index than those in history, English, or film studies; why journals in the study of religion\s have a lower h5-index than journals in gender studies (which leans even more toward the social sciences); and why sociology journals tend to have a higher h5-index than anthropology journals. Arguably, philosophy journals tend to have a higher h5-index than journals in the study of religion\s due to the culture of argumentation in that field, in which scholars develop their positions in explicit contradistinction to those of others in the discipline. Table 6, along with such interpretive points, underlines the need to interpret citation counts in the context of a variety of factors that shape disciplinary identities.

Conclusion

It is easy to understand why citation counts are emphasized as an effective means of comparing academic work across disciplines and around the world. The numbers are easily available and they bear at least some degree of causal relation to intangible scholarly values such as quality and influence. As always, however, the devil is in the details.

Citation counts are seldom used in their brute form, nor should they be. They are elaborated into a variety of complex metrics – JIF, h-index, SNIP, eigenfactor, etc. – in an attempt to take account of various supervening factors. Many of these metrics were developed explicitly for assessing work in the natural sciences. For that reason and others, they tend to embody various assumptions that result in a bias against the humanities and, to a lesser extent, the social sciences. Clearly, it is inappropriate to assess scholarly outputs in the humanities on the basis of quantitative measures developed for assessing work in radically different fields. Bibliometric measures can offer valuable, if partial, windows onto scholarly production in different disciplines, but only if the specificities of those disciplines are taken into account.

Academic disciplines have distinct social and cultural features (Engler and Stausberg Citation2011, 129–134). No quantitative metric, taken in isolation, can ever capture these subtle but essential differences. It's easy to compare apples and oranges; both are, after all, types of fruit; comparing grapples and foragers is a subtler task. In addition, our comparative analysis of documents in a small sample of journals, discussed above, suggests that – in at least the case of one type of Scopus data, for at least some journals, and for reasons that are not entirely clear – some published and citable documents are missing from key citation indices. These points all underline the wisdom of taking bibliometric measures into account as only one factor among others in assessing scholarship.

Those who take it upon themselves to make comparative assessments of scholarship should avoid the temptation to become so entranced with the slickness of their measures and scales that they overlook relevant differences between the things being compared. This is not to suggest that bibliometric measures are inherently flawed: far from it. Scholars of bibliometrics have provided a wide range of sensitive metrics that take account of a variety of relevant factors, including types of publications, disciplinary differences, and career trajectories of individual scholars. A large part of the problem is the fact that those who use such measures in administrative contexts tend to use weak metrics (like the JIF) in a blunt manner (comparing too broadly across disciplinary boundaries), rather than paying attention to the state of the art in bibliometrics. The field is extremely important and its many metrics, with their modifications and qualifications, can be of great use for assessing scholarship. Bibliometric specialists who work on issues of disciplinary difference should be brought in to help orient the work of committees and administrators who take on the complex task of assessing scholarship. But that is not enough. When making comparisons across disciplines and cultures, quantitative measures must be supplemented by more contextualized qualitative evidence of scholarly excellence.

At the same time, we must go beyond naive assertions that quantitative indicators cannot measure ‘quality’ of scholarship. They cannot tell the whole story, but this is not a reason to ignore the benefits that they can bring to the table. There are three reasons to recognize the value – when properly delimited and contextualized – of bibliometric measures for assessing scholarship in the study of religion\s.

First, the difference between articles that we know to have been cited some number of times and those that, as far as we can tell, have never been cited at all is not insignificant. We have clear evidence that the former have been noticed, read, and have made at least some sort of contribution to debates on the issues that they address. With the latter we have, at best, a question mark. Of course, too much weight is being placed on citation counts by people who should know better. But this doesn't change the fact that citations have a certain limited value as indicators of the extent to which scholarly knowledge lives up to its potential to inform networks of conversation and argument.

Second, we cannot presume that those who read our publications are ‘like us,’ that they share our sense of what constitutes value in scholarship. It is undeniable that discipline-specific wisdom is central to assessing the value of scholarship, and that the internalized sense of criteria of excellence that scholars develop through the long apprenticeship leading to specialization in a given field cannot simply be replaced by citation counts or other quantitative measures. However, the value of knowledge is not limited to its being savored by connoisseurs. It is useful to have some way to assess the fuller extent to which publications are read and used in different contexts and ways. If used with sensitivity to disciplinary differences, citation counts promise a good balance between accessible data and likely correlation with those less directly measurable qualities. However, perhaps not surprisingly given the perennial emphasis on the natural and medical sciences, it remains unclear just what metrics would be more appropriate for assessing scholarship in the humanities (Linmans Citation2010).

Third, the institutional and administrative realities within which we work demand comparative assessment and accountability. Bibliometric measures are here to stay, so let's make the best of them. It is better to try to understand – and to argue for more appropriate uses of – quantitative metrics than to turn our backs on them in ignorance.

Steven Engler is Professor of Religious Studies at Mount Royal University in Calgary, Affiliate Professor of Religion at Concordia University in Montréal and Professor Colaborador at the Pontifícia Universidade Católica de São Paulo (PUC-SP), Brazil. He is co-editor, with Michael Stausberg, of this journal and of The Routledge Handbook of Research Methods in the Study of Religion (Routledge, 2011). He has published widely on religions in Brazil and theories of religion.

Notes

2The term ‘bibliometrics’ was coined by Alan Pritchard (Citation1969). Related terms are ‘scientometrics’ (measures of information categorized as ‘science’), ‘informetrics’ (quantitative measures of information in any form) and concepts for measures of information accessible by the Internet, e.g., ‘webometrics,’ ‘netometrics,’ ‘cybermetrics,’ and, more recently and influentially, ‘altmetrics’.

3The JIF for a given journal in a given year is calculated as a ratio, A/B, where A = the number of times that items published in that journal over the previous two years were cited that year in indexed journals (including the journal itself) and B = the total number of citable documents published in that journal over those two years. Book reviews are not included.
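
To make the arithmetic of this ratio concrete, the following minimal sketch (in Python) computes a two-year impact factor; the function name and the citation and document counts are hypothetical, invented purely for illustration.

def journal_impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    # A = citations received this year by items the journal published in the
    # previous two years; B = citable items it published in those two years.
    return citations_this_year / citable_items_prev_two_years

# Hypothetical example: 120 citations in 2013 to the 80 citable items
# (articles and review essays, not book reviews) published in 2011-2012.
print(journal_impact_factor(120, 80))  # 1.5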

4Robert K. Merton argued that what he dubbed the ‘Matthew effect’ is prominent in science: ‘the accruing of greater increments of recognition for particular scientific contributions to scientists of considerable repute and the withholding of such recognition from scientists who have not yet made their mark’ (Merton Citation1968, 58). The effect has been found at the level of countries, institutions, journals, and researchers (Larivière and Gingras Citation2010). It is a prominent source of bias toward well-funded fields when using the JIF, for example (see De Bellis Citation2009, 189–190). Merton's reference was to Matthew 25:29 (see 13:12): ‘For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath’ (KJV). As a letter to Science noted, Merton could equally have cited Mark 4:25, Luke 8:18 or Luke 19:26; and ‘since Mark unquestionably published first, it would be more in accord with scientific practice to have named it the “Mark effect”’ (Geilker Citation1968).

5‘Altmetrics expand our view of what impact looks like, but also of what's making the impact. This matters because expressions of scholarship are becoming more diverse. … [T]hat dog-eared (but uncited) article that used to live on a shelf now lives in Mendeley, CiteULike, or Zotero–where we can see and count it’ <http://altmetrics.org/manifesto/>.

6The SCImago Journal & Country Rank portal <http://www.scimagojr.com/> uses Scopus data. A bibliometric overview of 218 journals in or related to the study of religion\s is available at http://is.gd/vuaqDZ. For a much shorter list on Google Scholar see http://is.gd/ekuMgN

7At least one sub-panel of the UK's Research Excellence Framework had originally planned to use Google Scholar data in addition to Scopus data for assessing scholarship: ‘Unfortunately, following discussions with Google Scholar, it has not been possible to agree a [sic] suitable process for bulk access to their citation information, due to arrangements that Google Scholar have in place with publishers.’ ‘Sub-panel 11: Citation data’ (Accessed 7 Feb. 2014), http://www.ref.ac.uk/subguide/citationdata/googlescholar/

8The local focus of scholarship in the Humanities varies greatly, of course, according to discipline and other factors (see Nederhof et al. Citation1989, 433).

9Duplicate papers are ‘those that are published in two different journals and have the following metadata in common: (1) the exact same title, (2) the same first author, and (3) the same number of cited references’ (Larivière and Gingras Citation2010, 425). As the authors note, ‘the existence of such duplicate papers raises important ethical questions’ (2010, 425). The number of duplicate publications has been increasing (Errami and Garner Citation2008). This development rightly concerns publishers and editors, especially in the natural sciences.
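
As a purely illustrative sketch of how these three metadata criteria can be operationalized in Python (the record fields and sample entries below are hypothetical, not drawn from any real dataset):

from collections import defaultdict

def find_duplicates(records):
    # Group records that share an exact title, the same first author, and the
    # same number of cited references, following the criteria quoted above.
    groups = defaultdict(list)
    for rec in records:
        key = (rec["title"], rec["first_author"], rec["n_cited_refs"])
        groups[key].append(rec["journal"])
    return {key: journals for key, journals in groups.items() if len(journals) > 1}

records = [
    {"title": "On Ritual", "first_author": "Doe", "n_cited_refs": 42, "journal": "Journal A"},
    {"title": "On Ritual", "first_author": "Doe", "n_cited_refs": 42, "journal": "Journal B"},
    {"title": "On Myth", "first_author": "Roe", "n_cited_refs": 17, "journal": "Journal C"},
]
print(find_duplicates(records))  # flags the pair published in Journals A and B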

10There are, of course, a number of other problems with the JIF (Adams Citation2009: 26, 31; De Bellis Citation2009: 187–196). It relies on the quality of the databases, which have limitations, gaps, and biases. It includes self-citations: authors citing their own work, and articles in a given journal citing other articles in that same journal. It fails to take account of variations in citation practices between fields or in prestige between journals. It reflects primarily the number of citations of certain highly cited articles, which makes it weak as a measure of journal quality, underlining the relative merit of article-based metrics. Finally, as is the case with many of the new ‘altmetrics,’ such as Twitter mentions or Facebook ‘likes’ (which can be bought), the JIF can easily be gamed by the unscrupulous, for example where editors publish editorials citing, or review essays of, works published in their own journal, or place pressure on prospective authors to cite articles from that journal.

11For example, Google Scholar gives Sigmund Freud's h-index as 247, meaning that 247 of his publications, including translations, have each received at least 247 citations. Some of his works will have received more citations, and some less. To put things in perspective, my h-index as calculated by Google Scholar is 7: respectable for a scholar of religion\s, not for a physicist or an economist. This measure varies by field and increases over and after the lifetime of a scholar, making it of dubious value for assessing individual scholars. It has more value in comparing journals, at least those of a similar age within a given field. A large number of variations on the simple h-index have been proposed to address these sorts of limitations.

13I searched for each journal by title, and often also by ISSN, in each of the three databases. Repeated searches, on different days, returned the same results.

14For an overview of this journal, see Blizek, Desmarais, and Burke (Citation2011).

15See the English as a Lingua Franca in Academic Settings (ELFA) project at the University of Helsinki: http://www.helsinki.fi/englanti/elfa/

16Our editorial assistant, Knut Auckland, and I each performed, independently, a manual document count using the online table of contents for each issue of the five journals for 2011–2013. We counted book reviews, editorials, and articles (including review articles, review symposia contributions, and introductions to special issues). I then omitted book reviews from the final comparative data, as these are not indexed in Scopus.

17I contacted Andrew Plume, Elsevier's Director of Scientometrics & Market Analysis in Research & Academic Relations by email (8 Feb. 2014). He judges that the divergence is due to ‘differences in document type definitions,’ and he clarifies the process by which these definitions are arrived at: ‘Scopus is built from datafeeds of published contents provided by the publisher in standardised XML formats.’ The root of the problem would thus seem to lie with the data provided by the publishers of these journals.

18The size of journals, in terms of numbers of items published, is an important factor in assessing the growth of disciplines. A longitudinal study of a large sample of journals across a wide variety of disciplines indicated that the number of articles published per journal increased dramatically between 1950 and the 1970s, tapering off and even declining in many cases by the 1980s (Archibald and Line Citation1991).

19Readers of this essay will, of course, have different opinions about what journals could have been included in this sample. The selection is meant to illustrate certain points, not to imply any differences between journals included and not included, especially not that these are the top, best, or most important journals. A different sample would have served as well. This is just one possible set, within the constraint that each journal in the Table had to be covered by both Google Scholar and Scopus.

20A journal with an index of h has published h papers each of which has been cited at least h times. The h5-index is the same, but limited to articles published in the last five years. H-indices for individual scholars are prominent in bibliometrics (for limitations see n. 11). This metric can also be used to compare departments, disciplines and even nations (for the latter see, e.g., Santa and Herrero-Solana Citation2010, 25).
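
As a minimal sketch of this definition in Python (the citation counts below are invented for illustration only):

def h_index(citation_counts):
    # Largest h such that h of the papers have received at least h citations each.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([24, 18, 10, 6, 5, 3, 1]))  # 5: five papers with at least five citations each
# Restricting the input to papers published in the last five years yields the h5-index.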

22The exceptions are The American Economic Review, Review of Financial Studies and The Journal of Finance, with h5-indices of 124, 111, and 103, ranked respectively 53, 75, and 97 out of 100.

23This might seem surprising in light of the fact that these are much larger disciplines. In fact, the size of a discipline is only weakly correlated with the average number of citations per article, because the larger number of citations is shared among a larger number of articles; however, larger fields are characterized to a great extent by single articles with particularly large citation counts, what Garfield calls ‘super-cited’ articles (Garfield Citation2006, 91).

References

  • Adams, Jonathan. 2009. “The Use of Bibliometrics to Measure Research Quality in UK Higher Education Institutions.” Archivum Immunologiae et Therapiae Experimentalis 57: 19–32. doi: 10.1007/s00005-009-0003-3
  • Adler, Nancy J., and Anne-Wil Harzing. 2009. “When Knowledge Wins: Transcending the Sense and Nonsense of Academic Rankings.” Academy of Management Learning & Education 8 (1): 72–95. doi: 10.5465/AMLE.2009.37012181
  • Allen, E. S. 1929. “Periodicals for Mathematicians.” Science 70 (1825): 592–594. doi: 10.1126/science.70.1825.592
  • Amsterdamska, Olga, and L. Leydesdorff. 1989. “Citations: Indicators of Significance?.” Scientometrics 15 (5–6): 449–471. doi: 10.1007/BF02017065
  • Archambault, Éric, and Étienne Vignola-Gagné. 2004. The Use of Bibliometrics in the Social Sciences and Humanities. Montreal: SSHRCC/Science-Metrix.
  • Archambault, Éric, Étienne Vignola-Gagne, Grégoire Côté, Vincent Larivière, and Yves Gingras. 2006. “Benchmarking Scientific Output in the Social Sciences and Humanities: The Limits of Existing Databases.” Scientometrics 68 (3): 329–342. doi: 10.1007/s11192-006-0115-z
  • Archibald, G., and M. B. Line. 1991. “The Size and Growth of Serial Literature 1950–1987, in Terms of the Number of Articles per Serial.” Scientometrics 20 (1): 173–196. doi: 10.1007/BF02018154
  • Arunachalam, S., and K. Manorama. 1989. “Are Citation-Based Quantitative Techniques Adequate for Measuring Science on the Periphery?.” Scientometrics 15 (5–6): 393–408. doi: 10.1007/BF02017061
  • Bartol, Tomaz, Gordana Budimir, Doris Dekleva-Smrekar, Miro Pusnik, and Primoz Juznic. 2014. “Assessment of Research Fields in Scopus and Web of Science in the View of National Research Evaluation in Slovenia.” Scientometrics 98 (2): 1491–1504. doi: 10.1007/s11192-013-1148-8
  • Blizek, William L., Michele Marie Desmarais, and Ronald R. Burke. 2011. “Religion and Film Studies through the Journal of Religion and Film.” Religion 41 (3): 471–485. doi: 10.1080/0048721X.2011.590698
  • Braun, Tibor, and András Schubert. 2007. “The Growth of Research on Inter- and Multidisciplinarity in Science and Social Science Papers, 1975–2006.” Scientometrics 73 (3): 345–351. doi: 10.1007/s11192-007-1933-3
  • Bridgstock, M. 1991. “The Quality of Single and Multiple Authored Papers; An Unresolved Problem.” Scientometrics 21 (1): 37–48. doi: 10.1007/BF02019181
  • Brodman, Estelle. 1944. “Choosing Physiology Journals.” Bulletin of the Medical Library Association 32 (4): 479–483.
  • Cantú, Anselmo Garcia, and Marcel Ausloos. 2009. “Organizational and Dynamical Aspects of a Small Network with Two Distinct Communities: Neo-creationists vs. Evolution Defenders.” Scientometrics 80 (2): 457–472. doi: 10.1007/s11192-008-2065-0
  • Cole, Francis Joseph, and Nellie B. Eales. 1917. “The History of Comparative Anatomy. Part I: A Statistical Analysis of the Literature.” Science Progress 11 (43): 578–596.
  • Collazo-Reyes, Francisco. 2014. “Growth of the Number of Indexed Journals of Latin America and the Caribbean: The Effect on the Impact of Each Country.” Scientometrics 98 (1): 197–209. doi: 10.1007/s11192-013-1036-2
  • Danell, Jenny-Ann Brodin, and Rickard Danell. 2009. “Publication Activity in Complementary and Alternative Medicine.” Scientometrics 80 (2): 539–551. doi: 10.1007/s11192-008-2078-8
  • De Bellis, Nicola. 2009. Bibliometrics and Citation Analysis: From the Science Citation Index to Cybermetrics. Lanham, MD: Scarecrow Press.
  • Dou, H., L. Quoniam, and Parina Hassanaly. 1991. “The Scientific Dynamics of a City: A Study of Chemistry in Marseille from 1981 to the Present.” Scientometrics 22 (1): 83–93. doi: 10.1007/BF02019276
  • Engels, Tim C. E., Truyken L. B. Ossenblok, and Eric H. J. Spruyt. 2012. “Changing Publication Patterns in the Social Sciences and Humanities, 2000–2009.” Scientometrics 93 (2): 373–390. doi: 10.1007/s11192-012-0680-2
  • Engler, Steven, and Michael Stausberg. 2010. “Acknowledging Peer Review.” Religion 40 (3): 147–151. doi: 10.1016/j.religion.2010.04.001
  • Engler, Steven, and Michael Stausberg. 2011. “Crisis and Creativity: Opportunities and Threats in the Global Study of Religion\s.” Religion 41 (2): 127–143. doi: 10.1080/0048721X.2011.591209
  • Errami, Mounir, and Harold Garner. 2008. “A Tale of Two Citations.” Nature 451 (7177): 397–399. doi: 10.1038/451397a
  • Finkenstaedt, T. 1990. “Measuring Research Performance in the Humanities.” Scientometrics 19 (5–6): 409–417. doi: 10.1007/BF02020703
  • Garfield, Eugene. 1955. “Citation Indexes for Science: A New Dimension in Documentation through Association of Ideas.” Science 122 (3159): 108–111. doi: 10.1126/science.122.3159.108
  • Garfield, Eugene. 1973. “Citation Impact Depends Upon the Paper, not the Journal! Don't Count on Citation by Association.” Current Contents 22: 5–6.
  • Garfield, Eugene. 1979. “Is Citation Analysis a Legitimate Evaluation Tool?.” Scientometrics 1 (4): 359–375. doi: 10.1007/BF02019306
  • Garfield, Eugene. 1980. “Is Information Retrieval in the Arts and Humanities Inherently Different from That in Science? The Effect That ISI®’s Citation Index for the Arts and Humanities Is Expected to Have on Future Scholarship.” The Library Quarterly 50 (1): 40–57. doi: 10.1086/629874
  • Garfield, Eugene. 2006. “The History and Meaning of the Journal Impact Factor.” JAMA: Journal of the American Medical Association 295 (1): 90–93. doi: 10.1001/jama.295.1.90
  • Geilker, Charles D. 1968. “Matthew, Mark, or Luke Effect.” Science 159 (3820): 1185. doi: 10.1126/science.159.3820.1185-b
  • Godin, Benoît. 2002. The Social Sciences in Canada: What Can We Learn From Bibliometrics? Working paper no. 1. Québec: Institut national de la recherche scientifique.
  • Godin, Benoît. 2006. “On the Origins of Bibliometrics.” Scientometrics 68 (1): 109–133. doi: 10.1007/s11192-006-0086-0
  • Godin, Benoît. 2009. “The Value of Science: Changing Conceptions of Scientific Productivity, 1869 to circa 1970.” Social Science Information 48 (4): 547–586. doi: 10.1177/0539018409344475
  • Gross, P. L. K., and E. M. Gross. 1927. “College Libraries and Chemical Education.” Science (n.s.) 66 (1713): 385–389. doi: 10.1126/science.66.1713.385
  • Gross, P. L. K., and A. O. Woodford. 1931. “Serial Literature Used by American Geologists.” Science 73 (1903): 660–664. doi: 10.1126/science.73.1903.660
  • Hackh, Ingo. 1936. “The Periodicals Useful in the Dental Library.” Bulletin of the Medical Library Association 25 (1–2): 109–112.
  • Heinzkill, Richard. 1980. “Characteristics of References in Selected Scholarly English Literary Journals.” The Library Quarterly 50 (3): 352–365. doi: 10.1086/600992
  • Hemlin, S., and M. Gustafsson. 1996. “Research Production in the Arts and Humanities: A Questionnaire Study of Factors Influencing Research Performance.” Scientometrics 37 (3): 417–432. doi: 10.1007/BF02019256
  • Henderson, Michael, Simon Shurville, and Ken Fernstrom. 2009. “The Quantitative Crunch: The Impact of Bibliometric Research Quality Assessment Exercises on Academic Development at Small Conferences.” Campus-Wide Information Systems 26 (3): 149–167. doi: 10.1108/10650740910967348
  • Henkle, H. H. 1938. “The Periodical Literature of Biochemistry.” Bulletin of the Medical Library Association 27 (2): 139–147.
  • Hicks, Diana M., and James E. F. Skea. 1989. “Is Big Really Better?.” Physics World 2 (12): 31–34.
  • Hodge, David R., and Jeffrey R. Lacasse. 2011. “Ranking Disciplinary Journals with the Google Scholar H-Index: A New Tool for Constructing Cases for Tenure, Promotion, and Other Professional Decisions.” Journal of Social Work Education 47 (3): 579–596. doi: 10.5175/JSWE.2011.201000024
  • Larivière, Vincent, and Yves Gingras. 2010. “The Impact Factor's Matthew Effect: A Natural Experiment in Bibliometrics.” Journal of the American Society for Information Science and Technology 61 (2): 424–427. doi:10.1002/asi.21232
  • Laudel, Grit. 2006. “The ‘Quality Myth’: Promoting and Hindering Conditions for Acquiring Research Funds.” Higher Education 52 (3): 375–403. doi: 10.1007/s10734-004-6414-5
  • Lawrence, Steve. 2001. “Free Online Availability Substantially Increases a Paper's Impact.” Nature 411 (6837): 521. doi: 10.1038/35079151
  • Lawrence, Peter A. 2008. “Lost in Publication: How Measurement Harms Science.” Ethics in Science and Environmental Politics 8: 9–11. doi: 10.3354/esep00079
  • van Leeuwen, Thed. 2006. “The Application of Bibliometric Analyses in the Evaluation of Social Science Research. Who Benefits from It, and Why It Is Still Feasible.” Scientometrics 66 (1): 133–154. doi: 10.1007/s11192-006-0010-7
  • Lewison, G., and R. Igic. 1999. “Yugoslav Politics, ‘Ethnic Cleansing’ and Co-Authorship in Science.” Scientometrics 44 (2): 183–192. doi: 10.1007/BF02457379
  • Leydesdorff, L. 1998. “Theories of Citation?.” Scientometrics 43 (1): 5–25. doi: 10.1007/BF02458391
  • Linmans, A. J. M. 2010. “Why with Bibliometrics the Humanities Does Not Need to Be the Weakest Link: Indicators for Research Evaluation Based on Citations, Library Holdings, and Productivity Measures.” Scientometrics 83 (2): 337–354. doi: 10.1007/s11192-009-0088-9
  • McNinch, J. H. 1949. “The Royal Society Scientific Information Conference, London, June 21–July 2, 1948.” Bulletin of the Medical Library Association 37 (2): 136–141.
  • Merton, Robert K. 1968. “The Matthew Effect in Science: The reward and communication systems of science are considered.” Science 159 (3810): 56–63. doi: 10.1126/science.159.3810.56
  • Michels, Carolin, and Ulrich Schmoch. 2014. “Impact of Bibliometric Studies on the Publication Behaviour of Authors.” Scientometrics 98 (1): 369–385. doi: 10.1007/s11192-013-1015-7
  • Nederhof, A. J. 2006. “Bibliometric Monitoring of Research Performance in the Social Sciences and the Humanities: A Review.” Scientometrics 66 (1): 81–100. doi: 10.1007/s11192-006-0007-2
  • Nederhof, A. J., R. A. Zwaan, R. E. De Bruin, and P. J. Dekker. 1989. “Assessing the Usefulness of Bibliometric Indicators for the Humanities and the Social and Behavioural Sciences: A Comparative Study.” Scientometrics 15 (5–6): 423–435. doi: 10.1007/BF02017063
  • Norris, Michael, Charles Oppenheim, and Fytton Rowland. 2008. “The Citation Advantage of Open-access Articles.” Journal of the American Society for Information Science and Technology 59 (12): 1963–1972. doi: 10.1002/asi.20898
  • Pouris, Anastassios. 2007. “Is Fundamentalism a Threat to Science? Evidence from Scientometrics.” Scientometrics 71 (2): 329–338. doi: 10.1007/s11192-007-1673-4
  • Pritchard, Alan. 1969. “Statistical Bibliography or Bibliometrics?.” Journal of Documentation 25 (4): 348–349.
  • Santa, Samaly, and Victor Herrero-Solana. 2010. “Cobertura de la ciencia de America Latina y el Caribe en Scopus vs Web of Science.” Investigacion Bibliotecologica 24 (52): 13–27.
  • Shelton, Robert D., Patricia Foland, and Roman Gorelskyy. 2009. “Do New SCI Journals Have a Different National Bias?.” Scientometrics 79 (2): 351–363. doi: 10.1007/s11192-009-0423-1
  • Sivertsen, Gunnar, and Birger Larsen. 2012. “Comprehensive Bibliographic Coverage of the Social Sciences and Humanities in a Citation Index: An Empirical Analysis of the Potential.” Scientometrics 91 (2): 567–575. doi: 10.1007/s11192-011-0615-3
  • Stausberg, Michael. 2010. “Prospects in Theories of Religion: Introductory Observations.” Method and Theory in the Study of Religion 22 (4): 223–238. doi: 10.1163/157006810X531021
  • Stausberg, Michael, and Steven Engler. 2013. “Acknowledging our Referees (with Selected Review Statistics).” Religion 43 (4): 457–462. doi: 10.1080/0048721X.2013.837664
  • Szabó, A. T. 1985. “Alphonse de Candolle's Early Scientometrics (1883, 1885) with References to Recent Trends in the Field (1978–1983).” Scientometrics 8 (1–2): 13–33. doi: 10.1007/BF02025219
  • de Winter, Joost C. F., Amir A. Zadpoor, and Dimitra Dodou. 2014. “The Expansion of Google Scholar versus Web of Science: A Longitudinal Study.” Scientometrics 98: 1547–1565. doi: 10.1007/s11192-013-1089-2
  • Yang, Siluo, Junping Qiu, and Zunyan Xiong. 2010. “An Empirical Study on the Utilization of Web Academic Resources in Humanities and Social Sciences Based on Web Citations.” Scientometrics 84 (1): 1–19. doi: 10.1007/s11192-009-0142-7
  • Yitzhaki, M. 1997. “Variation in Informativity of Titles of Research Papers in Selected Humanities Journals: A Comparative Study.” Scientometrics 38 (2): 219–229. doi: 10.1007/BF02457410

Appendix: Abbreviations of journal titles

ASSR = Archives de sciences sociales des religions (Éditions de l'EHESS; ISSN 0003-9659)

C&R = Culture and Religion (Taylor & Francis; ISSN 1475-5610)

HR = History of Religions (University of Chicago Press; ISSN 0018-2710)

IJPR = International Journal for the Psychology of Religion (Taylor & Francis; ISSN 1050-8619)

IR = Implicit Religion (Equinox Press; ISSN 1463-9955)

JAAR = Journal of the American Academy of Religion (Oxford University Press; ISSN 0002-7189)

JCR = Journal of Contemporary Religion (Taylor & Francis; ISSN 1353-7903)

JR = Journal of Religion (University of Chicago Press; ISSN 0022-4189)

JRA = Journal of Religion in Africa (Brill; ISSN 0022-4200)

JRE = Journal of Religious Ethics (Wiley; ISSN 0384-9694)

JREu = Journal of Religion in Europe (Brill; ISSN 1874-8910)

JRH = Journal of Religion and Health (Springer; ISSN 0022-4197)

JSSR = Journal for the Scientific Study of Religion (Wiley; ISSN 0021-8294)

MHRC = Mental Health, Religion & Culture (Taylor & Francis; ISSN 1367-4676)

MTSR = Method and Theory in the Study of Religion (Brill; ISSN 0943-3058)

NR = Nova Religio: The Journal of Alternative and Emergent Religions (University of California Press; ISSN 1092-6690)

Num = Numen (Brill; ISSN 0029-5973)

Rel = Religion (Taylor & Francis; ISSN 0048-721X)

RHE = Revue d'Histoire Ecclesiastique (Université Catholique de Louvain; ISSN 0035-2381)

RS = Religious Studies (Cambridge University Press; ISSN 0034-4125)

SCom = Social Compass (SAGE; ISSN 0037-7686)

SocR = Sociology of Religion (Oxford University Press; ISSN 1069-4404)

SR = Studies in Religion/Sciences religieuses (SAGE; ISSN 0008-4298)

Zyg = Zygon (Wiley; ISSN 0591-2385)
