Editorial

The error bars on impact

Pages 47-48 | Published online: 13 Aug 2009

The impact of a scientist is increasingly judged by quantitative metrics. Three of the most important are the number of papers, the number of citations, and the h-factor. While much has been written on how useful such metrics actually are, it is far less appreciated that their measurement is fundamentally error-prone.
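For concreteness, the h-factor (h-index) is the largest number h such that the scientist has at least h papers with at least h citations each. A minimal sketch of the calculation, using made-up citation counts:

```python
def h_index(citations):
    # Sort citation counts in descending order and find the largest h
    # such that at least h papers have at least h citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical author with five papers cited 10, 8, 5, 4 and 3 times:
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Because h depends on the full list of citation counts, any citations missing from the database feed directly into the computed value.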

The standard databases on which most people rely to supply these metrics are neither complete nor consistent. The policy of both Medline and Thomson Reuters (which owns the Web of Science) is generally not to index journal issues prior to when the journal was first received by them, which may be many years after the journal commenced publication. This means that no papers in Network volumes 1–8 are listed in PubMed, and no papers in volume 1 are listed in the Web of Science. Similarly, volumes 1–5 of Neural Computation are not listed in PubMed, and volumes 1–3 are not listed in the Web of Science. Google Scholar is more inclusive, but still by no means comprehensive.

These databases are thus, by their own choice, incomplete. They do not always provide a full record of an author's journal publications, nor does Web of Science always accurately report h-factors. These databases are of course free to have whatever policies they like: the problem is when people assume they are complete when evaluating scientists.

A wider issue is the definitional problem of assigning sharp boundaries to what are really continuous categories of article types. Some conferences in some disciplines are far more selective in what they publish than many journals, yet those papers are often regarded as belonging to an inferior category. Conversely, a conference may have an arrangement with a journal to publish accepted papers, so that these papers end up categorized as journal articles even though the conference might have had an acceptance rate approaching 100%. Similarly, a journal might publish a special issue for which the rigor of review is closer to that of a typical book chapter than a typical journal article.

Besides affecting the number of journal articles a scientist is deemed to have published, this also affects their citation counts, since usually only citations appearing in journal articles (and not those in book chapters or conference proceedings) are counted in h-factor calculations. While the Web of Science has now started including some data from conference proceedings, this will of course not be backdated.
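The effect of excluding non-journal citations can be illustrated with a small sketch. The citation counts below are invented for illustration; each paper's citations are split into those coming from journal articles and those coming from conference papers:

```python
def h_index(citations):
    # Largest h such that at least h papers have >= h citations each.
    counts = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(counts, start=1) if c >= i)

# Hypothetical author: (citations from journals, citations from conferences)
papers = [(9, 6), (7, 5), (5, 4), (4, 3), (2, 4), (1, 2)]

full_h = h_index([j + c for j, c in papers])   # all citations counted
journal_h = h_index([j for j, c in papers])    # journal citations only

print(full_h, journal_h)  # prints 5 4
```

For this hypothetical author, ignoring conference citations lowers the computed h-factor from 5 to 4; for a researcher whose field cites heavily in conference proceedings, the systematic error can be considerably larger.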

Crucially, the size of these discrepancies is discipline-dependent. The issues raised above may be insignificant for many areas of experimental biology, but they are certainly important for computational neuroscience. The standard databases will always provide an underestimate of impact, but the size of the error is likely to be much larger for a typical computational neuroscientist than for a typical experimental neuroscientist.

In summary, besides appreciating that publication and citation metrics are not the only measures of a scientist's success, it should be more widely understood that the actual measurement of these numbers is plagued by systematic errors.
