Editorial

The assessment of productivity in biomedical research

Pages 631-633 | Received 10 May 2016, Accepted 01 Jun 2016, Published online: 20 Sep 2016

Measuring scientific impact and relevance is not an easy endeavor. Unfortunately, publications alone are an insufficient basis for estimating the success of scientific research activities for the general public, research centers, academic institutions and funding agencies. Shrinking biomedical research funding, together with a growing demand by stakeholders to report tangible and meaningful outcomes, calls for alternative methods that more concretely quantify the impact of scientific research on knowledge, dissemination, uptake by healthcare professionals and public health outcomes. However, measuring these parameters is difficult and involves a great deal of subjectivity. It is for this reason that I will focus my views on scientific productivity mainly as it relates to published research.

Biomedical research is a wide field that includes biochemistry, nutrition, pharmacology and many other disciplines that interact with one another to yield data relevant to medicine. Ever since funding agencies dealing with biomedical research started requesting procedures to measure the return on their investment, administrators have devised different ways to measure scientists' performance in terms of productivity. This evaluation normally covers published research, often measured through citations. This already represents a problem, since some scientists concentrate more on patent filing or clinical trials than on publishing per se. A proper outcome measure should also include these activities, but the lack of "impact" metrics for them excludes this possibility. A similar situation arises with books – whether as author or editor – since no evaluation scale has been defined for them.

Citations also overlook other important components of a scientist's contribution: as mentioned before, patents filed, clinical trial participation, invited conferences and workshops, publications in newspapers and general-interest outlets, books, student supervision, etc. However, a comprehensive index of scientific output will depend to a large extent on citation data, and therefore I will concentrate my discussion of outputs on citation-related measurements.

The value of published research – indexed publications – can be measured in total, covering the lifetime of the scientist, or over a specific period. Different parameters have been used to assess success; total publication number and total citation count are the most common. However, total publication number does not necessarily reflect scientific quality, while total citation count can be disproportionately biased by involvement in a single publication of major influence (e.g. methodological papers describing successful new techniques), or by having many publications with few citations each. This remains controversial: while a large number of publications linked to a large number of citations indicates both quality and continuity in the field, a few publications with a large number of citations may indicate a major breakthrough. In spite of this, the number of citations is clearly preferable to the total number of publications, since it is at least partly a qualitative measure. The impact factor (IF) defined by Garfield (Citation1–3) reflects the average number of citations to recent articles published in a particular journal. It can be used as a gross approximation of the prestige of the journals in which an individual has published. A total IF – obtained by adding the IFs of a scientist's articles – can also be used instead of the total number of citations. The validity of the impact factor as a measure of journal importance is questioned because of the policies editors may adopt to boost it. In addition, the field of research may influence total IF measurements, since some fields only have journals with low impact factors.
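As an illustrative sketch (not the official calculation pipeline of any database), the conventional two-year impact factor is simply a ratio of citations received to citable items published:

```python
def impact_factor(citations_in_year, citable_items_prev_two_years):
    """Two-year impact factor: citations received in year Y to articles the
    journal published in years Y-1 and Y-2, divided by the number of citable
    items the journal published in those two years."""
    return citations_in_year / citable_items_prev_two_years

# A hypothetical journal: 600 citations in 2016 to 200 items from 2014-2015.
impact_factor(600, 200)  # 3.0
```

The denominator is one reason editorial policy can move the figure: reclassifying material as non-citable shrinks it without changing the citations counted in the numerator.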

To overcome these problems, the so-called h index was introduced. "A scientist", states Hirsch (Citation4), "has index h if h of his or her Np papers have at least h citations each and the other (Np − h) papers have ≤ h citations each". For instance, an author with an h index of twenty has twenty publications that have each been cited at least twenty times. Since its introduction in 2005, the h index has been widely adopted by major funding agencies, universities and research centers, and it can easily be obtained from citation databases: subscription-based databases such as Scopus and the Web of Knowledge provide automated calculators, and Harzing's Publish or Perish program (Citation5) calculates the h index from Google Scholar entries. The h index thus provides a simultaneous measure of both output quality and quantity. There are many alternatives to the h index: the v index (Citation6), which includes the proportion of time devoted to research in order to normalize for clinical academicians – or those with heavy teaching loads – who may devote only 40 to 50% of their time to research; the absolute index (Ab index) (Citation7), which takes into account the impact of research findings while weighting the actual extent of the work and the intellectual contribution of the researcher; and the hi-5 index (Citation8), which is the h index over a five-year period, to name a few. It has to be stated that none of these is a measure of actual productivity.
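Hirsch's definition translates directly into a short computation over an author's citation counts; a minimal sketch:

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each
    (Hirsch's definition): sort counts in descending order and find the last
    rank whose citation count still meets or exceeds the rank."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times: four papers have >= 4 citations.
h_index([10, 8, 5, 4, 3])  # 4
```

Note how the measure caps the influence of any single blockbuster paper: replacing the 10-citation paper above with one cited 1000 times leaves the h index unchanged.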

Another problem associated with publications is that of the number of authors.

A misleading aspect of citations is that databases count all citations equally, regardless of whether the publication has one author or hundreds. This gives middle authors a much greater impact than they should have: the individual credit for a publication cannot be the same for a sole author as for one of 30 co-authors. Similarly, position in the author listing matters; depending on laboratory policy, being first or last author is clearly not the same as appearing elsewhere in the list. The country where the study is undertaken is also very important, since research undertaken in Europe cannot be compared with research undertaken, for instance, in South America (Citation9).
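One simple corrective for the equal-counting problem – fractional counting, which is not proposed in this editorial but is a common device in bibliometrics – divides each paper's citations by its number of authors before summing:

```python
def fractional_citations(papers):
    """Author-level citation credit under fractional counting.

    `papers` is a list of (citations, n_authors) pairs for one author;
    each paper contributes citations / n_authors to that author's total.
    """
    return sum(cites / n_authors for cites, n_authors in papers)

# Two papers with 30 citations each: one solo-authored, one with 30 authors.
fractional_citations([(30, 1), (30, 30)])  # 31.0, not 60
```

Under plain counting both papers are worth 30 citations to every author; fractional counting makes the middle author of the 30-author paper earn one citation rather than thirty.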

In addition to citations, there are other types of article-level metrics, based on usage and public engagement, that indicate how research is shared, cited in bibliographic databases, or saved in online reference managers (Citation10). These metrics can complement citations, highlighting scholarly output that reaches beyond the traditional peer-reviewed journal article. They include online views and downloads, mentions of a work on social network sites such as Twitter or Facebook, and bookmarks from online reference managers such as Mendeley. However, article-level metrics are not as freely available as citations, which represents an additional problem.

We often hear sentences like "Dr. Smith is very productive", referring to his or her contribution to biomedical research. Productivity is an average measure of the efficiency of production: it can be expressed as the ratio of outputs to inputs used in the production process, i.e. output per unit of input. As previously stated, to calculate productivity we need to take into account not only the outputs but also the inputs. The input component is faithfully represented by the sum of the funds obtained to carry out the research, whether for personnel or for laboratory resources. However, problems similar to those encountered with outputs arise here. Being the Principal Investigator (PI) of a grant is not the same as being one of the participants or an external collaborator. Similarly, funds provided by public bodies such as research organizations and universities cannot be equated with funds provided by pharmaceutical companies; in fact, the latter can cause problems when they try to bend scientific findings to meet their needs instead of following the research where it leads, and companies may not address the right questions to determine whether a drug will really help patients before committing to clinical trials. Another major complication is that different fields of biomedical research require different levels of funding. For example, molecular biology, high-resolution imaging and clinical trials need much larger amounts of funding (for instrumentation and/or personnel) than epidemiology or the history of medicine. Accordingly, input/output should only be compared within similar areas of research. Also, since different countries have different models of research support, one would need to know whether research funds cover the full salary of the researchers, as is often the case in the USA, or whether salaries are, as is often the case in Europe, subsidized heavily by the employing university or research institution (Citation11).

I have seen many times that, when evaluating a scientist's contribution, the total output is simply added to the total input, since both represent positive entries. Obtaining a large grant is certainly meritorious, and so is a high h index; however, I personally think that the simple addition of the two factors does not reflect productivity at all.

Productivity should instead be expressed as an output/input ratio. For instance, a scientist with an h index of 55 and a total input of 2 million € would have a productivity of 27.5 h-index points per million € (2750%, expressed as a percentage), whereas another with an h index of 90 and a total input of 10 million € would have 9 points per million € (900%). This reflects real productivity – the success in performing quality research per unit of money.
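The proposed ratio is a one-line computation; a sketch that reproduces the two worked examples above:

```python
def productivity_pct(h_index, funding_millions):
    """Productivity as proposed here: h-index points per million of funding,
    scaled by 100 to give a percentage."""
    return 100 * h_index / funding_millions

productivity_pct(55, 2)    # 2750.0 -> the first scientist in the example
productivity_pct(90, 10)   # 900.0  -> the second, despite the higher h index
```

The comparison makes the point of the editorial explicit: the researcher with the lower h index is the more productive one once the money consumed is in the denominator.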

Disclosure statement

The author reports no declarations of interest.

References

  • Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci USA. 2005;102:16569–72.
  • Garfield E. Citation indexes for science; a new dimension in documentation through association of ideas. Science. 1955;122:108–11.
  • Garfield E. The evolution of the science citation index. Int Microbiol. 2007;10:65–9.
  • Garfield E, Sher I. New factors in the evaluation of scientific literature through citation indexing. Am Doc. 1963;14:195–201.
  • Harzing AW. A preliminary test of Google Scholar as a source for citation data: a longitudinal study of Nobel Prize winners. Scientometrics. 2013;93:1057–75.
  • Sheridan DJ. Reforming research in the NHS. BMJ 2005;331:1339–40.
  • Biswal AK. An absolute index (Ab-index) to measure a researcher's useful contributions and productivity. PLoS One 2013;8:e84334.
  • Hunt GE, McGregor IS, Malhi GS. Give me a hi-5! An additional version of the h-index. Aust NZ J Psychiatry. 2013;47:119–23.
  • Rahman M, Fukui T. Biomedical research productivity: factors across the countries. Int J Technol Assess Health Care. 2003;19:249–52.
  • Lin J, Fenner M. Altmetrics in evolution: defining and redefining the ontology of article-level metrics. Inf Stand Q. 2013;25:20–6.
  • Moed HF, Halevi G. Multidimensional assessment of scholarly research impact. J Assoc Inform Sci Technol 2015;66:1988–2002.
