Editorial

Don’t be afraid of optometrist research rankings

Soon after commencing my academic career in the 1970s, I began looking for optometry role models who represented research excellence. This turned out to be quite a challenge. I had to scour the literature manually, by searching through journal volumes in the library and photocopying articles relevant to my own research, which I stored in lever arch files (concepts that would be foreign to younger researchers today). I could deduce to some degree who was publishing extensively in my narrow field of research – corneal oxygenation during contact lens wear – and who seemed to be doing important work in this area. However, the ‘big picture’ was lacking. I could not really gain an objective insight into who the leaders were in optometry research generally, who were the most prolific authors, and whose work was cited the most.

The only other clues as to the identity of optometrists undertaking impactful research were the quality of lectures delivered by research optometrists at conferences, and noting who had been given research awards and prizes. These measures were largely anecdotal, but they were all we had aside from manual literature searches. When I started supervising higher degree students in the 1980s, I recall having numerous discussions with them about academic achievement and expected norms for publication output by postgraduate and early-career researchers. I devised a somewhat arbitrary yardstick to incentivise my students, declaring that if they aspired to research excellence, they should aim to publish 30 papers by age 30 (the ‘30 by 30’ rule). At the very least, this provided a useful focal point for discussion.

The birth of citation analysis

All of this changed around the turn of the century. Although citation analysis had been proposed almost half a century earlier,Citation1,Citation2 counting citations had to be performed manually and was laborious. Early reports focused on the general principles of citation analysis and, by way of example, assessed the publications of Nobel Prize-winning scientists to demonstrate their high rate of citations and, by implication, their research impact.Citation2

Developments in digital computing and information technology ushered in the creation of publication databases, with Web of Science (Clarivate) becoming available in 1997, and Google Scholar (Google) and Scopus (Elsevier) coming online in 2004. Around this time, journals began to be published in digital format, and the back-issues of many journals were scanned and digitised. It was now possible to search electronically for the number of papers any author had published, and the number of times their work had been cited.

A major innovation was the proposal in 2005 by Jorge E HirschCitation3 of a single metric that combines the notions of quantity (number of papers published) and quality (number of citations to those papers). His idea was encapsulated in the abstract of his celebrated paper, which reads ‘I propose the index h, defined as the number of papers with citation number ≥ h, as a useful index to characterise the scientific output of a researcher’.Citation3 Thus, for example, an h-index of 30 would mean that a person had published 30 papers, each of which had been cited at least 30 times. Despite some scepticism, the h-index quickly became established as a universally accepted measure of research impact, and this remains the case today.
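
In computational terms, the definition is straightforward. The short Python sketch below (illustrative only; the function name and the citation counts in the example are invented) computes an h-index from a list of per-paper citation counts:

    def h_index(citations: list[int]) -> int:
        """Return the largest h such that at least h of the papers
        have been cited at least h times each (Hirsch, 2005)."""
        # Rank papers by citation count, most-cited first; h is the
        # last rank at which the citation count still meets the rank.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Example (invented data): five papers meet or exceed their rank,
    # so the h-index is 5.
    print(h_index([45, 40, 33, 12, 7, 3]))  # -> 5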

Development of optometrist research rankings

Hirsch’s proposalCitation3 inspired me to revisit my failed quest of some 20 years earlier to identify leading research optometrists. So, six years after HirschCitation3 published his seminal paper, I teamed up with Noel Brennan to publish a citation analysis of Australia-trained optometrists, resulting in a list of the top 50 Australian optometrists in rank order of h-index.Citation4 I revisited this listing in 2020 and noticed a significant shift in the rankings.Citation5

Soon after publication of my 2020 reprise article,Citation5 I was approached by colleagues from Canada, the UK and USA who had read the article and proposed a project to list the top research optometrists in the world, ranked by h-index. And so it was, around three years ago – together with fellow optometrists Lyndon Jones (Canada), Philip Morgan (UK) and Jason Nichols (USA), and engineer and computer programmer George Morgan (UK) – that the Global Optometrist Top 200 Research Ranking (T200) was published.Citation6 Publication metrics, derived from the Scopus database, were used to determine and list the top 200 optometrists in the world, ranked by h-index, as well as associated bibliometric data for each individual.
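
By way of illustration only, the kind of per-author lookup that underpins such a ranking might be scripted against Elsevier’s Scopus Author Retrieval API, as sketched below in Python. This is not the T200 production code; the endpoint, header and response field names are assumptions based on Elsevier’s public API documentation, and the API key and author identifier are placeholders that a registered user would supply.

    # Hypothetical sketch: fetch an author's h-index from the Scopus
    # Author Retrieval API. Endpoint and field names are assumptions
    # based on Elsevier's public API documentation; a registered API
    # key from dev.elsevier.com is required.
    import requests

    API_KEY = "your-elsevier-api-key"  # placeholder

    def scopus_h_index(author_id: str) -> int:
        resp = requests.get(
            f"https://api.elsevier.com/content/author/author_id/{author_id}",
            params={"view": "METRICS"},
            headers={"X-ELS-APIKey": API_KEY, "Accept": "application/json"},
        )
        resp.raise_for_status()
        # The METRICS view is assumed to include an "h-index" field.
        profile = resp.json()["author-retrieval-response"][0]
        return int(profile["h-index"])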

When writing our paper, we quickly realised that (a) despite extensive searching, we may not have captured all highly-published optometrists around the world who should be in the T200, (b) the rankings within the T200 would certainly be dynamic, changing frequently as more work was published and cited, and (c) early-career, rapidly-publishing optometry academics would continue to develop h-indices high enough to warrant inclusion in the T200. Accordingly, we developed a companion website (www.optomrankings.com) that presents the T200 as an ongoing, dynamic resource, updated automatically every 24 hours.

We also included a facility on the T200 website that allows anyone to alert us to optometrists who are missing from the T200 but whose h-index would place them in it. Since launching the T200 website three years ago, we have received 170 notifications via that portal, resulting in about 50 optometrists being added to the master listing, 15 of whom now appear in the T200. We are confident that the current T200 (www.optomrankings.com) is comprehensive and fully up to date; it certainly looks somewhat different to the original published version.Citation6

The original T200 paper,Citation6 and two further editorialsCitation7,Citation8 derived from it, have generated massive interest. For example, the T200 paperCitation6 has been viewed 6,023 times on the website of Clinical and Experimental Optometry (where the paper was published), and the two associated editorials have been viewed 13,229Citation7 and 6,788Citation8 times; these papersCitation6–8 constitute three of the four most-viewed papers in the journal over the past three years (a typical paper might attract 20 views). The T200 website (www.optomrankings.com) has been visited an astonishing 34,414 times.

Optometry research rankings have their detractors

Despite this tremendous interest, there has been some resistance to the T200Citation6 and its companion website (www.optomrankings.com) from colleagues who perhaps feel that their ranking in the T200 does not properly reflect their true level of excellence, research impact and standing in the field. However, the T200 is here to stay, and for all to see, allowing all research optometrists to be judged fairly, equally and objectively.

Some seek to criticise citation metrics, and the h-index and T200 in particular, by pointing out that there are other measures of academic excellence, such as the publication of books, book chapters or patents; the winning of substantial research funding from peer-reviewed and/or commercial sources; conference invitations and keynote addresses; teaching excellence; service to the university and wider community; academic leadership through administration; mentoring and higher degree supervision; the winning of research awards and prizes; and appointment or election to prestigious professional or research organisations. These are indeed important indicators of academic excellence; however, unlike the h-index, many of these esteem measures are anecdotal. It must also be said that this is not an especially insightful criticism: HirschCitation3 made all of these points in his original 2005 h-index paper, and my colleagues and I have gone to great lengths to issue caveats relating to the h-index in the original T200 paper,Citation6 and in other relevant papersCitation4,Citation5,Citation9 and editorials.Citation7,Citation8,Citation10 To criticise the T200 on the basis that it does not take into account other measures of excellence is therefore disingenuous.

Conclusions

So, I say to the detractors: don’t be afraid of the T200. Instead, embrace it; the T200 can be empowering. The T200 is but one objective tool that, alongside the largely subjective or anecdotal measures outlined above, can assist in considering academic appointments and promotions, allocating research funding, and deciding research awards and prizes. Optometrists can use the T200 (www.optomrankings.com) to gain insight into their own research impact relative to that of their peers. The T200 also (a) facilitates a detailed and informed assessment of the profile of all impactful research optometrists, (b) assists those seeking out research optometrists with whom they wish to collaborate, and (c) helps identify role models for early-career optometrists. I wish I had had access to such a tool early in my career.

Finally, all optometrists listed in the T200 ought to be celebrated for their impactful research contributions to their chosen profession, regardless of their ranking.

References
