Editorial

Passing the Baton: Reflections on Publishing in Ophthalmic Epidemiology

Pages 143-145 | Published online: 22 Jul 2011

EDITOR-IN-CHIEF

The imagery of strong, committed runners working as a team, handing off responsibility, is irresistible as I sit down to write my last Editorial as Editor-in-Chief of Ophthalmic Epidemiology. Dr Jim Ganley was the founding Editor-in-Chief of the journal. Jim was (and is) devoted to the international prevention of blindness, and he diligently created a home for epidemiologists and clinical researchers in ophthalmology from around the world. Editorial board members will recall fondly missives from Jim exhorting leniency for manuscripts with data from countries that were under-represented in the literature. When Jim offered me the position of Editor-in-Chief, the journal had a respectable impact factor and had grown from 4 to 6 issues a year. Becoming Editor-in-Chief brought not only the workload and administrative duties but also the freedom to shape the journal into a new vision of Ophthalmic Epidemiology. My goal was to improve the scientific quality of the manuscripts we published and to expand the scope of the journal. The Editorial Board was updated and expanded to include an economist, a low vision expert, and several vibrant young (at the time) epidemiologists and clinicians. The rejection rate of submissions skyrocketed, but so did the number of submissions, almost threefold that first year. The journal has become a home for methodological papers and for health care and health economic issues in ophthalmology, as well as for the international epidemiological studies that were previously the mainstay of submissions.

I leave the judgment of any success in my efforts to the next Editor-in-Chief, Dr Jie Jin Wang, to whom the proverbial baton is passed as of 1 July 2011. Jie Jin is an internationally renowned epidemiologist, tough but fair in her critiques, and equally committed to international ophthalmology. I have no doubt she will formulate her own goals for the journal and take it to the next level in the evolution of this young and growing publication. Before the hand-off is complete, though, I want to use this pulpit to share with you, the readers (and authors), reflections from the past several years on publishing articles in this journal, specifically what editors look for in the manuscripts they receive.

Authors should be aware of three factors that, if present, will keep a submission from getting out of the starting gate. The first is the style of writing. Although Ophthalmic Epidemiology prides itself on being an international journal, it is still an English-language journal, with English-speaking members of the Editorial Board. While the editors have sympathy for authors whose native tongue is not English, if we cannot understand what was done or concluded, the article will be rejected. That is a painful judgment, but a necessary one. Our editors cannot rewrite articles or second-guess what was meant, and editors increasingly feel it is the responsibility of the authors to communicate the results of their work clearly.

The second non-starter is the absence of any statement on ethical clearance. All journals adhere to this policy for research on human subjects: a recognized ethical review must be carried out by a dispassionate committee. Manuscripts stating that the authors themselves determined that their study did not need ethical clearance, because it was a chart review, for example, will be sent back with shocking speed.

The final issue is specific to Ophthalmic Epidemiology and reflects its roots in epidemiology. The journal does not consider manuscripts that are case series, and I suspect more than one author has been annoyed by that rejection critique. Case series are largely descriptions of a consecutive set of cases from a single institution. They can provide important information on a disease, but they are not epidemiologic studies: there is no hypothesis, the series is rarely generalizable beyond the institution, and there is no comparison group for any feature of interest. Case series have their place in clinical journals, but readers of epidemiology journals are interested in epidemiological studies. Subtle distinctions exist in the grey area between case series and epidemiology, but these can usually be resolved by answering the question, “compared to what?” For example, suppose a manuscript reports a case series of patients presenting to a low vision clinic, where among other characteristics reported (typically age, sex, and causes of low vision) the prevalence of depression is 10%. While this finding is interesting, and likely relevant for the reporting institution for programmatic purposes, it has no context. Now suppose the manuscript reports that the case series was compared to a series of low vision cases in a retina clinic who had not presented to a low vision clinic, and that the prevalence of depression in that group was twice that in the case group after adjustment for age, race, sex, and visual acuity. The comparison is the spark that gives the manuscript its context and leads to investigative avenues that further science, as well as to a higher likelihood of the editor’s consideration.
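To make the “compared to what?” example concrete, here is a minimal sketch of one way such an adjusted comparison might be run, using logistic regression in Python with statsmodels. The data are simulated and the variable names are hypothetical illustrations; this is a sketch of the principle, not a prescribed analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "low_vision_clinic": rng.integers(0, 2, n),  # 1 = case series, 0 = comparison series
    "age": rng.normal(70, 8, n),
    "female": rng.integers(0, 2, n),
    "logmar_acuity": rng.normal(0.8, 0.3, n),    # higher = worse vision
})
# Simulate depression with a true group effect plus covariate effects.
linpred = -4 + 0.7 * df.low_vision_clinic + 0.03 * df.age + 1.0 * df.logmar_acuity
df["depressed"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(int)

# The adjusted group coefficient is the answer to "compared to what?"
model = smf.logit(
    "depressed ~ low_vision_clinic + age + female + logmar_acuity", data=df
).fit(disp=False)
print("adjusted odds ratio:", round(np.exp(model.params["low_vision_clinic"]), 2))
```

The single adjusted odds ratio for the group term is what turns an isolated prevalence figure into a finding with context.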

What other considerations brew inside an editor’s head, especially when faced with 6 issues a year and a narrow opportunity to engage the readership? The “so what” factor looms large. Understandably, researchers are under significant pressure to publish, and the temptation to produce manuscripts is undeniable. However, our field will be perceived as stagnating if its publications provide no new insights and do not advance science, whether through new methods, novel populations, or original findings. For example, the fact that a population-based study is larger than its predecessors, or was conducted in a different region of a country, does not automatically make its findings novel. The “so what” factor is a particular issue for Ophthalmic Epidemiology because of its interest in international ophthalmology. For example, in the past we published a number of manuscripts on the rapid assessment of cataract surgical services (RACSS). The methods were interesting, and the details on the implementation of the surveys in various locations and the resulting treatment of the data were useful for other investigators or program evaluators who were just beginning. At this time, however, the RACSS surveys often present little new material other than coverage in a different location. The findings are important for the location, and for the programs seeking to improve surgical coverage, but they are less interesting from a scientific point of view, especially when competing with other manuscripts. Our editors and reviewers increasingly note the “so what” factor in their critiques, and it is a question authors should be addressing pre-emptively.

A few other cautionary reflections are worth noting. Editors are concerned when the size of the population or sample is insufficient to study the question of interest. Few things create more of a sinking feeling than reading a manuscript whose introduction poses an important question and whose methods report a sample size clearly too small to answer it. Some editors kindly ask for a power calculation in the hope that the authors will realize the problem; others will reject the study on methodological grounds. All of this effort on everyone’s part can be avoided with a sample size calculation or, if the sample has already been drawn, a power calculation.
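Both calculations take only a few lines. Here is a minimal sketch using statsmodels; the prevalence figures (20% versus 10%) and the group size of 40 are hypothetical, chosen only to show the mechanics.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Cohen's h for detecting a 20% versus 10% prevalence.
effect = proportion_effectsize(0.20, 0.10)
analysis = NormalIndPower()

# Before recruiting: subjects needed per group for 80% power at alpha = 0.05.
n_per_group = analysis.solve_power(effect_size=effect, alpha=0.05, power=0.80)
print(f"needed per group: {n_per_group:.0f}")  # roughly 100 per group

# After the fact: power actually achieved with 40 subjects per group.
power = analysis.solve_power(effect_size=effect, nobs1=40, alpha=0.05)
print(f"power with 40 per group: {power:.2f}")  # well below 0.80
```

A manuscript that reports this sort of calculation up front spares the reviewers the sinking feeling and spares the authors the rejection.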

All epidemiological researchers face the difficulty of recruiting participants and understand the bias that poor response rates can introduce. Good manuscripts present the response rates, show differences between those who did and did not participate, and include a thoughtful discussion of any potential bias that may have resulted and the limitations that ensue. Bad manuscripts ignore response rates or omit the warranted discussion of biases and limitations. In fairness to the authors, though, bad editors shout “bias” based solely on response rates without thinking through whether bias is truly an issue. I recall editing out one reviewer’s concern about the bias resulting from the 10% who did not respond to a survey, when it was clear that even if the 10% were completely different from the 90% who did respond (highly unlikely), the results would be unchanged.
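The arithmetic behind that judgment is worth making explicit. A minimal sketch, with hypothetical numbers, of the extreme-scenario bounds for a 90% response rate:

```python
response_rate = 0.90
p_observed = 0.12   # outcome prevalence among the 90% who responded

# Extreme scenarios: no non-responder has the outcome, or every one does.
low = p_observed * response_rate
high = p_observed * response_rate + (1 - response_rate)
print(f"overall prevalence bounded between {low:.3f} and {high:.3f}")
# -> between 0.108 and 0.208; if the conclusions survive across that
#    whole range, the missing 10% cannot overturn them.
```

Reviewers who run this simple bound before shouting “bias” can tell the cases where non-response matters from the cases where it cannot.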

The presentation of data is like the presentation of a meal: the substance can only be helped by a thoughtful offering. Reviewers are busy folks and become frustrated if the data tables are so complicated or convoluted that the information is lost. Authors are close to their data in most cases and, like proud chefs, wish to display every dish. This tendency should be fought at all costs, with a critical eye kept on the main question and the data that support it. On the other hand, authors should not underestimate the sophistication of the reviewers and gloss over key data tables in a rush to present fancy models. Reviewers, at least good ones, want some idea of the variables that populate these models in order to judge whether the models represent a reasonable approach.

And now a final harangue. With the increasing availability of data analysis packages for personal computers, we see increasing use of sophisticated models that were formerly found only in packages for mainframe computers, largely resident in centers with biostatisticians who knew how to use them properly. The adage “a little knowledge is a dangerous thing” applies to the investigator who knows that the data should have multivariable adjustment, but little else beyond that. The black box in the computer will indiscriminately spew out a model with no thought for its appropriateness. While my personal philosophy tends toward the wider availability of information for all, I also long for more judicious use of that information. As an editor and a researcher, I advocate strongly for seeking the wisdom of a biostatistician when the data outstrip the investigator’s knowledge of how to model them. Reviewers now increasingly demand that the methods section describe why a particular model was chosen and the statistics that justify its use. As Editor-in-Chief, I instituted an ongoing series of methodological papers designed to review new methods and approaches of interest to epidemiologists, in the hope that this would prove helpful.
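One simple form such justification can take is a likelihood ratio test between nested models, showing that an added covariate earns its place. A minimal sketch with simulated data and hypothetical variables, again using statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "smoker": rng.integers(0, 2, n),
})
linpred = -6 + 0.07 * df.age + 0.8 * df.smoker
df["cataract"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(int)

reduced = smf.logit("cataract ~ age", data=df).fit(disp=False)
full = smf.logit("cataract ~ age + smoker", data=df).fit(disp=False)

# The LR statistic is twice the log-likelihood difference; under the null
# it is chi-square distributed with df = number of added parameters (1).
lr = 2 * (full.llf - reduced.llf)
p_value = stats.chi2.sf(lr, df=1)
print(f"LR = {lr:.2f}, p = {p_value:.4f}")
```

A methods section that reports the comparison, rather than only the final black-box output, gives the reviewer exactly the justification this paragraph asks for.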

I would be remiss if I did not extend my heartfelt gratitude to the reviewing editors of the journal, a remarkable group of men and women who clearly care about research and advancing science, and whom I tried not to abuse. I learned a great deal from them. And I learned a great deal from the authors, whose manuscripts I was privileged to read. For about 60% of these manuscripts, the authors and I together gave birth to publications (some labors more painful than others!). For the other 40%, I hope my comments above are helpful and that you all continue to submit your scientific papers to Ophthalmic Epidemiology. For now, my best wishes to the new Editor-in-Chief, as she takes the title and the journal into the future. My hands feel a little empty, but it won’t last.
