EDITORIAL

Reading a “negative” study

(Editor-in-Chief)

In environmental and occupational health, studies that show no effect do not usually get much attention when they are published. Later, they may be included in meta-analyses or cited as evidence that there is no causal association with an exposure. Their value therefore tends to be archival rather than current. Journal editors usually do not get excited about publishing them, and they are less likely to be submitted for publication than studies that show at least some suggestion of an effect. This has been a significant problem in some fields, such as the reporting of clinical trials.

Colloquially called “negative studies,” studies that show no effect are often not trusted. Certainly, considerations of low power, intrinsic bias (which, with the exception of confounding, usually favors an underestimate of risk), details of study design (such as criteria used to exclude subjects), and distrust of the motivation of the authors often combine to cast doubt on whether a negative study is truly evidence of no effect. It happens quite often that a small cell, subgroup, or subanalysis will show an elevation, sometimes “statistically significant” (by chance alone, about once in twenty times at p = .05). Isolated borderline positive results lead to debate over whether the study is truly “negative” or whether it is merely underpowered, insufficiently robust in design, or affected by misclassification bias (probably the most common source of bias that underestimates risk). To many colleagues, arguing over subgroup analysis comes dangerously close to cherry-picking results, “p-hacking” (searching across many outcomes for associations that happen to reach conventional statistical significance, however implausible), and overinterpreting statistical variation. On the other hand, ignoring indications of a hidden and plausible elevation in risk is not listening to what the data may be telling us.
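To see the arithmetic behind that “once in twenty,” a minimal simulation helps. The sketch below assumes 20 independent outcomes with no true effect; the counts and seed are hypothetical, chosen only for illustration:

```python
import numpy as np

# Under the null hypothesis, p-values are uniform on [0, 1], so each
# test crosses p < .05 by chance about once in twenty.
rng = np.random.default_rng(42)
n_studies, n_outcomes = 10_000, 20  # hypothetical: 20 outcomes per study

p_values = rng.uniform(size=(n_studies, n_outcomes))
per_test = (p_values < 0.05).mean()
any_hit = (p_values < 0.05).any(axis=1).mean()

print(f"Per-outcome false-positive rate: {per_test:.3f}")        # ~0.050
print(f"Studies with >=1 'significant' outcome: {any_hit:.3f}")  # ~0.64
```

With 20 null outcomes, roughly two studies in three will show at least one “statistically significant” elevation by chance alone, which is why an isolated subgroup signal, on its own, settles nothing.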

When the hypothesis is reasonable and is framed in advance by the reader without knowledge of what a particular study found, is this really cherry-picking, or is the reader in effect testing a hypothesis against findings on a new, or at least unfamiliar, data set provided by the author? For example, if a reader looks for an elevation in a particular cause of death or in disease incidence in a study that is new, or at least one the reader has not reviewed before asking the question, is this not very similar to testing a hypothesis against new data?

The example of firefighters and their occupational cancer risk demonstrates this. For years, studies that appeared to be “negative” concealed evidence of exposure–response relationships and relative elevations in risk that were masked by an overall healthy-worker effect. As well, elevations for individual outcomes were diluted by aggregation in the coding of health outcomes (such as lumping lymphomas, leukemias, and myelomas together), thereby mixing outcomes that did not share the causal association with those that did. It is now clear that what appeared to be inconsistency in the findings of many of these studies was really a failure to appreciate fine-grained detail, as well as the reality of firefighting exposures.[1]

To remind ourselves of how a “negative” study should look, to guide authors in best practice for analyzing data when no obvious association is evident, and for the instruction of students, it is worth looking in detail at a truly negative study, one that by any standard does not show a health effect: what we colloquially call a “cold negative.”

Investigators at the Finnish Cancer Registry have provided just such a cold negative in a recent issue of Occupational Medicine.[2] The study examines mortality from various causes among Finnish ferrochromium and stainless steel production workers, an occupation that has for many years been suspected of carrying an elevated risk of lung cancer due to chromate exposure. The investigators’ work was made much easier because the industry in Finland is limited to one company with little turnover; the company uses fully enclosed technology; the registry has national coverage; and occupational health standards in Finland may be the world's highest. The study was quite large (8,088 participants) and followed the workers from 1971 to 2004, long enough to see a decline in the healthy-worker effect at hire after the plant opened in the 1960s.

The authors reported risk estimates at or below unity for every outcome studied, with significant deficits for all causes and for ischemic heart disease, characteristic of a strong healthy-worker effect. Likewise, the experience of particular departments and operations showed only 2 elevations: a standardized mortality ratio (SMR) of 2.19 (95% CI 0.45–6.39, based on 3 cases) for respiratory diseases among stainless steel melting shop workers, and an SMR of 2.35 resulting from a single case of lung cancer in the hot rolling mill. Two elevations (because whether statistically significant or not, they are elevations) out of about 2 dozen outcomes is a nice demonstration of random effects. The data set might have been more convenient if the 2 elevations had achieved statistical significance (at p < .05), because that would have made the study useful for teaching the expected frequency of “false positive” findings.
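For readers who want to check such an interval themselves, a minimal sketch follows, using the standard exact (Poisson) confidence limits for an SMR via the chi-square distribution. The expected count here is back-calculated from the reported SMR and case count, so it is an assumption for illustration rather than a figure taken from the paper:

```python
from scipy.stats import chi2

def smr_exact_ci(observed: int, expected: float, level: float = 0.95):
    """Exact (Poisson) confidence interval for a standardized mortality ratio."""
    alpha = 1.0 - level
    # Standard chi-square representation of exact Poisson confidence limits.
    lower = chi2.ppf(alpha / 2, 2 * observed) / 2 if observed > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return lower / expected, upper / expected

# 3 observed respiratory deaths; expected count back-calculated from
# SMR = observed / expected = 2.19 (an assumption, for illustration only).
observed, expected = 3, 3 / 2.19
low, high = smr_exact_ci(observed, expected)
print(f"SMR = {observed / expected:.2f}, 95% CI {low:.2f}-{high:.2f}")
# -> SMR = 2.19, 95% CI 0.45-6.40 (close to the reported 0.45-6.39)
```

An interval that spans unity this widely is exactly what 3 cases can support: the point estimate is more than doubled, but the data are equally compatible with a halved risk.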

This study represents a truly cold negative: as cold as it gets. It is, however, inconsistent with other studies (cited in the article) that have shown evidence of a higher (though not usually significant) level of risk for some outcomes among workers in this industry. (The outcomes of concern are principally violence; other outcomes may be suggested by the literature if the healthy-worker effect is taken into account.) The favorable characteristics of the employer and of occupational health regulation in Finland make it clear why this would be the case.

Why spend so much time discussing a negative study? Because studies that are used in standards setting or submitted as evidence in legal actions are subjected to nanoscale levels of scrutiny, but putatively negative studies are not; they rarely receive the same close examination as positive studies. Furthermore, subgroup analysis of negative studies is too often dismissed as “data dredging” or “analytical torture.” Likewise, suggestions of different results in different study settings and populations are often casually dismissed as “inconsistencies” that invalidate recognition of a trend in the findings. It would be considered epidemiological malpractice not to examine “positive” studies with a close reading and a commensurate level of rigor. Why do we so seldom examine negative studies equally closely?

It is worth reminding ourselves what a negative study looks like, and that we should examine studies that do not demonstrate an effect as closely and critically as we do positive studies.

We think we know what a positive study should look like, and we teach our students to look for consistency with similar studies as a marker of validity. However, positive findings are not always obvious in studies, and the literature does not always show the consistency we would like to see. That does not mean that evidence suggesting a hidden positive finding can be dismissed or ignored. If weight of evidence is the standard, should we not be just as critical of negative studies as we are of the positive studies that they appear to contradict?

References

  1. Guidotti TL. Interpreting the literature. In: Guidotti TL, ed. Health Risks and Fair Compensation in the Fire Service. New York, NY: Springer; 2016:41–62.
  2. Huvinen M, Pukkala E. Cause-specific mortality in Finnish ferrochromium and stainless steel production workers. Occup Med. 2016;66:241–246.
