Accountability in Research
Ethics, Integrity and Policy
Volume 27, 2020 - Issue 7
Research Article

Assessing the perceived prevalence of research fraud among faculty at research-intensive universities in the USA

Pages 457-475 | Published online: 01 Jun 2020
 

ABSTRACT

Survey-based studies on research fraud often feature narrow operationalizations of misbehavior and use limited samples. Such factors potentially hinder the development of strategies aimed at reducing the frequency of wrongdoing among researchers. This study asked full-time faculty members in the natural, social, and applied sciences how frequently six types of research fraud (i.e., data fabrication, plagiarism, data falsification, authorship fraud, publication fraud, and grant fraud) occur in their field of study. These data come from mail and online surveys that were administered to a stratified random sample of tenured and tenure-track faculty members (N = 613) at the top 100 research universities in the United States. Factor-analytic modeling demonstrated that the survey items load on the hypothesized latent constructs and also confirmed the presence of a second-order factor. A specific type of authorship fraud – gift authorship – was perceived to be the most prevalent overall. The least common fraud was a form of data fabrication (i.e., creating data from a study that was never actually conducted). The results were largely consistent with previous studies indicating that serious forms of fraud like data fabrication are relatively rare. Future survey-based studies should pay careful attention to the multidimensional nature of research fraud.
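
As a point of reference, the second-order structure described above can be written generically (this is a sketch of the standard model form, not the study's fitted specification):

\begin{align}
  x    &= \Lambda_{1}\,\eta + \varepsilon, \\
  \eta &= \Gamma\,\xi + \zeta,
\end{align}

where x collects the 26 observed survey items, \eta contains the six first-order fraud factors (data fabrication, plagiarism, data falsification, authorship fraud, publication fraud, and grant fraud), \xi is the single second-order research fraud factor, \Lambda_{1} and \Gamma hold the first- and second-order loadings, and \varepsilon and \zeta are the measurement and structural residuals.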

Acknowledgments

The opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors, and do not necessarily reflect those of the Department of Health and Human Services. A portion of these findings was presented at the 6th World Congress on Research Integrity in Hong Kong, June 2019. The authors would like to thank Katelyn Golladay, Ryan Mays, Susan Metosky, Travis Pratt, and Natasha Pusch for their valuable assistance.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. Judgments regarding the relative seriousness of different forms of research fraud can be somewhat subjective. However, one study provides some direction in this regard. Bouter et al. (2016) found that data fabrication ranked highest among attendees of four World Conferences on Research Integrity in terms of the negative effect it has on trust among scientists.

2. Article retractions and case summaries from the Office of Research Integrity (ORI) are two additional data sources that are used to study research fraud. Both approaches focus on known accounts of misbehavior – either instances of fraud published in refereed journals (see, e.g., Fang, Steen, and Casadevall 2012; Grieneisen and Zhang 2012; Steen 2011) or cases where a finding of misconduct was issued by the federal government (Davis, Riske-Morris, and Diaz 2007; Wright, Titus, and Cornelison 2008). Although both data sources have strengths, neither is able to shed much (if any) light on fraudulent behavior that goes undetected (i.e., the “dark figure” of research fraud; see Hesselmann, Wienefoet, and Reinhart 2014), nor do these studies provide information about compliant researchers.

3. Data fabrication and data falsification involve the misuse of the scientific method. These two misbehaviors were of interest in this study. Accordingly, academic fields that do not regularly employ the scientific method were not included (e.g., mathematics and music).

4. Response rates were calculated after taking into account bad e-mail addresses (n = 139) and incorrect mail addresses (n = 60). The response rate for the mail survey (15.6%) was higher than that for the online survey (8.1%). Although there are advantages to administering questionnaires online, such as relatively quick response times, research has shown that web-based surveys yield lower response rates than postal survey methods (see, e.g., Sebo et al. 2017).

5. Missing survey data were handled using similar response pattern imputation. This procedure is available in PRELIS 2.3 (Scientific Software International, Chicago, IL).
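
As a rough illustration of the logic of similar response pattern imputation (a simplified Python sketch, not the PRELIS procedure itself; the data matrix below is hypothetical), each case with missing items borrows values from the complete case whose observed responses are most similar:

import numpy as np

def similar_response_pattern_impute(X):
    # Impute missing values (np.nan) by copying them from the complete case
    # whose observed responses are closest (minimum Euclidean distance).
    # This is a simplified sketch of the general approach only.
    X = np.asarray(X, dtype=float)
    imputed = X.copy()
    donors = X[~np.isnan(X).any(axis=1)]            # donor pool: complete cases
    for i, row in enumerate(X):
        missing = np.isnan(row)
        if not missing.any() or missing.all():
            continue                                # nothing to impute, or nothing to match on
        dists = np.linalg.norm(donors[:, ~missing] - row[~missing], axis=1)
        imputed[i, missing] = donors[np.argmin(dists), missing]
    return imputed

# Hypothetical example: five respondents, four items, one missing response.
X = [[1, 2, 2, 1],
     [4, 4, 5, 4],
     [1, 2, 2, 2],
     [4, 5, np.nan, 4],
     [2, 2, 1, 1]]
print(similar_response_pattern_impute(X))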

6. A three-step process was followed to develop survey weights. First, design-based weights, equal to the inverse probability of selection (Levy and Lemeshow 1999), were calculated. In each scientific field, 2,000 faculty members were randomly selected from the target population. Each set of 2,000 faculty was then randomly assigned to a recruitment method, with two-thirds assigned to e-mail recruitment (web survey) and one-third to mail recruitment (paper survey). Second, a weighting class adjustment was applied within each combination of scientific field and recruitment method to correct the survey weights for nonresponse (Gary 2007). Third, a composite estimator was used to combine the online and mail respondents (Hartley 1962); it was calculated using the probability of assignment to a recruitment method. After these steps, the survey weights of the sample respondents represented the target population.
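
The arithmetic behind these three steps can be sketched as follows (the population and response counts are hypothetical placeholders, not the study's actual figures):

def final_weight(pop_size, n_assigned_mode, n_responded_mode, p_mode):
    # Weight for one respondent in one field-by-recruitment-method cell.
    #   pop_size         -- faculty in the field's target population (hypothetical)
    #   n_assigned_mode  -- sampled faculty from that field assigned to this method
    #   n_responded_mode -- how many of them completed the survey
    #   p_mode           -- probability of assignment to the method (2/3 web, 1/3 mail)
    design_weight = pop_size / n_assigned_mode                    # Step 1: inverse probability of selection into the cell
    nonresponse_adjustment = n_assigned_mode / n_responded_mode   # Step 2: weighting-class adjustment
    composite_factor = p_mode                                     # Step 3: Hartley-type composite mixing factor
    return design_weight * nonresponse_adjustment * composite_factor

# Hypothetical field: 12,000 faculty in the population, 2,000 sampled,
# 1,333 assigned to the web survey, 108 of whom responded.
print(round(final_weight(12_000, 1_333, 108, 2 / 3), 1))   # ~74.1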

7. Data from the survey have been used in prior investigations of the perceived causes of research misconduct (Holtfreter et al. 2020) and scholars’ preferences for dealing with research misconduct (Pratt et al. 2019).

8. Examples of items that the authors perceived to be too broad include “[d]isagreements about authorship” and “[f]alsifying data,” both of which appear in the Scientific Misconduct Questionnaire – Revised (Broome et al. 2005, 274). The study’s authors concluded that these items (and others like them) conflate different misbehaviors.

9. While these different types of research fraud are treated as mutually exclusive, in actuality some overlap exists. For example, using fictitious data to make a grant proposal more competitive could technically be considered both data fabrication and grant fraud.

10. Two of the items in the grant fraud scale were drawn from Montgomerie and Birkhead's (2005) Scientific Misconduct Questionnaire (i.e., “Using grant funds to attend a conference and then not, or barely, showing up” and “Applying for grants to do work that is already done”).

11. Differences across scientific fields were also assessed using a combined 26-item research fraud scale (summated scale; Cronbach’s α = .926). The results from the one-way ANOVA model did not reveal statistically significant differences among the social, applied, and natural sciences (F-ratio = 1.526, p = .218).
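
For reference, both quantities in this note can be reproduced from a respondent-by-item score matrix; the sketch below uses simulated data purely for illustration (the numbers it prints have nothing to do with the study's results):

import numpy as np
from scipy.stats import f_oneway

def cronbach_alpha(items):
    # Cronbach's alpha for a respondent-by-item score matrix.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated 26-item prevalence ratings for three fields (illustration only).
rng = np.random.default_rng(0)
fields = {name: rng.integers(1, 6, size=(200, 26)) for name in ("social", "natural", "applied")}

all_items = np.vstack(list(fields.values()))
print("Cronbach's alpha:", round(cronbach_alpha(all_items), 3))

summated_scores = [grp.sum(axis=1) for grp in fields.values()]   # one summated scale score per respondent
F, p = f_oneway(*summated_scores)
print("F =", round(F, 3), ", p =", round(p, 3))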

12. Studies of self-reported criminal victimization and criminal involvement (both of which are often rare events) regularly use one-year recall periods (see, e.g., Khade, Wang, and Decker 2018; Wolfe 2015). Whether such a recall period would capture sufficient variation in fraudulent behavior among researchers, however, has yet to be determined.

Additional information

Funding

This work was supported by the U.S. Department of Health and Human Services, Office of Research Integrity [grant numbers ORIIR150018-01-00 and ORIIR160028-04-00].
