
Scholars’ preferred solutions for research misconduct: results from a survey of faculty members at America’s top 100 research universities

Pages 510-530 | Published online: 16 May 2019

Abstract

Research misconduct is harmful because it threatens public health and public safety, and also undermines public confidence in science. Efforts to eradicate ongoing and prevent future misconduct are numerous and varied, yet the question of “what works” remains largely unanswered. To shed light on this issue, this study used data from both mail and online surveys administered to a stratified random sample of tenured and tenure-track faculty members (N = 613) in the social, natural, and applied sciences at America’s top 100 research universities. Participants were asked to gauge the effectiveness of various intervention strategies: formal sanctions (professional and legal), informal sanctions (peers), prevention efforts (ethics and professional training), and reducing the pressures associated with working in research-intensive units. Results indicated that (1) formal sanctions received the highest level of support, (2) female scholars and researchers working in the applied sciences favored formal sanctions, and (3) a nontrivial portion of the sample supported an integrated approach that combined elements of different strategies. A key takeaway for university administrators is that a multifaceted approach to dealing with the problem of research misconduct, which prominently features enhanced formal sanctions, will be met with the support of university faculty.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1 Phillips et al.’s (Citation2013) ranking is based on nine performance measures: federal research expenditures; total research spending; endowment size; annual fundraising; National Academy of Sciences members; faculty awards; number of post-docs; number of doctoral degrees awarded; and median standardized student test scores (i.e., measure of undergraduate student quality).

2 The survey data used for this study were part of a larger project that included scholars’ perceptions of research misconduct. The project focused on plagiarism, grant fraud, and publication fraud, which are types of misconduct that can also be found in the formal sciences and humanities. However, the project was also very interested in data fabrication and data falsification, which are forms of misconduct that necessitate the (mis)use of the scientific method. Because of this interest, the scope of the larger project was restricted to the social, natural, and applied sciences. This narrow scope should be taken into consideration when thinking about the potential applications of the findings from this study.

3 The higher response rate for the mail survey was consistent with recent research demonstrating that although online surveys are beneficial in many ways (e.g., lower cost and faster response time than mail surveys), the response rates for such modes are lower when compared to mail surveys (Sebo et al., Citation2017).

4 This factor-analytic approach used mean and variance adjusted weighted least-squares (WLSMV; DiStefano & Morgan, Citation2014), which was available in Mplus version 6.11 (Muthén & Muthén, Los Angeles, CA). An additional feature of Mplus is that it has procedures for handling missing values (e.g., multiple imputation, or MI; see Asparouhov & Muthén, Citation2010, for technical details). To be clear, this method was used during the estimation of the factor models, thus resulting in a slight increase in the number of cases available for analysis.

5 This decision was based on the results from a one-way MANOVA model showing the mean scores for the perceived solutions for research misconduct scales were nearly indistinguishable for Professors and Distinguished Professors, hence empirically justifying collapsing the two categories. Also, the mean scores for this more senior group were different relative to lower ranking faculty, which supports using the senior group as the comparison group.

6 Note that a one-way MANOVA model showed the mean scores for the social sciences were consistently different than the means from the natural and applied sciences, thus supporting the decision to use the former as the reference category.

7 Of the 15,325 cells included in the data file for this study, 337 (or 2.2%) had missing values. Because similar response pattern imputation (SRPI) has been found to work well with ordinal survey data (Jönsson & Wohlin, Citation2006), some missing values were replaced using the SRPI function in Prelis 2.3 (Scientific Software International, Chicago, IL). After the imputation process, 264 cells remained missing (or 1.7%). These cases were excluded using listwise deletion in the multivariate analyses.
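The core idea of similar response pattern imputation, as described in note 7, can be sketched as follows. This is a minimal illustration, not Prelis's implementation: for each incomplete case, the donor is the complete case that minimizes the squared distance over the items the two cases share, and the donor's values fill the gaps. The function name and data are invented for the example.

```python
import numpy as np

def srpi_impute(data: np.ndarray) -> np.ndarray:
    """Fill np.nan cells with values from the most similar complete row.

    Similarity is squared Euclidean distance computed only over the
    items that the incomplete row actually answered.
    """
    out = data.copy()
    # Donor pool: rows with no missing values (boolean indexing copies,
    # so freshly imputed rows never become donors).
    complete = data[~np.isnan(data).any(axis=1)]
    for row in out:  # each `row` is a view into `out`
        miss = np.isnan(row)
        if not miss.any() or len(complete) == 0:
            continue
        dists = ((complete[:, ~miss] - row[~miss]) ** 2).sum(axis=1)
        donor = complete[np.argmin(dists)]
        row[miss] = donor[miss]  # copy the donor's answers into the gaps
    return out

# Hypothetical 5-point ordinal items; respondent 1 skipped the third item.
X = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, np.nan],
              [4.0, 5.0, 5.0]])
Y = srpi_impute(X)  # respondent 1 inherits item 3 from respondent 0
```

Cells still missing after such a pass (e.g., when no suitable donor exists) would then be handled by listwise deletion, as the note describes.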

8 The 20 mean scores presented in the table were recalculated using survey weights. To develop the survey weights, design-based weights were created that were equal to the inverse probability of selection (Levy & Lemeshow, Citation1999). Next, the survey weights were corrected for nonresponse using a weighting class adjustment for each scientific field (i.e., social, natural, and applied) and survey method (i.e., mail and online; Gary, Citation2007). Finally, a composite indicator, which was based on the probability of assignment to one of the two survey modes, was applied (Hartley, Citation1962). The resulting survey weights allow inferences to be made to the full population of tenured and tenure-track faculty across the three scientific fields at the 100 research-intensive universities included in the sample. The weighted results were very similar to the unweighted mean scores. Indeed, all 20 of the mean scores fell within the 95% confidence intervals for the results that were observed using survey weights (weighted n = 164,036). In short, the mean scores provided in the table are accurate and precise.
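The first two weighting steps in note 8 can be made concrete with a short sketch. All field names, selection probabilities, and counts below are hypothetical; the composite mode-assignment indicator (Hartley, 1962) is omitted for brevity.

```python
import pandas as pd

# One row per weighting class (scientific field x survey mode).
# p_select, n_sampled, and n_responded are invented for illustration.
frame = pd.DataFrame({
    "field":       ["social", "social", "natural", "natural", "applied", "applied"],
    "mode":        ["mail", "online", "mail", "online", "mail", "online"],
    "p_select":    [0.004, 0.004, 0.003, 0.003, 0.005, 0.005],
    "n_sampled":   [120, 120, 110, 110, 90, 90],
    "n_responded": [70, 40, 60, 35, 50, 30],
})

# Step 1: design-based weight = inverse probability of selection.
frame["w_design"] = 1.0 / frame["p_select"]

# Step 2: weighting-class nonresponse adjustment -- inflate each class's
# weight by (invited / responded) so respondents stand in for nonrespondents.
frame["w_final"] = frame["w_design"] * (frame["n_sampled"] / frame["n_responded"])
```

A weighted mean of any survey item is then sum(w * y) / sum(w) over respondents, which is how the weighted scores would be compared against the unweighted ones.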

9 The fit indices used included the Bayesian Information Criterion (BIC), the sample-size adjusted BIC, and the Akaike Information Criterion (AIC).

10 It should be noted that the online survey was administered during and after the 2016 U.S. Presidential election. During this time there was considerable media coverage of the Democratic National Committee’s computer system being hacked via a phishing email (i.e., an unsolicited email that induces the recipient to click a malicious link; see Lipton, Sanger, & Shane, Citation2016). The impact that such news had on the response rate is unknown, though likely not trivial.

Additional information

Funding

This work was supported by the United States Department of Health and Human Services, Office of Research Integrity (Grant Nos. ORIIR160028-04-00 and ORIIR150018-01). The opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors, and do not necessarily reflect those of the Department of Health and Human Services. The authors would like to thank Marcus Berzofsky and Susan Metosky for their helpful comments.
