Accountability in Research
Ethics, Integrity and Policy
Volume 29, 2022 - Issue 5
Research Article

Toward the development of a perceived IRB violation scale

Pages 309-323 | Published online: 05 May 2021
 

ABSTRACT

This study introduces survey items that can be used to assess the perceived prevalence of specific IRB violations by researchers or to gauge the perceived seriousness of such infractions. Using survey data from tenured and tenure-track faculty at research-intensive universities, the descriptive findings showed that failing to store data properly and neglecting to maintain project records were the violations that sample members perceived to be most widespread. Although comparatively less definitive, the results also showed that problems with data storage and record keeping were perceived to be relatively serious violations. As for scaling, the results from the exploratory factor analyses showed that both the prevalence and seriousness scales were unidimensional. These findings support the practice of providing researchers with services for storing project data and records. Finally, the IRB violation scale developed in this study can be used by research integrity professionals to assess faculty perceptions at their universities.

Acknowledgment

The opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors, and do not necessarily reflect those of the Department of Health and Human Services. The authors would like to thank Marcus Berzofsky, Katelyn Golladay, Ryan Mays, Travis Pratt, and Natasha Pusch for their assistance. Special thanks to Susan Metosky whose subject matter expertise helped to improve the quality of this project.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. This study was part of a larger project that entailed the administration of both online and mail surveys. Data from the mail surveys were not used in this study because members of the research team were unable to determine with reasonable certainty whether the IRB survey items were applicable to participants’ research activities. Specifically, the IRB items did not include a “not applicable” option on the mail survey. Data from the online and mail survey portions of this project have been used previously in published articles on various research fraud-related topics: the perceived prevalence of research fraud (Reisig, Holtfreter, and Berzofsky 2020), the perceived causes of research misconduct (Holtfreter et al. 2020), and the perceived utility of various responses to research misconduct (Pratt et al. 2019).

2. The research team also explored whether perceptions of IRB violations varied significantly across scientific fields. The results from the six one-way ANOVA models (one model per survey item) showed that mean scores for perceived prevalence did not vary across scientific fields. However, differences by scientific field were observed for perceived seriousness: four of the six ANOVA models indicated meaningful differences. Bonferroni post hoc comparison tests showed that mean scores were higher for the applied sciences than for the natural sciences for data storage violations and the failure to maintain proper records. The mean score for the applied sciences was significantly higher than for the social sciences for failing to renew ongoing research. Finally, the mean score for the applied sciences was higher than for both the social and natural sciences for the perceived seriousness of beginning data collection prior to receiving IRB approval.
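The note does not report the software used; the following is a minimal Python sketch of how such an analysis might be run, assuming a hypothetical pandas DataFrame df with a "field" column (applied, social, natural) and one column per perceived-seriousness item. The function name and column names are illustrative, not taken from the study.

```python
# Sketch only: one-way ANOVA per survey item, with Bonferroni-corrected
# pairwise comparisons when the omnibus test is significant.
from itertools import combinations

import pandas as pd
from scipy import stats


def anova_with_bonferroni(df: pd.DataFrame, item: str, group_col: str = "field"):
    """One-way ANOVA across scientific fields for a single survey item,
    followed by Bonferroni-corrected pairwise t-tests."""
    groups = {name: g[item].dropna() for name, g in df.groupby(group_col)}
    f_stat, p_value = stats.f_oneway(*groups.values())
    results = {"F": f_stat, "p": p_value, "pairwise": {}}
    if p_value < 0.05:
        pairs = list(combinations(groups, 2))
        alpha_adj = 0.05 / len(pairs)  # Bonferroni adjustment
        for a, b in pairs:
            t, p = stats.ttest_ind(groups[a], groups[b])
            results["pairwise"][(a, b)] = {"t": t, "p": p, "sig": p < alpha_adj}
    return results
```

Running this once per item would reproduce the structure of the six ANOVA models described above, with the pairwise comparisons flagging which field contrasts drive any significant omnibus result.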

3. The factor models used listwise deletion to handle missing cases. This procedure resulted in the number of cases dropping below 200 in both models. Under such conditions, it is reasonable to ask whether there are enough cases to estimate factor models. Based on their simulations, MacCallum et al. (1999) advise that the number of necessary cases is largely determined by the magnitude of the communality estimates. More specifically, the authors found that when communalities were around 0.50, samples of 100 to 200 cases were sufficiently large. The communalities associated with the models (both initial and extraction) met MacCallum et al.’s criteria. Accordingly, it was concluded that the sample size was sufficiently large to estimate stable factor models.
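For readers who want to replicate this kind of check, here is a minimal Python sketch using the factor_analyzer package (the paper does not specify its software, so this is an assumed toolchain). It assumes a hypothetical DataFrame df whose columns item_cols hold the six IRB-violation items for one scale; listwise deletion and the communality inspection mirror the procedure described in the note.

```python
# Sketch only: one-factor EFA on listwise-deleted data, with the case
# count and communalities needed to apply MacCallum et al.'s (1999)
# sample-size guidance.
import pandas as pd
from factor_analyzer import FactorAnalyzer


def unidimensional_efa(df: pd.DataFrame, item_cols: list[str]):
    """Fit a one-factor solution and report loadings, communalities,
    and eigenvalues for a unidimensionality check."""
    complete = df[item_cols].dropna()  # listwise deletion
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(complete)
    return {
        # MacCallum et al. (1999): 100-200 cases can suffice when
        # communalities are around 0.50.
        "n_cases": len(complete),
        "loadings": fa.loadings_,
        "communalities": fa.get_communalities(),
        "eigenvalues": fa.get_eigenvalues()[0],
    }
```

A dominant first eigenvalue with all items loading on the single factor would support the unidimensionality conclusion reported in the abstract; communalities near or above 0.50 would justify treating the sub-200 sample as adequate.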

Additional information

Funding

This research was supported in part by grants from the United States Department of Health and Human Services, Office of Research Integrity (Grant Nos. ORIIR160028-04-00 and ORIIR150018-01).

