Abstract
Authors within the fields of cyberpsychology and human–computer interaction have demonstrated a particular interest in measurement and scale creation, and exploratory factor analysis (EFA) is an extremely important statistical method for these areas of research. Unfortunately, EFA requires several statistical and methodological decisions for which the best choices are often unclear. The current article reviews five primary decisions and provides direct suggestions for best practices. These decisions are (a) the data inspection techniques, (b) the factor analytic method, (c) the factor retention method, (d) the factor rotation method, and (e) the factor loading cutoff. The article then reviews authors’ choices for these five EFA decisions in every relevant article within seven cyberpsychology and/or human–computer interaction journals. The results demonstrate that authors do not employ the recommended best practices for most decisions. In particular, most authors fail to inspect their data for violations of assumptions, apply inappropriate factor analytic methods, utilize outdated factor retention methods, and omit justification for their factor rotation methods. Further, many authors fail to report their EFA decisions altogether. To rectify these concerns, the current article provides a step-by-step guide and checklist that authors can reference to ensure the use of recommended best practices. In sum, the current article identifies concerns with current research and provides direct solutions to these concerns.
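As an illustration of the factor-retention decision (c), parallel analysis is widely recommended over older rules such as the eigenvalue-greater-than-one criterion. The sketch below, written for this summary rather than drawn from the article itself, implements Horn's parallel analysis in plain NumPy; the function name and the simulated two-factor data are illustrative assumptions.

```python
# Horn's parallel analysis, a recommended factor-retention method:
# retain factors whose observed eigenvalues exceed the average
# eigenvalues obtained from random data of the same dimensions.
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Return the suggested number of factors for an (n x p) data matrix."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, in descending order.
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Eigenvalues of correlation matrices of random normal data.
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        sim = rng.standard_normal((n, p))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    # Count observed eigenvalues that beat the random-data average.
    return int(np.sum(obs > rand.mean(axis=0)))

# Illustrative check: six items generated from two orthogonal factors,
# three items loading 0.8 on each factor, plus random noise.
rng = np.random.default_rng(1)
scores = rng.standard_normal((500, 2))
loadings = np.array([[0.8, 0.8, 0.8, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.8, 0.8, 0.8]])
items = scores @ loadings + 0.4 * rng.standard_normal((500, 6))
n_factors = parallel_analysis(items)  # recovers the two-factor structure
```

In this simulated example, the first two observed eigenvalues are well above what random data produce, while the remaining four fall below, so the procedure suggests retaining two factors.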
ACKNOWLEDGMENTS
I thank Laura Vrana for her comments on a previous version of this article.
Additional information
Notes on contributors
Matt C. Howard
Matt C. Howard is a PhD candidate at the Pennsylvania State University. His research interests include statistics and methodologies, workplace technologies, and employee training. Please visit MattCHoward.com for more information about the author, his publications, and helpful statistical guides.