ABSTRACT
This article explores the comparability of assessment tools under different format conditions. Prior studies have treated assessment format and device separately, yielding conflicting results, and have not considered how the two interact to affect time to complete an assessment. Using linear regressions on web-based data, this study assesses the performance of multiple devices under varying formats while controlling for non-device factors such as demographic information. The results add to the growing literature on equivalence among the devices and formats used to collect and interpret performance data in a variety of organizational settings.
Disclosure statement
No potential conflict of interest was reported by the authors.
Additional information
Notes on contributors
Robert Mason
Robert Mason is an assistant professor of Economics at Georgia Gwinnett College. His research interests include price theory, the rhetoric of economics, and monetary economics.
Kyle Huff
Kyle Huff is an associate professor of Management at Georgia Gwinnett College. He concentrates on teaching human resource management and organizational behavior. His research interests are around the broad topic of employee assessment. This includes justice perceptions of performance appraisals, threats to the validity of web-based assessment, HR/talent analytics, and indirect measures of psychological constructs.