ABSTRACT
Algorithms are increasingly used by human resource departments to evaluate employee performance. Although algorithms are perceived as objective and neutral because they remove human biases, they are often perceived as less fair than human managers. This research proposes dignity as an important construct for explaining this discrepancy in perceived fairness and investigates remedial steps for improving dignity and fairness in algorithm-based employee evaluations. Results from three experiments show that employees evaluated by algorithms perceive lower levels of dignity, leading them to believe the process is less fair. In addition, we find that providing justifications for algorithm use in employee evaluations improves perceived dignity. However, human-algorithm collaboration does not enhance perceived dignity.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Correction Statement
This article has been corrected with minor changes. These changes do not impact the academic content of the article.
Notes
1 The data are available at https://github.com/lixuanzhang/dignity.
2 ANOVA tests were used in Studies 2 and 3 even though the data were not normally distributed.
3 More subjects failed to select the correct answer on the question about which scenario they had read in the human-algorithm collaboration conditions than in the algorithm-only and human-only conditions, resulting in fewer subjects in these two conditions.