ABSTRACT
We investigate the impact of non-cognitive skills on the quality of task-specific outcomes by conducting a quasi-experiment on a well-known online crowdsourcing platform. We show that workers' performance varies with personality traits, gender, human capital, crowdsourcing experience and work effort. Regarding the effects of non-cognitive skills, we find that workers' performance in online microtasks is positively related to extraversion and agreeableness. The positive effect of extraversion persists when performance is adjusted for task completion time. These findings have implications for the integration of selection mechanisms in online labour matching platforms that aim to uncover micro-workers' soft skills in order to improve performance and, consequently, the allocation of resources in online microtasks.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 Online labour markets are organised in platforms whose creators provide the environment for individual-specific payments, screen out users who do not have valid accounts and prevent workers from communicating with each other. Some of the most popular platforms are Mechanical Turk, oDesk, Freelancer, Crowdflower, MobileWorks, ManPower and microWorkers.
2 Unfortunately, we cannot observe the population of micro-workers who saw the announcement of our task, and thus cannot compare two groups of micro-workers (those assigned to the task and those not assigned). Potential self-selection issues may therefore arise: since our online job is skill-specific, a worker's level of English competence may affect both task participation and the quality of results. This self-selection bias cannot be addressed effectively in the present study.
3 In contrast to standard routine online jobs, our task requires a medium level of creativity (Dontcheva, Gerber, and Lewis 2011).
4 Lee and Glass (2011) integrate a series of processing steps into an online transcription job on MTurk in order to better evaluate word-level confidence, while Parent and Eskenazi (2010) had workers participate in an online transcription task in which a large amount of spoken dialogue was subject to speech recognition. Last but not least, Audhkhasi, Georgiou, and Narayanan (2011) asked multiple workers to transcribe speech and utilised the audio for acoustic model adaptation.
5 A complete reference for the questionnaire used can be found in John and Srivastava (1999, 132).
6 Taking into consideration the skill-specific nature of our task, we measured English competence for each participant. Nonetheless, including this variable in the performance equation does not add explanatory power, and its coefficient is statistically insignificant.