Abstract
Manual assembly in the future Industry 4.0 workplace will place high demands on operators’ cognitive processing. The development of mental workload (MWL) measures is therefore a pressing concern. Physiological measures such as electroencephalography (EEG) show promise, but still lack sufficient reliability when applied in the field. This study presents an alternative measure with substantial ecological validity. First, we developed a behavioural video coding scheme identifying 11 assembly behaviours that potentially reveal excessive MWL. We then explored its validity by analysing videos of 24 participants performing a high-complexity and a low-complexity assembly task. Results showed that five of the identified behaviours, such as freezing and the number of part rotations, differed significantly in occurrence and/or duration between the two conditions. The study thereby proposes a novel and naturalistic method that could help practitioners map and redesign critical assembly phases, and help researchers enrich the validation of MWL measures through measurement triangulation.
Practitioner summary: Current physiological mental workload (MWL) measures still lack sufficient reliability when applied in the field. We therefore identified several observable assembly behaviours that could reveal excessive MWL. The results suggest a method for mapping MWL by observing specific assembly behaviours such as freezing and rotating parts.
Abbreviations: MWL: mental workload; EEG: electroencephalography; fNIRS: functional near infrared spectroscopy; AOI: area of interest; SMI: SensoMotoric Instruments; ETG: Eye-Tracking Glasses; FPS: frames per second; BORIS: Behavioral Observation Research Interactive Software; IRR: inter-rater reliability; SWAT: Subjective Workload Assessment Technique; NASA-TLX: National Aeronautics and Space Administration Task Load Index; EL: emotional load; DSSQ: Dundee Stress State Questionnaire; PHL: physical load; SBO: Strategisch Basis Onderzoek
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 Note that participants had all the time they needed to complete each step, so the length of the data differed per participant.
2 Note that we had 25 participants in total, but for one participant the video was corrupted by a technical malfunction and therefore not usable.
3 Because we did not have the spatial intelligence data for one participant, we ran the analysis with 24 participants to ensure complete comparability.