Abstract
With the rapid development of natural human-computer interaction technologies, gesture-based interfaces have become popular. Although gesture interaction has received extensive attention from both academia and industry, most existing studies focus on hand gesture input, leaving foot-gesture-based interfaces underexplored, especially in scenarios where the user’s hands are occupied with other tasks, such as washing the hair in smart shower rooms. In such scenarios, users often have to perform interactive tasks (e.g., controlling water volume) with their eyes closed as water and shampoo flow from their head into their eyes. One possible way to address this problem is to use eyes-free (rather than eyes-engaged), foot-gesture-based interaction techniques that allow users to operate the smart shower system without visual involvement. In our online survey, 71.60% of the participants (58/81) reported a need for foot-gesture-based, eyes-free interaction during showers. To this end, we conducted a three-phase study exploring foot-gesture-based, eyes-free interaction in smart shower rooms. We first derived a set of user-defined foot gestures for eyes-free interaction in smart shower rooms. We then proposed a taxonomy for foot gesture interaction. Our findings indicate that end-users preferred single-foot (76.1%), atomic (73.3%), deictic (65.0%), and dynamic (76.1%) foot gestures, which differs markedly from the results reported by previous studies on user-defined hand gestures. In addition, most of the user-defined dynamic foot gestures involved atomic movements perpendicular to the ground (40.1%) or parallel to the ground (27.7%). Finally, we distilled a set of concrete guidelines for foot gesture interfaces based on observations of end-users’ mental models and behaviors when interacting with foot gestures.
Our research can inform the design and development of foot-gesture-based interaction techniques for applications such as smart homes, intelligent vehicles, VR games, and accessibility design.
Acknowledgments
The authors would like to thank the editor and the anonymous reviewers for their insightful comments.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes on contributors
Zhanming Chen
Zhanming Chen is a graduate student at the School of Communication and Design, Sun Yat-Sen University, China. His research interests include human-computer interaction, elicitation studies, and usability engineering. He obtained a Bachelor’s degree in Marketing from Sun Yat-Sen University, Guangzhou, China, in 2019.
Huawei Tu
Huawei Tu is an Assistant Professor at La Trobe University, Australia. His research area is human-computer interaction, with special interests in multimodal interaction and user interface design. He has published more than 30 research papers, including papers in top-tier HCI journals (e.g., ACM TOCHI) and conferences (e.g., ACM CHI).
Huiyue Wu
Huiyue Wu is a Full Professor at Sun Yat-Sen University, Guangzhou, China, where he is also the director of the HCI Laboratory. He is the author of five books and more than 40 publications in the field of HCI (e.g., IJHCS, IJHCI). His research interests include human-computer interaction and virtual reality.