Research Articles

Dual-Gain Mode of Head-Gaze Interaction Improves the Efficiency of Object Positioning in a 3D Virtual Environment

Cheng-Long Deng, Lei Sun, Chu Zhou & Shu-Guang Kuai
Pages 2067-2082 | Received 28 Nov 2022, Accepted 06 Jun 2023, Published online: 04 Jul 2023

Abstract

Head-gaze interaction is an integral mode of interaction in virtual reality (VR) applications, offering high precision in fine manipulation tasks but low efficiency in large-scale object movements. To improve the efficiency of head-gaze interaction, this study adjusted the control-display gain to compensate for its weaknesses in a long-distance object-positioning task. We investigated the effect of the control-display gain of head-gaze interaction on movement time (MT) in an experiment with 24 participants. The results showed that MT first decreased as the gain increased from 1 to 1.5 and then increased thereafter. Further analysis showed that a high gain improved interaction efficiency in the ballistic phase but reduced it in the corrective phase. To achieve higher interaction efficiency, we designed a dual-gain mode that applies different gains in the ballistic and corrective phases. Evaluated with a second cohort of 24 participants, the dual-gain mode proved more efficient than the mono-gain mode. Moreover, the dual-gain mode with optimal gains did not induce stronger perceptions of inconsistency, confusion, nonacceptance, or motion sickness, and it tended to reduce total workload compared with interaction at normal gain. Our findings offer design insights and guidance for improving the efficiency of head-gaze interaction in virtual spaces.
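To make the dual-gain idea concrete, the following Python sketch shows one possible way such a control-display mapping could be implemented. It is not the authors' implementation: the phase-detection criterion (a head angular-velocity threshold), the class name DualGainMapper, and all numeric values are illustrative assumptions.

```python
# Hypothetical sketch of a dual-gain control-display (C-D) mapping for
# head-gaze pointing. The phase-switch criterion and all numeric values
# are illustrative assumptions, not parameters reported in the article.

from dataclasses import dataclass


@dataclass
class DualGainMapper:
    ballistic_gain: float = 2.0       # assumed high gain for fast, large movements
    corrective_gain: float = 1.0      # assumed low gain for fine adjustments
    velocity_threshold: float = 30.0  # deg/s; assumed ballistic/corrective boundary

    def display_delta(self, head_delta_deg: float, dt: float) -> float:
        """Map a head-rotation increment (degrees) to a cursor increment.

        The movement is treated as ballistic while head angular velocity
        exceeds the threshold, and as corrective otherwise.
        """
        velocity = abs(head_delta_deg) / dt if dt > 0 else 0.0
        gain = (self.ballistic_gain if velocity > self.velocity_threshold
                else self.corrective_gain)
        return gain * head_delta_deg


# Example: a 2-degree head rotation within one 11-ms frame (~180 deg/s) is
# amplified by the ballistic gain; a slow 0.1-degree rotation (~9 deg/s) is not.
mapper = DualGainMapper()
print(mapper.display_delta(2.0, 0.011))  # ballistic phase -> 4.0 degrees on display
print(mapper.display_delta(0.1, 0.011))  # corrective phase -> 0.1 degrees on display
```

In practice, the ballistic/corrective boundary could equally be inferred from distance to the target or from a velocity-peak detector; the sketch only captures the core idea of applying a high control-display gain to fast, large movements and a low gain to fine corrections.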

Acknowledgments

We thank Yu Zhou, Chen-Yu Li, Hong-Xuan Zhang, Yi-Chen Li, and Yue Qu for their help with data collection and analysis.

Disclosure statement

No potential competing interest was reported by the author(s).

Additional information

Funding

This work was supported by the National Natural Science Foundation of China under Grant 32100879 and Grant 32022031; the Basic Research Project of Shanghai Science and Technology Commission under Grant 19JC1410101; and the “Flower of Happiness” Fund Pilot Project of East China Normal University under Grant 2019JK2203.

Notes on contributors

Cheng-Long Deng

Cheng-Long Deng is an associate research fellow at the Institute of Brain and Education Innovation, East China Normal University, China. His research focuses on human-computer interaction in virtual reality (VR), user experience, and engineering psychology.

Lei Sun

Lei Sun is a master’s student at the Department of Applied Psychology, Fudan University. His research focuses on human-computer interaction and engineering psychology in 3D virtual environments.

Chu Zhou

Chu Zhou is a professor of psychology at Fudan University. Her research has explored the nature of human memory and memory distortions, the relation between memory and decision-making, as well as engineering psychology.

Shu-Guang Kuai

Shu-Guang Kuai is a professor and the head of the Visual Cognition and Virtual Reality Application Lab at the School of Psychology and Cognitive Science, East China Normal University. He integrates VR, neuroimaging techniques, and computational modeling to investigate various aspects of human social interaction behavior and human-computer interaction.
