Abstract
Traditional computer-aided design (CAD) tools based on keyboard and mouse interaction present challenges to efficiency and quality in terms of free creation, iteration speed, and operational experience. This paper proposes a multimodal interactive guiding grammar framework for virtual-reality modeling scenarios. In studies on the characteristics of multimodal combinations, existing heuristic methods impose a high cognitive load and experimental cost, and the heuristics themselves are difficult to apply. We therefore propose a two-stage heuristic method and use it to decompose the interaction task into two stages: modalities and interactive operations. This provides a clearer and more specific reference for multimodal selection in the interaction design phase, based on the two dimensions of modality and task, and can reduce the heuristic cost and the difficulty users face in completing tasks in multimodal interaction scenarios. Finally, the proposed virtual modeling platform is used for experimental verification. The results show that, compared with traditional modeling, the interactive modeling method built from the modal-interaction findings of this study better satisfies users in terms of emotional experience. In the metric evaluation, high scores were achieved, with an average of 5 points on a 7-point scale (where 1 and 7 are the lowest and highest scores, respectively), and the broad distribution of high scores also indicates a good user experience. To a certain extent, this method improves on the interaction modes of previous 3D modeling tools.
Acknowledgements
We thank the peers and reviewers who provided suggestions on this paper.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Additional information
Notes on contributors
Wen-Jun Hou
Wen-Jun Hou, born in 1964, PhD, professor. Her research interests include natural interaction, information visualization, and interaction design theory and modes.
Ge-Xin Guo
Ge-Xin Guo, born in 1999, PhD candidate. Her research interests include human-computer interaction, natural interaction, and multimodal user interfaces.
Yi-Ting Cheng
Yi-Ting Cheng, born in 1997, MFA. Her research interests include human-computer interaction and multimodal user interfaces.