Abstract
Creating sketches on AR glasses is challenging because of their limited native interaction area. Existing solutions expand the interaction space by using mobile devices (e.g., tablets) or mid-air hand gestures as 2D/3D sketching input interfaces for AR glasses. Mobile devices allow accurate sketching but are cumbersome to carry, while sketching with bare hands imposes no carrying burden but can be inaccurate due to arm instability. In addition, mid-air sketching can easily cause social misunderstandings, and its prolonged use leads to arm fatigue. In this work, we present WristSketcher, a new AR system based on a flexible sensing wristband that enables users to place multiple virtual planar canvases in the real environment and create 2D dynamic sketches on them, featuring a nearly zero-burden authoring model for accurate and comfortable sketch creation in real-world scenarios. Specifically, we moved the interaction space from mid-air to the surface of a lightweight sensing wristband, and implemented AR sketching and the associated interaction commands by developing a gesture recognition method based on the sensed pressure points. Based on a heuristic study with 26 participants, we designed a set of interactive gestures consisting of Long Press, Tap, and Double Tap, which are mapped to various interaction commands using a combination of multi-touch and hotspots. Moreover, we endowed WristSketcher with animation creation capabilities, allowing users to create dynamic and expressive sketches. Experimental results demonstrate that WristSketcher (i) recognizes users’ gesture interactions with a high accuracy of 95.9%; (ii) achieves higher sketching accuracy than freehand sketching; (iii) achieves high user satisfaction in ease of use, usability, and functionality; and (iv) shows innovative potential in art creation, memory aids, and entertainment applications.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Additional information
Notes on contributors
Enting Ying
Enting Ying is currently working toward the master’s degree at the School of Informatics, Xiamen University, Xiamen, China, supervised by Professor Shihui Guo and Associate Professor Ming Qiu. His research interests include human–computer interaction and virtual reality.
Tianyang Xiong
Tianyang Xiong is currently working toward the master’s degree at the School of Informatics, Xiamen University, Xiamen, China, supervised by Professor Shihui Guo and Associate Professor Ming Qiu. His research interests include human–computer interaction and virtual reality.
Gaoxiang Zhu
Gaoxiang Zhu received the bachelor’s degree in mechanical design, manufacturing and automation from Hunan University of Arts and Science, Hunan, China, in 2011. He currently works at Hunan Huanan Optoelectronics (Group) Co., Ltd., where he is engaged in the demonstration of scientific research projects. His research interests include 3D modeling.
Ming Qiu
Ming Qiu received the PhD degree from Zhejiang University, Hangzhou, China, in 2006. He is currently an assistant professor at the School of Informatics, Xiamen University. His research interests include intelligent information processing, the Semantic Web, and ontologies.
Yipeng Qin
Yipeng Qin received the BSc degree in electronic engineering from Shanghai Jiao Tong University, China, and the PhD degree in computer science from the National Centre for Computer Animation, Bournemouth University, UK. He is currently an Associate Professor at the School of Computer Science and Informatics, Cardiff University, UK.
Shihui Guo
Shihui Guo received the BS degree from Peking University, Beijing, China, and the PhD degree in computer animation from the National Centre for Computer Animation, Bournemouth University, Poole, UK. He is currently an associate professor at the School of Informatics, Xiamen University. His research interests include virtual reality.