Research Articles

Climbing Keyboard: A Tilt-Based Selection Keyboard Entry for Virtual Reality

Pages 1327-1338 | Received 25 Feb 2022, Accepted 01 Nov 2022, Published online: 15 Nov 2022

References

  • Adhikary, J., & Vertanen, K. (2021). Text entry in virtual environments using speech and a midair keyboard. IEEE Transactions on Visualization and Computer Graphics, 27(5), 2648–2658. https://doi.org/10.1109/TVCG.2021.3067776
  • Azenkot, S., & Zhai, S. (2012). Touch behavior with different postures on soft smartphone keyboards. In Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services (pp. 251–260). https://doi.org/10.1145/2371574.2371612
  • Boletsis, C., & Kongsvik, S. (2019). Text input in virtual reality: A preliminary evaluation of the drum-like VR keyboard. Technologies, 7(2), 31. https://doi.org/10.3390/technologies7020031
  • Bowman, D., Wingrave, C., Campbell, J., & Ly, V. (2001). Using pinch gloves (TM) for both natural and abstract interaction techniques in virtual environments. Technical Report TR-01-23, Computer Science, Virginia Tech.
  • Çetiner, M. Y., Yıldırım, A., Onay, B., & Öksüz, C. (2021). Word sense disambiguation using KeNet. In 2021 29th Signal Processing and Communications Applications Conference (SIU) (pp. 1–4). https://doi.org/10.1109/SIU53274.2021.9477816
  • Chang, Y., L’Yi, S., Koh, K., & Seo, J. (2015). Understanding users’ touch behavior on large mobile touch-screens and assisted targeting by tilting gesture. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 1499–1508).
  • Gogoi, A., Baruah, N., & Nath, L. J. (2021). Assamese word sense disambiguation using cuckoo search algorithm. Procedia Computer Science, 189, 142–147. https://doi.org/10.1016/j.procs.2021.05.110
  • González, G., Molina, J. P., García, A. S., Martínez, D., & González, P. (2009). Evaluation of text input techniques in immersive virtual environments. In New Trends on Human–Computer Interaction (pp. 109–118). Springer.
  • Gupta, A., Ji, C., Yeo, H.-S., Quigley, A., & Vogel, D. (2019). Rotoswype: Word-gesture typing using a ring. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–12).
  • Hinckley, K., & Song, H. (2011). Sensor synaesthesia: Touch in motion, and motion in touch. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 801–810).
  • Hinrichs, U., Schmidt, H., Isenberg, T., Hancock, M. S., & Carpendale, S. (2008). Bubbletype: Enabling text entry within a walk-up tabletop installation. Research Report 2008-893-06, Department of Computer Science, University of Calgary.
  • Hoste, L., Dumas, B., & Signer, B. (2012). Speeg: A multimodal speech- and gesture-based text input solution. In Proceedings of the International Working Conference on Advanced Visual Interfaces (pp. 156–163).
  • Huckauf, A., & Urbina, M. H. (2008). Gazing with peyes: Towards a universal input for various applications. In Proceedings of the 2008 Symposium on Eye Tracking Research & Applications (pp. 51–54).
  • Ide, N., & Macleod, C. (2001). The American National Corpus: A standardized resource of American English. In Proceedings of Corpus Linguistics (Vol. 3, pp. 1–7).
  • Jiang, H., & Weng, D. (2020). Hipad: Text entry for head-mounted displays using circular touchpad. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 692–703). https://doi.org/10.1109/VR46266.2020.00092
  • Jimenez, J. G. (2017). A prototype for text input in virtual reality with a swype-like process using a hand-tracking device. [Unpublished doctoral dissertation].
  • Jones, E., Alexander, J., Andreou, A., Irani, P., & Subramanian, S. (2010). Gestext: Accelerometer-based gestural text-entry systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2173–2182).
  • Kaddoura, S., & D. Ahmed, R. (2022). A comprehensive review on Arabic word sense disambiguation for natural language processing applications. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, e1447.
  • Kim, Y. R., & Kim, G. J. (2017). Hovr-type: Smartphone as a typing interface in VR using hovering. In 2017 IEEE International Conference on Consumer Electronics (ICCE) (pp. 200–203). https://doi.org/10.1109/ICCE.2017.7889285
  • Knierim, P., Schwind, V., Feit, A. M., Nieuwenhuizen, F., & Henze, N. (2018). Physical keyboards in virtual reality: Analysis of typing performance and effects of avatar hands. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–9).
  • Lee, K. C., & Chung, N. (2008). Empirical analysis of consumer reaction to the virtual reality shopping mall. Computers in Human Behavior, 24(1), 88–104. https://doi.org/10.1016/j.chb.2007.01.018
  • Lee, Y., & Kim, G. J. (2017). Vitty: Virtual touch typing interface with added finger buttons. In International Conference on Virtual, Augmented and Mixed Reality (pp. 111–119).
  • Levenshtein, V. I. (1966). Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10, 707–710.
  • Liebowitz, S. J., & Margolis, S. E. (1990). The fable of the keys. The Journal of Law and Economics, 33(1), 1–25. https://doi.org/10.1086/467198
  • Lin, J.-W., Han, P.-H., Lee, J.-Y., Chen, Y.-S., Chang, T.-W., & Chen, K.-W. (2017). Visualizing the keyboard in virtual reality for enhancing immersive experience. In ACM SIGGRAPH 2017 Posters (pp. 1–2). https://doi.org/10.1145/3102163.3102175
  • MacKenzie, I. S. (1992). Fitts’ law as a research and design tool in human-computer interaction. Human–Computer Interaction, 7(1), 91–139. https://doi.org/10.1207/s15327051hci0701_3
  • MacKenzie, I. S. (2002). KSPC (keystrokes per character) as a characteristic of text entry techniques. In International Conference on Mobile Human-Computer Interaction (pp. 195–210).
  • MacKenzie, I. S., & Soukoreff, R. W. (2003). Phrase sets for evaluating text entry techniques. In CHI’03 Extended Abstracts on Human Factors in Computing Systems (pp. 754–755). https://doi.org/10.1145/765891.765971
  • Majaranta, P., & Räihä, K.-J. (2007). Text entry by gaze: Utilizing eye-tracking. In Text Entry Systems: Mobility, Accessibility, Universality (pp. 175–187).
  • Mankoff, J., & Abowd, G. D. (1998). Cirrin: A word-level unistroke keyboard for pen input. In Proceedings of the 11th Annual ACM Symposium on User Interface Software and Technology (pp. 213–214).
  • McGill, M., Boland, D., Murray-Smith, R., & Brewster, S. (2015). A dose of reality: Overcoming usability challenges in VR head-mounted displays. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 2143–2152).
  • Olofsson, J. (2017). Input and display of text for virtual reality head-mounted displays and hand-held positionally tracked controllers (Vol. 15). Lulea University of Technology, Department of Computer Science, Electrical and Space Engineering.
  • Partridge, K., Chatterjee, S., Sazawal, V., Borriello, G., & Want, R. (2002). Tilttype: Accelerometer-supported text entry for very small devices. In Proceedings of the 15th Annual ACM Symposium on User Interface Software and Technology (pp. 201–204).
  • Pick, S., Puika, A. S., & Kuhlen, T. W. (2016). Swifter: Design and evaluation of a speech-based text input metaphor for immersive virtual environments. In 2016 IEEE Symposium on 3D User Interfaces (3DUI) (pp. 109–112). https://doi.org/10.1109/3DUI.2016.7460039
  • Proschowsky, M., Schultz, N., & Jacobsen, N. E. (2006). An intuitive text input method for touch wheels. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 467–470).
  • Schneider, D., Otte, A., Gesslein, T., Gagel, P., Kuth, B., Damlakhi, M. S., Dietz, O., Ofek, E., Pahud, M., Kristensson, P. O., Muller, J., & Grubert, J. (2019). Reconviguration: Reconfiguring physical keyboards in virtual reality. IEEE Transactions on Visualization and Computer Graphics, 25(11), 3190–3201. https://doi.org/10.1109/TVCG.2019.2932239
  • Speicher, M., Feit, A. M., Ziegler, P., & Krüger, A. (2018). Selection-based text entry in virtual reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–13).
  • Walker, J., Li, B., Vertanen, K., & Kuhl, S. (2017). Efficient typing on a visually occluded physical keyboard. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 5457–5461).
  • Wigdor, D., & Balakrishnan, R. (2003). Tilttext: Using tilt for text input to mobile phones. In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology (pp. 81–90).
  • Wong, P. C., Zhu, K., & Fu, H. (2018). Fingert9: Leveraging thumb-to-finger interaction for same-side-hand text entry on smartwatches. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–10).
  • Yeo, H.-S., Phang, X.-S., Castellucci, S. J., Kristensson, P. O., & Quigley, A. (2017). Investigating tilt-based gesture keyboard entry for single-handed text entry on large devices. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 4194–4202). https://doi.org/10.1145/3025453.3025520
  • Yi, X., Yu, C., Xu, W., Bi, X., & Shi, Y. (2017). Compass: Rotational keyboard on non-touch smartwatches. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 705–715).
  • Yu, C., Gu, Y., Yang, Z., Yi, X., Luo, H., & Shi, Y. (2017). Tap, dwell or gesture? Exploring head-based text entry techniques for HMDs. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 4479–4488).
  • Yu, D., Fan, K., Zhang, H., Monteiro, D., Xu, W., & Liang, H.-N. (2018). Pizzatext: Text entry for virtual reality systems using dual thumbsticks. IEEE Transactions on Visualization and Computer Graphics, 24(11), 2927–2935. https://doi.org/10.1109/TVCG.2018.2868581
  • Zhao, S., Dragicevic, P., Chignell, M., Balakrishnan, R., & Baudisch, P. (2007). Earpod: Eyes-free menu selection using touch input and reactive audio feedback. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1395–1404).
