References
- Anthony, L., & Wobbrock, J. O. (2012). $N-Protractor: A fast and accurate multistroke recognizer. In Proceedings of graphics interface (pp. 117–120), Toronto, ON.
- Buchanan, S., Bourke Floyd, C. H., IV, & Holderness, W. (2013). Towards user-defined multi-touch gestures for 3D objects. In Proceedings of the 2013 ACM international conference on interactive tabletops and surfaces (pp. 231–240), St. Andrews, UK.
- Chen, L., Chen, D. Y., & Chen, X. (2018a). BackAssist: Augmenting mobile touch manipulation with back-of-device assistance. IEICE Transactions on Information and Systems, E101-D(6), 1682–1685. doi:10.1587/transinf.2017EDL8209
- Chen, Z., Ma, X. C., Zhou, Y., Yao, M. G., Ma, Z., Wang, C., … Shen, M. W. (2018b). User-defined gestures for gestural interaction: Extending from hands to other body parts. International Journal of Human-Computer Interaction, 34, 238–250. doi:10.1080/10447318.2017.1342943
- Efron, D. (1941). Gesture and environment. Morningside Heights, New York: King’s Crown Press.
- Fuccella, V., & Costagliola, G. (2015). Unistroke gesture recognition through polyline approximation and alignment. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 3351–3354), Seoul, Republic of Korea.
- Grijincu, D., Nacenta, M. A., & Kristensson, P. O. (2014). User-defined interface gestures: Dataset and analysis. In Proceedings of the 2014 ACM international conference on interactive tabletops and surfaces (pp. 25–34), Dresden, Germany.
- Hutchins, E., Hollan, J., & Norman, D. (1985). Direct manipulation interfaces. Human-Computer Interaction, 1, 311–338. doi:10.1207/s15327051hci0104_2
- Siek, K. A., Rogers, Y., & Connelly, K. H. (2005). Fat finger worries: How older and younger users physically interact with PDAs. In Proceedings of the 2005 IFIP TC 13 international conference on human-computer interaction (pp. 267–280), Rome, Italy.
- Kendon, A. (1988). How gestures can become like words. In F. Poyatos (Ed.), Cross-cultural perspectives in nonverbal communication (pp. 131–141). Toronto, ON: C. J. Hogrefe.
- Kistler, F., & André, E. (2013). User-defined body gestures for an interactive storytelling scenario. In Interact’13 (pp. 264–281), Cape Town, South Africa.
- Kratz, S., & Rohs, M. (2010). A $3 gesture recognizer – Simple gesture recognition for devices equipped with 3D acceleration sensors. In Proceedings of the 15th international conference on intelligent user interfaces (pp. 341–344), Hong Kong, China.
- Kray, C., Nesbitt, D., & Rohs, M. (2010). User-defined gestures for connecting mobile phones, public displays, and tabletops. In Proceedings of the 12th international conference on human computer interaction with mobile devices and services (pp. 239–248), Lisbon, Portugal.
- Kurdyukova, E., Redlin, M., & André, E. (2012). Studying user-defined iPad gestures for interaction in multi-display environment. In Proceedings of the 17th international conference on intelligent user interfaces (pp. 93–96), Lisbon, Portugal.
- Le, H. V., Mayer, S., Wolf, K., & Henze, N. (2016). Finger placement and hand grasp during smartphone interaction. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 2576–2584), San Jose, CA.
- McNeill, D. (1992). Hand and mind: What gestures reveal about thought. Chicago, IL: University of Chicago Press.
- Morris, M. R., Danielescu, A., Drucker, S., Fisher, D., Lee, B., Schraefel, M. C., & Wobbrock, J. O. (2014). Reducing legacy bias in gesture elicitation studies. Interactions, 21(3), 40–45. doi:10.1145/2591689
- Obaid, M., Häring, M., Kistler, F., Bühling, R., & André, E. (2012). User-defined body gestures for navigational control of a humanoid robot. In Proceedings of the 4th international conference on social robotics (pp. 367–377), Chengdu, China.
- Ohtani, T., Hashida, T., Kakehi, Y., & Naemura, T. (2011). Comparison of front touch and back touch while using transparent double-sided touch display. In ACM Siggraph’11, Vancouver, BC. Article No. 42.
- Rädle, R., Jetter, H. C., Schreiner, M., Lu, Z. H., Reiterer, H., & Rogers, Y. (2015). Spatially-aware or spatially-agnostic? Elicitation and evaluation of user-defined cross-device interactions. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 3913–3922), Seoul, Republic of Korea.
- Ruiz, J., Li, Y., & Lank, E. (2011). User-defined motion gestures for mobile interaction. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 197–206), Vancouver, BC.
- Tung, Y. C., Hsu, C. Y., Wang, H. Y., Chyou, S., Lin, J. W., Wu, P. J., … Chen, M. Y. (2015). User-defined game input for smart glasses in public space. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 3327–3336), Seoul, Republic of Korea.
- Valdes, C., Eastman, D., Grote, C., Thatte, S., Shaer, O., Mazalek, A., … Konkel, M. K. (2014). Exploring the design space of gestural interaction with active tokens through user-defined gestures. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 4107–4116), Toronto, ON.
- Vatavu, R. D., Anthony, L., & Wobbrock, J. O. (2012). Gestures as point clouds: A $P recognizer for user interface prototypes. In Proceedings of the 14th ACM international conference on multimodal interaction (pp. 273–280), Santa Monica, CA.
- Vatavu, R. D., & Wobbrock, J. O. (2015). Formalizing agreement analysis for elicitation studies: New measures, significance test, and toolkit. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 1325–1334), Seoul, Republic of Korea.
- Wobbrock, J. O., Aung, H. H., Rothrock, B., & Myers, B. A. (2005). Maximizing the guessability of symbolic input. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 1869–1872), Portland, OR.
- Wobbrock, J. O., Morris, M. R., & Wilson, A. D. (2009). User-defined gestures for surface computing. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 1083–1092), Boston, MA.
- Wobbrock, J. O., Myers, B. A., & Aung, H. H. (2008). The performance of hand postures in front- and back-of-device interaction for mobile computing. International Journal of Human-Computer Studies, 66(12), 857–875. doi:10.1016/j.ijhcs.2008.03.004
- Wobbrock, J. O., Wilson, A. D., & Li, Y. (2007). Gestures without libraries, toolkits or training: A $1 recognizer for user interface prototypes. In Proceedings of the 20th annual ACM symposium on user interface software and technology (pp. 159–168), Newport, RI.
- Wolf, K., Schleicher, R., & Rohs, M. (2014). Ergonomic characteristics of gestures for front- and back-of-tablets interaction with grasping hands. In Proceedings of the 16th international conference on human computer interaction with mobile devices and services (pp. 23–26), Toronto, ON.
- Wu, H. Y., Zhang, S. K., Qiu, J. L., Liu, J. Y., & Zhang, X. L. (2018). The gesture disagreement problem in freehand gesture interaction. International Journal of Human-Computer Interaction. Advance online publication. doi:10.1080/10447318.2018.1510607
- Zhang, C., Parnami, A., Southern, C., Thomaz, E., Gabriel, R., Arriaga, R. I., & Abowd, G. D. (2013). BackTap: Robust four-point tapping on the back of an off-the-shelf smartphone. In Proceedings of the 26th annual ACM symposium on user interface software and technology (pp. 111–112), St. Andrews, UK.