
Elicitation study investigating hand and foot gesture interaction for immersive maps in augmented reality

Pages 214-228 | Received 08 Jul 2019, Accepted 19 Nov 2019, Published online: 07 Jan 2020

References

  • Adhikarla, V. K., Woźniak, P., Barsi, A., Singhal, D., Kovács, P. T., & Balogh, T. (2014). Freehand interaction with large-scale 3D map data. Proceedings of the 2014 3DTV-conference: The true vision-capture, transmission and display of 3D video (3DTV-CON). Piscataway, NJ: IEEE. doi:10.1109/3DTV.2014.6874711
  • Alexander, J., Han, T., Judd, W., Irani, P., & Subramanian, S. (2012). Putting your best foot forward: Investigating real-world mappings for foot-based gestures. In J.A. Konstan (Ed.) Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12) (pp. 1229–1238). New York: ACM. doi:10.1145/2207676.2208575
  • Andre, A. D., & Wickens, C. D. (1995). When users want what’s not best for them. Ergonomics in Design, 3(4), 10–14. doi:10.1177/106480469500300403
  • Augsten, T., Kaefer, K., Meusel, R., Fetzer, C., Kanitz, D., Stoff, T., … Baudisch, P. (2010). Multitoe: High-precision interaction with back-projected floors based on high-resolution multi-touch input. In K. Perlin (Ed.) Proceedings of the 23rd annual ACM symposium on user interface software and technology (UIST ‘10) (pp. 209–218). New York: ACM. doi:10.1145/1866029.1866064
  • Bailey, R. W. (1993). Performance vs. preference. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 37(4), 282–286. doi:10.1177/154193129303700406
  • Bartoschek, T., Pape, G., Kray, C., Jones, J., & Kauppinen, T. (2014). Gestural interaction with spatiotemporal linked open data. OSGeo Journal, 13(1), 60–67. doi: 10.7275/R57D2SBM
  • Beckhaus, S., Blom, K. J., & Haringer, M. (2005). Intuitive, hands-free travel interfaces for virtual environments. New directions in 3D user interfaces workshop of IEEE VR (pp. 57–60). Bonn, Germany.
  • Boulos, M. N. K., Blanchard, B. J., Walker, C., Montero, J., Tripathy, A., & Gutierrez-Osuna, R. (2011). Web GIS in practice X: A Microsoft Kinect natural user interface for Google Earth navigation. International Journal of Health Geographics, 10(45), 1–14. doi:10.1186/1476-072X-10-45
  • Bowman, D. A., McMahan, R. P., & Ragan, E. D. (2012). Questioning naturalism in 3D user interfaces. Communications of the ACM, 55(9), 78–88. doi:10.1145/2330667.2330687
  • Butscher, S., Hubenschmid, S., Müller, J., Fuchs, J., & Reiterer, H. (2018). Clusters, trends, and outliers: How immersive technologies can facilitate the collaborative analysis of multidimensional data. In R. Mandryk & M. Hancock (Eds.) Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ‘18), paper 90. New York: ACM. doi:10.1145/3173574.3173664
  • Carbonell Carrera, C., & Bermejo Asensio, L. A. (2017). Augmented reality as a digital teaching environment to develop spatial thinking. Cartography and Geographic Information Science, 44(3), 259–270. doi:10.1080/15230406.2016.1145556
  • Chan, L. W., Kao, H. S., Chen, M. Y., Lee, M. S., Hsu, J., & Hung, Y. P. (2010). Touching the void: Direct-touch interaction for intangible displays. In E. Mynatt (Ed.) Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ‘10) (pp. 2625–2634). New York: ACM. doi:10.1145/1753326.1753725
  • Chen, L. C., Cheng, Y. M., Chu, P. Y., & Sandnes, F. E. (2016). The common characteristics of user-defined and mid-air gestures for rotating 3D digital contents. In M. Antona & C. Stephanidis (Eds.) Lecture Notes in Computer Science: Vol. 9738. Proceedings of Universal Access in Human-Computer Interaction: Interaction Techniques and Environments. 10th International Conference UAHCI 2016 (pp. 5–22). Cham, Switzerland: Springer. doi:10.1007/978-3-319-40244-4_2
  • Çöltekin, A., Hempel, J., Brychtova, A., Giannopoulos, I., Stellmach, S., & Dachselt, R. (2016). Gaze and feet as additional input modalities for interacting with geospatial interfaces. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, III-2, 113–120. doi:10.5194/isprsannals-III-2-113-2016
  • Crossan, A., Brewster, S., & Ng, A. (2010). Foot tapping for mobile interaction. Proceedings of the 24th BCS interaction specialist group conference (pp. 418–422). Dundee, UK: British Computer Society.
  • Daiber, F., Schöning, J., & Krüger, A. (2009). Whole body interaction with geospatial data. In A. Butz, B. Fisher, M. Christie, A. Krüger, P. Olivier, & R. Therón (Eds.), Lecture Notes in Computer Science: Vol. 5531. Smart Graphics. SG 2009 (pp. 81–92). Berlin: Springer. doi:10.1007/978-3-642-02115-2_7
  • Danyluk, D., Jenny, B., & Willett, W. (2019). Look-from camera control for 3D terrain maps. In S. Brewster & G. Fitzpatrick (Eds.) Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ‘19), paper 364. New York: ACM. doi:10.1145/3290605.3300594
  • Darken, R. P., Cockayne, W. R., & Carmein, D. (1997). The omni-directional treadmill: A locomotion device for virtual worlds. In G. Robertson & C. Schmandt (Eds.) Proceedings of User Interface Software and Technology (UIST ‘97) (pp. 213–221). New York: ACM. doi:10.1145/263407.263550
  • de Haan, G., Griffith, E. J., & Post, F. H. (2008). Using the Wii Balance Board as a low-cost VR interaction device. In S. Feiner, D. Thalmann & P. Guitton (Eds.) Proceedings of the 2008 ACM Symposium on Virtual Reality Software and Technology (VRST ‘08) (pp. 289–290). New York: ACM. doi:10.1145/1450579.1450657
  • Fabroyir, H., & Teng, W. C. (2018). Navigation in virtual environments using head-mounted displays: Allocentric vs. egocentric behaviors. Computers in Human Behavior, 80, 331–343. doi:10.1016/j.chb.2017.11.033
  • Felberbaum, Y., & Lanir, J. (2018). Better understanding of foot gestures: An elicitation study. In R. Mandryk & M. Hancock (Eds.) Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ‘18). New York: ACM. doi:10.1145/3173574.3173908
  • Findlater, L., Lee, B., & Wobbrock, J. (2012). Beyond QWERTY: Augmenting touch screen keyboards with multi-touch gestures for non-alphanumeric input. In J.A. Konstan (Ed.) Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ‘12) (pp. 2679–2682). New York: ACM. doi:10.1145/2207676.2208660
  • Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378–382. doi:10.1037/h0031619
  • Giannopoulos, I., Komninos, A., & Garofalakis, J. (2017). Interacting with large maps using HMDs in VR settings. In M. Jones & M. Tscheligi (Eds.) Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’17), article 105. New York: ACM. doi:10.1145/3098279.3122148
  • Guy, E., Punpongsanon, P., Iwai, D., Sato, K., & Boubekeur, T. (2015). LazyNav: 3D ground navigation with non-critical body parts. In R. Lindeman, F. Steinicke & B. Thomas (Eds.) Proceedings of 2015 IEEE symposium on 3D user interfaces (3DUI) (pp. 43–50). Piscataway, NJ: IEEE. doi:10.1109/3DUI.2015.7131725
  • Hegarty, M., Smallman, H. S., Stull, A. T., & Canham, M. S. (2009). Naïve cartography: How intuitions about display configuration can hurt performance. Cartographica: the International Journal for Geographic Information and Geovisualization, 44(3), 171–186. doi:10.3138/carto.44.3.171
  • Herman, L., & Stachoň, Z. (2018). Controlling 3D geovisualizations on touch screen—The role of users’ age and gestures intuitiveness. In T. Bandrova & M. Konecný (Eds.), Proceedings of the 7th international conference on cartography and GIS (pp. 473–480). Sofia, Bulgaria: Bulgarian Cartographic Association.
  • Hincapié-Ramos, J. D., Guo, X., Moghadasian, P., & Irani, P. (2014). Consumed endurance: A metric to quantify arm fatigue of mid-air interactions. In M. Jones & P. Palanque (Eds.) Proceedings of the SIGCHI conference on human factors in computing systems (CHI ‘14) (pp. 1063–1072). New York: ACM. doi:10.1145/2556288.2557130
  • Hinckley, K., Pausch, R., Goble, J. C., & Kassell, N. F. (1994). A survey of design issues in spatial input. In P. Szekely (Ed.) Proceedings of the 7th Annual ACM Symposium on User Interface Software and Technology (UIST ’94) (pp. 213–222). New York: ACM. doi:10.1145/192426.192501
  • Hoffmann, E. R. (1991). A comparison of hand and foot movement times. Ergonomics, 34(4), 397–406. doi:10.1080/00140139108967324
  • Jahani, H., & Kavakli, M. (2018). Exploring a user-defined gesture vocabulary for descriptive mid-air interactions. Cognition, Technology & Work, 20(1), 11–22. doi:10.1007/s10111-017-0444-0
  • Jokisch, M., Bartoschek, T., & Schwering, A. (2011). Usability testing of the interaction of novices with a multi-touch table in semi public space. In J. A. Jacko (Ed.), Lecture Notes in Computer Science: Vol. 6762. Human-computer interaction: Interaction techniques and environments (pp. 71–80). Berlin: Springer. doi:10.1007/978-3-642-21605-3_8
  • Kim, T., Ju, H., & Cooperstock, J. R. (2018). Pressure or movement? Usability of multi-functional foot-based interfaces. In I. Koskinen & Y. Lim (Eds.) Proceedings of the 2018 Designing Interactive Systems Conference (DIS ‘18) (pp. 1219–1227). New York: ACM. doi:10.1145/3196709.3196759
  • Klamka, K., Siegel, A., Vogt, S., Göbel, F., Stellmach, S., & Dachselt, R. (2015). Look & pedal: Hands-free navigation in zoomable information spaces through gaze-supported foot input. In Z. Zhang & P. Cohen (Eds.) Proceedings of the International Conference on Multimodal Interaction (ICMI ‘15) (pp. 123–130). New York: ACM. doi:10.1145/2818346.2820751
  • LaViola, J. J., Jr., Kruijff, E., McMahan, R. P., Bowman, D., & Poupyrev, I. (2017). 3D user interfaces: Theory and practice. Boston, MA: Addison-Wesley.
  • Lee, Y. S., & Sohn, B.-S. (2018). Immersive gesture interfaces for navigation of 3D maps in HMD-based mobile virtual environments. Mobile Information Systems, 2018, 1–11. doi:10.1155/2018/2585797
  • Mapbox. (n.d.). Maps SDK for Unity. Retrieved from docs.mapbox.com/unity/maps/overview/
  • Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems, 77(12), 1321–1329.
  • Morris, M. R., Danielescu, A., Drucker, S., Fisher, D., Lee, B., & Wobbrock, J. O. (2014). Reducing legacy bias in gesture elicitation studies. Interactions, 21(3), 40–45. doi:10.1145/2591689
  • Müller, F., McManus, J., Günther, S., Schmitz, M., Mühlhäuser, M., & Funk, M. (2019). Mind the tap: Assessing foot-taps for interacting with head-mounted displays. In S. Brewster & G. Fitzpatrick (Eds.) Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ‘19), paper 477. New York: ACM. doi:10.1145/3290605.3300707
  • Mustafa, Z., Flores, J., & Cotos, J. M. (2018). Multimodal user interaction for GIS applications (MUI_GIS). In C. Manresa-Yee & R. Mas Sansó (Eds.) Proceedings of the XIX International Conference on Human Computer Interaction (Interacción 2018), article 26. New York: ACM. doi:10.1145/3233824.3233855
  • Ortega, F. R., Galvan, A., Tarre, K., Barreto, A., Rishe, N., Bernal, J., … Thomas, J. L. (2017). Gesture elicitation for 3D travel via multi-touch and mid-air systems for procedurally generated pseudo-universe. In M. Marchal, R.J. Teather & B. Thomas (Eds.) Proceedings of the 2017 IEEE symposium on 3D user interfaces (3DUI) (pp. 144–153). Piscataway, NJ: IEEE. doi:10.1109/3DUI.2017.7893331
  • Pakkanen, T., & Raisamo, R. (2004). Appropriateness of foot interaction for non-accurate spatial tasks. In E. Dykstra-Erikson & M. Tscheligi (Eds.) Extended Abstracts on Human Factors in Computing Systems (CHI EA ‘04) (pp. 1123–1126). New York: ACM. doi:10.1145/985921.986004
  • Pearson, G., & Weiser, M. (1986). Of moles and men: The design of foot controls for workstations. ACM SIGCHI Bulletin, 17(4), 333–339. doi:10.1145/22339.22392
  • Pham, T., Vermeulen, J., Tang, A., & MacDonald Vermeulen, L. (2018). Scale impacts elicited gestures for manipulating holograms: Implications for AR gesture design. In I. Koskinen & Y. Lim (Eds.) Proceedings of the 2018 on Designing Interactive Systems Conference (DIS ‘18) (pp. 227–240). New York: ACM. doi:10.1145/3196709.3196719
  • Piumsomboon, T., Clark, A., Billinghurst, M., & Cockburn, A. (2013). User-defined gestures for augmented reality. In S. Brewster & S. Bødker (Eds.) Extended Abstracts on Human Factors in Computing Systems (CHI EA ‘13) (pp. 955–960). New York: ACM. doi:10.1145/2468356.2468527
  • Punpongsanon, P., Guy, E., Iwai, D., Sato, K., & Boubekeur, T. (2016). Extended LazyNav: Virtual 3D ground navigation for large displays and head-mounted displays. IEEE Transactions on Visualization and Computer Graphics, 23(8), 1952–1963. doi:10.1109/TVCG.2016.2586071
  • Rauschert, I., Agrawal, P., Sharma, R., Fuhrmann, S., Brewer, I., & MacEachren, A. (2002). Designing a human-centered, multimodal GIS interface to support emergency management. In K. Makki & M. Pissinou (Eds.) Proceedings of the 10th ACM International Symposium on Advances in Geographic Information Systems (GIS ‘02) (pp. 119–124). New York: ACM. doi:10.1145/585147.585172
  • Roth, R. E. (2013a). An empirically-derived taxonomy of interaction primitives for interactive cartography and geovisualization. IEEE Transactions on Visualization and Computer Graphics, 19(12), 2356–2365. doi:10.1109/TVCG.2013.130
  • Roth, R. E. (2013b). Interactive maps: What we know and what we need to know. Journal of Spatial Information Science, 6(6), 59–115. doi:10.5311/JOSIS.2013.6.105
  • Roupé, M., Bosch-Sijtsema, P., & Johansson, M. (2014). Interactive navigation interface for virtual reality using the human body. Computers, Environment and Urban Systems, 43, 42–50. doi:10.1016/j.compenvurbsys.2013.10.003
  • Rudi, D., Giannopoulos, I., Kiefer, P., Peier, C., & Raubal, M. (2016). Interacting with maps on optical head-mounted displays. In C. Sandor (Ed.) Proceedings of the 2016 Symposium on Spatial User Interaction, (SUI ‘16) (pp. 3–12). New York: ACM. doi:10.1145/2983310.2985747
  • Ruiz, J., & Vogel, D. (2015). Soft-constraints to reduce legacy and performance bias to elicit whole-body gestures with low arm fatigue. In B. Begole & J. Kim (Eds.) Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ‘15) (pp. 3347–3350). New York: ACM. doi:10.1145/2702123.2702583
  • Santos-Torres, A., Zarraonandia, T., Díaz, P., & Aedo, I. (2018). Exploring interaction mechanisms for map interfaces in virtual reality environments. In C. Manresa-Yee & R. Mas Sansó (Eds.) Proceedings of the XIX international conference on human computer interaction (Interacción 2018), article 7. New York: ACM. doi:10.1145/3233824.3233828
  • Satriadi, K. A., Ens, B., Cordeil, M., Czauderna, T., Willett, W., & Jenny, B. (2019). Augmented reality map navigation with freehand gestures. Proceedings of the 26th IEEE conference on virtual reality and 3D user interfaces (VR 2019) (pp. 593–603). doi:10.1109/VR.2019.8798340
  • Schöning, J., Daiber, F., Krüger, A., & Rohs, M. (2009). Using hands and feet to navigate and manipulate spatial data. In D. Olson (Ed.) CHI’09 Extended abstracts on human factors in computing systems (CHI EA ‘09) (pp. 4663–4668). New York: ACM. doi:10.1145/1520340.1520717
  • Simeone, A. L., Velloso, E., Alexander, J., & Gellersen, H. (2014). Feet movement in desktop 3D interaction. Proceedings of the IEEE symposium on 3D user interfaces (3DUI 2014) (pp. 71–74). Piscataway, NJ: IEEE. doi:10.1109/3DUI.2014.6798845
  • Smallman, H. S., & John, M. S. (2005). Naïve realism: Misplaced faith in realistic displays. Ergonomics in Design, 13(3), 6–13. doi:10.1177/106480460501300303
  • Stannus, S., Rolf, D., Lucieer, A., & Chinthammit, W. (2011). Gestural navigation in Google Earth. Proceedings of the 23rd Australian Computer-Human Interaction Conference (pp. 269–272). New York: ACM. doi:10.1145/2071536.2071580
  • Stellmach, S., Jüttner, M., Nywelt, C., Schneider, J., & Dachselt, R. (2012). Investigating freehand pan and zoom. In H. Reiterer & O. Deussen (Eds.), Mensch & Computer 2012 (pp. 303–312). Munich, Germany: Oldenbourg. doi:10.1524/9783486718782.303
  • Tsandilas, T. (2018). Fallacies of agreement: A critical review of consensus assessment methods for gesture elicitation. ACM Transactions on Computer-Human Interaction, 25(3), 1–49. article 18. doi:10.1145/3182168
  • Tscharn, R., Schaper, P., Sauerstein, J., Steinke, S., Stiersdorfer, S., Scheller, C., & Huynh, H. T. (2016). User experience of 3D map navigation: Bare-hand interaction or touchable device? In W. Prinz, J. Borchers, & M. Ziefle (Eds.), Mensch und Computer 2016 (pp. 1–10). Aachen, Germany: Gesellschaft für Informatik. doi:10.18420/muc2016-mci-0167
  • Tse, E., Shen, C., Greenberg, S., & Forlines, C. (2006). Enabling interaction with single user applications through speech and gestures on a multi-user tabletop. In A. Celentano (Ed.) Proceedings of the Working Conference on Advanced Visual Interfaces (AVI ’06) (pp. 336–343). New York: ACM. doi:10.1145/1133265.1133336
  • Valkov, D., Steinicke, F., Bruder, G., & Hinrichs, K. H. (2010). Traveling in 3D virtual environments with foot gestures and a multi-touch enabled WIM. Proceedings of Virtual Reality International Conference (VRIC 2010) (pp. 171–180). https://basilic.informatik.uni-hamburg.de/Publications/2010/VSBH10/42.pdf
  • Vatavu, R.-D., & Wobbrock, J. O. (2015). Formalizing agreement analysis for elicitation studies: New measures, significance test, and toolkit. In B. Begole & J. Kim (Eds.) Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15) (pp. 1325–1334). New York: ACM. doi:10.1145/2702123.2702223
  • Velloso, E., Schmidt, D., Alexander, J., Gellersen, H., & Bulling, A. (2015). The feet in human-computer interaction: A survey of foot-based interaction. ACM Computing Surveys, 48(2), 1–35. article 21. doi:10.1145/2816455
  • Vogiatzidakis, P., & Koutsabasis, P. (2018). Gesture elicitation studies for mid-air interaction: A review. Multimodal Technologies and Interaction, 2(4), 65. doi:10.3390/mti2040065
  • Wigdor, D., & Wixon, D. (2011). Brave NUI world: Designing natural user interfaces for touch and gesture. San Francisco, CA: Morgan Kaufmann.
  • Wobbrock, J. O., Aung, H. H., Rothrock, B., & Myers, B. A. (2005). Maximizing the guessability of symbolic input. Extended Abstracts on Human Factors in Computing Systems (CHI EA ‘05) (pp. 1869–1872). New York: ACM. doi:10.1145/1056808.1057043
  • Wobbrock, J. O., Morris, M. R., & Wilson, A. D. (2009). User-defined gestures for surface computing. In G. van der Veer & C. Gale (Eds.) Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’09) (pp. 1083–1092). New York: ACM. doi:10.1145/1518701.1518866
  • Yan, Y., Yu, C., Ma, X., Yi, X., Sun, K., & Shi, Y. (2018). VirtualGrasp: Leveraging experience of interacting with physical objects to facilitate digital object retrieval. In R. Mandryk & M. Hancock (Eds.) Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ‘18). New York: ACM. doi:10.1145/3173574.3173652
  • Yang, Y., Jenny, B., Dwyer, T., Marriott, K., Chen, H., & Cordeil, M. (2018). Maps and globes in virtual reality. Computer Graphics Forum, 37(3), 427–438. doi:10.1111/cgf.13431
