References
- Abrams, R. A., & Christ, S. E. (2003). Motion onset captures attention. Psychological Science, 14(5), 427–432. https://doi.org/10.1111/1467-9280.01458
- Anagnostopoulos, V., Havlena, M., Kiefer, P., Giannopoulos, I., Schindler, K., & Raubal, M. (2017). Gaze-informed location-based services. International Journal of Geographical Information Science, 31(9), 1770–1797. https://doi.org/10.1080/13658816.2017.1334896
- Anguelov, D., Dulong, C., Filip, D., Frueh, C., Lafon, S., Lyon, R., Ogale, A., Vincent, L., & Weaver, J. (2010). Google street view: Capturing the world at street level. Computer, 43(6), 32–38. https://doi.org/10.1109/MC.2010.170
- Badrinarayanan, V., Kendall, A., & Cipolla, R. (2017). SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12), 2481–2495. https://doi.org/10.1109/TPAMI.2016.2644615
- Bay, H., Tuytelaars, T., & Van Gool, L. (2006). SURF: Speeded up robust features. In A. Leonardis, H. Bischof, & A. Pinz (Eds.), ECCV 2006, Part I, LNCS 3951 (pp. 404–417). Springer. https://doi.org/10.1007/11744023_32
- De Beugher, S., Brône, G., & Goedemé, T. (2014). Automatic analysis of in-the-wild mobile eye-tracking experiments using object, face and person detection. In S. Battiato & J. Braz (Eds.), 2014 International Conference on Computer Vision Theory and Applications (VISAPP). IEEE. https://ieeexplore.ieee.org/abstract/document/7294867
- Birmingham, E., Bischof, W. F., & Kingstone, A. (2008). Social attention and real-world scenes: The roles of action, competition and social content. Quarterly Journal of Experimental Psychology, 61(7), 986–998. https://doi.org/10.1080/17470210701410375
- Boisvert, J. F. G., & Bruce, N. D. B. (2016). Predicting task from eye movements: On the importance of spatial distribution, dynamics, and image features. Neurocomputing, 207, 653–668. https://doi.org/10.1016/j.neucom.2016.05.047
- Borji, A., & Itti, L. (2014). Defending Yarbus: Eye movements reveal observers’ task. Journal of Vision, 14(3), 1–22. https://doi.org/10.1167/14.3.29
- Brügger, A., Richter, K.-F., & Fabrikant, S. I. (2018). Which egocentric direction suffers from visual attention during aided wayfinding? In P. Kiefer, I. Giannopoulos, F. Göbel, M. Raubal, & A. T. Duchowski (Eds.), Proceedings of the 3rd International Workshop Eye Tracking for Spatial Research (pp. 22–27). ETH Zurich. https://doi.org/10.3929/ethz-b-000222472
- Calonder, M., Lepetit, V., Strecha, C., & Fua, P. (2010). BRIEF: Binary robust independent elementary features. In K. Daniilidis, P. Maragos, & N. Paragios (Eds.), ECCV 2010, Part IV, LNCS 6314 (pp. 778–792). Springer. https://doi.org/10.1007/978-3-642-15561-1_56
- Cavanagh, P. (2011). Visual cognition. Vision Research, 51(13), 1538–1551. https://doi.org/10.1016/j.visres.2011.01.015
- Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F., & Adam, H. (2018). Encoder-decoder with atrous separable convolution for semantic image segmentation. In V. Ferrari, M. Hebert, C. Sminchisescu, & Y. Weiss (Eds.), Computer Vision – ECCV 2018 (Vol. 11211, pp. 833–851). Springer. https://doi.org/10.1007/978-3-030-01234-2_49
- Cityscapes. (2018). Benchmark suite. Cityscapes Dataset. https://www.cityscapes-dataset.com/benchmarks/
- Claessen, M. H. G., Visser-Meily, J. M. A., De Rooij, N. K., Postma, A., & Van der Ham, I. J. M. (2016). A direct comparison of real-world and virtual navigation performance in chronic stroke patients. Journal of the International Neuropsychological Society, 22(4), 467–477. https://doi.org/10.1017/S1355617715001228
- Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum.
- Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., & Schiele, B. (2016). The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). arXiv:1604.01685
- Cristino, F., & Baddeley, R. (2009). The nature of the visual representations involved in eye movements when walking down the street. Visual Cognition, 17(6–7), 880–903. https://doi.org/10.1080/13506280902834696
- Dong, W., Liao, H., Roth, R. E., & Wang, S. (2014). Eye tracking to explore the potential of enhanced imagery basemaps in web mapping. Cartographic Journal, 51(4), 313–329. https://doi.org/10.1179/1743277413Y.0000000071
- Dong, W., Qin, T., Liao, H., Liu, Y., & Liu, J. (2020). Comparing the roles of landmark visual salience and semantic salience in visual guidance during indoor wayfinding. Cartography and Geographic Information Science, 47(3), 229–243. https://doi.org/10.1080/15230406.2019.1697965
- Ebdon, D. (1985). Statistics in geography: A practical approach-revised with 17 programs. Blackwell.
- Emo, B. (2014). Seeing the axial line: Evidence from wayfinding experiments. Behavioral Sciences, 4(3), 167–180. https://doi.org/10.3390/bs4030167
- ESRI. (2018). How average nearest neighbor works. ArcGIS Desktop. https://desktop.arcgis.com/en/arcmap/10.3/tools/spatial-statistics-toolbox/h-how-average-nearest-neighbor-distance-spatial-st.htm
- Evans, K. M., Jacobs, R. A., Tarduno, J. A., & Pelz, J. B. (2012). Collecting and analyzing eye-tracking data in outdoor environments. Journal of Eye Movement Research, 5(2), 1–19. https://doi.org/10.16910/jemr.5.2.6
- Everingham, M., Eslami, S. A., Van Gool, L., Williams, C. K., Winn, J., & Zisserman, A. (2015). The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1), 98–136. https://doi.org/10.1007/s11263-014-0733-5
- Fotios, S., Uttley, J., Cheal, C., & Hara, N. (2015). Using eye-tracking to identify pedestrians’ critical visual tasks, Part 1: Dual task approach. Lighting Research and Technology, 47(2), 133–148. https://doi.org/10.1177/1477153514522472
- Fotios, S., Uttley, J., & Yang, B. (2015). Using eye-tracking to identify pedestrians’ critical visual tasks, Part 2: Fixation on pedestrians. Lighting Research and Technology, 47(2), 149–160. https://doi.org/10.1177/1477153514522473
- Foulsham, T., Walker, E., & Kingstone, A. (2011). The where, what and when of gaze allocation in the lab and the natural environment. Vision Research, 51(17), 1920–1931. https://doi.org/10.1016/j.visres.2011.07.002
- Franconeri, S. L., & Simons, D. J. (2003). Moving and looming stimuli capture attention. Perception & Psychophysics, 65(7), 999–1010. https://doi.org/10.3758/BF03194829
- Henderson, J. M. (2003). Human gaze control during real-world scene perception. Trends in Cognitive Sciences, 7(11), 498–504. https://doi.org/10.1016/j.tics.2003.09.006
- Hessels, R. S., Niehorster, D. C., Nyström, M., Andersson, R., & Hooge, I. T. C. (2018). Is the eye-movement field confused about fixations and saccades? A survey among 124 researchers. Royal Society Open Science, 5(8), 180502. https://doi.org/10.1098/rsos.180502
- Ishikawa, T., & Montello, D. R. (2006). Spatial knowledge acquisition from direct experience in the environment: Individual differences in the development of metric knowledge and the integration of separately learned places. Cognitive Psychology, 52(2), 93–129. https://doi.org/10.1016/j.cogpsych.2005.08.003
- Just, M. A., & Carpenter, P. A. (1976). Eye fixations and cognitive processes. Cognitive Psychology, 8(4), 441–480. https://doi.org/10.1016/0010-0285(76)90015-3
- Karami, E., Prasad, S., & Shehata, M. (2017). Image matching using SIFT, SURF, BRIEF and ORB: Performance comparison for distorted images. arXiv Preprint arXiv:1710.02726, 1–5. https://arxiv.org/ftp/arxiv/papers/1710/1710.02726.pdf
- Kiefer, P., Giannopoulos, I., & Raubal, M. (2014). Where am I? Investigating map matching during self-localization with mobile eye tracking in an urban environment. Transactions in GIS, 18(5), 660–686. https://doi.org/10.1111/tgis.12067
- Kiefer, P., Giannopoulos, I., Raubal, M., & Duchowski, A. T. (2017). Eye tracking for spatial research: Cognition, computation, challenges. Spatial Cognition & Computation, 17(1–2), 1–19. https://doi.org/10.1080/13875868.2016.1254634
- Kingstone, A. (2009). Taking a real look at social attention. Current Opinion in Neurobiology, 19(1), 52–56. https://doi.org/10.1016/j.conb.2009.05.004
- Koletsis, E., Elzakker, C. P. J. M. V., Kraak, M. J., Cartwright, W., Arrowsmith, C., & Field, K. (2017). An investigation into challenges experienced when route planning, navigating and wayfinding. International Journal of Cartography, 3(1), 4–18. https://doi.org/10.1080/23729333.2017.1300996
- LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://doi.org/10.1038/nature14539
- Liao, H., & Dong, W. (2017). An exploratory study investigating gender effects on using 3D maps for spatial orientation in wayfinding. ISPRS International Journal of Geo-Information, 6(3), 1–19. https://doi.org/10.3390/ijgi6030060
- Liao, H., & Dong, W. (2019). Challenges of using eye tracking to evaluate usability of mobile maps in real environments. Retrieved 1 February, from https://use.icaci.org/wp-content/uploads/2018/11/LiaoDong.pdf
- Liao, H., Dong, W., Huang, H., Gartner, G., & Liu, H. (2019). Inferring user tasks in pedestrian navigation from eye movement data in real-world environments. International Journal of Geographical Information Science, 33(4), 739–763. https://doi.org/10.1080/13658816.2018.1482554
- Liao, H., Dong, W., Peng, C., & Liu, H. (2017). Exploring differences of visual attention in pedestrian navigation when using 2D maps and 3D geo-browsers. Cartography and Geographic Information Science, 44(6), 474–490. https://doi.org/10.1080/15230406.2016.1174886
- Liao, H., Wang, X., Dong, W., & Meng, L. (2019). Measuring the influence of map label density on perceived complexity: A user study using eye tracking. Cartography and Geographic Information Science, 46(3), 210–227. https://doi.org/10.1080/15230406.2018.1434016
- Lin, C. T., Huang, T. Y., Lin, W. J., Chang, S. Y., Lin, Y. H., Ko, L. W., Hung, D. L., & Chang, E. C. (2012). Gender differences in wayfinding in virtual environments with global or local landmarks. Journal of Environmental Psychology, 32(2), 89–96. https://doi.org/10.1016/j.jenvp.2011.12.004
- Lokka, I. E., Çöltekin, A., Wiener, J., Fabrikant, S. I., & Röcke, C. (2018). Virtual environments as memory training devices in navigational tasks for older adults. Scientific Reports, 8(1), 1–15. https://doi.org/10.1038/s41598-018-29029-x
- Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91–110. https://doi.org/10.1023/b:visi.0000029664.99615.94
- Lynch, S. D., Kulpa, R., Meerhoff, L. A., Pettre, J., Cretual, A., & Olivier, A.-H. (2018). Collision avoidance behavior between walkers: Global and local motion cues. IEEE Transactions on Visualization and Computer Graphics, 24(7), 2078–2088. https://doi.org/10.1109/TVCG.2017.2718514
- Marigold, D. S., & Patla, A. E. (2007). Gaze fixation patterns for negotiating complex ground terrain. Neuroscience, 144(1), 302–313. https://doi.org/10.1016/j.neuroscience.2006.09.006
- Muja, M., & Lowe, D. G. (2014). Scalable nearest neighbor algorithms for high dimensional data. IEEE Transactions on Pattern Analysis & Machine Intelligence, 36(11), 2227–2240. https://doi.org/10.1109/TPAMI.2014.2321376
- Ooms, K., De Maeyer, P., Fack, V., Van Assche, E., & Witlox, F. (2012). Interpreting maps through the eyes of expert and novice users. International Journal of Geographical Information Science, 26(10), 1773–1788. https://doi.org/10.1080/13658816.2011.642801
- Popelka, S. (2018). Eye-tracking evaluation of 3D thematic maps. In Proceedings of the 3rd Workshop on Eye Tracking and Visualization (pp. 1–5). ACM. https://doi.org/10.1145/3205929.3205932
- Raubal, M., & Winter, S. (2002). Enriching wayfinding instructions with local landmarks. In M. J. Egenhofer & D. M. Mark (Eds.), GIScience 2002, LNCS 2478 (pp. 243–259). Springer. https://doi.org/10.1007/3-540-45799-2_17
- Rayner, K. (2009). Eye movements and attention in reading, scene perception, and visual search. Quarterly Journal of Experimental Psychology, 62(8), 1457–1506. https://doi.org/10.1080/17470210902816461
- Rey, B., & Alcañiz, M. (2010). Research in neuroscience and virtual reality. In J.-J. Kim (Ed.), Virtual Reality (pp. 377–394). InTech.
- Richardson, A. E., Montello, D. R., & Hegarty, M. (1999). Spatial knowledge acquisition from maps and from navigation in real and virtual environments. Memory & Cognition, 27(4), 741–750. https://doi.org/10.3758/BF03211566
- Richter, K.-F., & Winter, S. (2014). Landmarks: GIScience for intelligent services. Springer. https://doi.org/10.1007/978-3-319-05732-3
- Risko, E., Laidlaw, K., Freeth, M., Foulsham, T., & Kingstone, A. (2012). Social attention with real versus reel stimuli: Toward an empirical approach to concerns about ecological validity [Review]. Frontiers in Human Neuroscience, 6(143), 1–11. https://doi.org/10.3389/fnhum.2012.00143
- Rublee, E., Rabaud, V., Konolige, K., & Bradski, G. (2011). ORB: An efficient alternative to SIFT or SURF. In 2011 IEEE International Conference on Computer Vision (ICCV) (pp. 2564–2571). IEEE. https://doi.org/10.1109/ICCV.2011.6126544
- Salvucci, D. D., & Goldberg, J. H. (2000). Identifying fixations and saccades in eye-tracking protocols. In A. Duchowski (Ed.), Proceedings of the 2000 Symposium on Eye Tracking Research & Applications (pp. 71–78). ACM. https://doi.org/10.1145/355017.355028
- Schwarzkopf, S., von Stülpnagel, R., Büchner, S. J., Konieczny, L., Kallert, G., & Hölscher, C. (2013). What lab eye tracking tells us about wayfinding: A comparison of stationary and mobile eye tracking in a large building scenario. In P. Kiefer, I. Giannopoulos, M. Raubal, & M. Hegarty (Eds.), Eye Tracking for Spatial Research, Proceedings of the 1st International Workshop (pp. 31–36).
- Siegel, A. W., & White, S. H. (1975). The development of spatial representations of large-scale environments. Advances in Child Development and Behavior, 10, 9–55. https://doi.org/10.1016/S0065-2407(08)60007-5
- Slater, M., & Wilbur, S. (1997). A framework for immersive virtual environments (FIVE): Speculations on the role of presence in virtual environments. Presence: Teleoperators & Virtual Environments, 6(6), 603–616. https://doi.org/10.1162/pres.1997.6.6.603
- SMI. (2017). BeGaze manual version 3.7. Retrieved June 9, from www.humre.vu.lt/files/doc/Instrukcijos/SMI/BeGaze2.pdf
- Spiers, H. J., & Maguire, E. A. (2008). The dynamic nature of cognition during wayfinding. Journal of Environmental Psychology, 28(3), 232–249. https://doi.org/10.1016/j.jenvp.2008.02.006
- Steck, S. D., & Mallot, H. A. (2000). The role of global and local landmarks in virtual environment navigation. Presence: Teleoperators & Virtual Environments, 9(1), 69–83. https://doi.org/10.1162/105474600566628
- Van der Ham, I. J., Faber, A. M., Venselaar, M., Van Kreveld, M. J., & Löffler, M. (2015). Ecological validity of virtual environments to assess human navigation ability. Frontiers in Psychology, 6, 637. https://doi.org/10.3389/fpsyg.2015.00637
- Waller, D., Beall, A. C., & Loomis, J. M. (2004). Using virtual environments to assess directional knowledge. Journal of Environmental Psychology, 24(1), 105–116. https://doi.org/10.1016/S0272-4944(03)00051-3
- Warren, W. H., & Hannon, D. J. (1988). Direction of self-motion is perceived from optical flow. Nature, 336(6195), 162–163. https://doi.org/10.1038/336162a0
- Wenczel, F., Hepperle, L., & von Stülpnagel, R. (2017). Gaze behavior during incidental and intentional navigation in an outdoor environment. Spatial Cognition & Computation, 17(1–2), 121–142. https://doi.org/10.1080/13875868.2016.1226838
- Wickens, C. D., & Baker, P. (1995). Cognitive issues in virtual reality. In W. Barfield & T. A. Furness III (Eds.), Virtual environments and advanced interface design (pp. 514–541). Oxford University Press.
- Wolfe, J. M., & Horowitz, T. S. (2004). What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience, 5(6), 495–501. https://doi.org/10.1038/nrn1411
- Yarbus, A. L. (1967). Eye movements and vision (Vol. 2). Plenum Press.
- Zhang, Y., Zheng, X., Hong, W., & Mou, X. (2015). A comparison study of stationary and mobile eye tracking on EXITs design in a wayfinding system. In 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). IEEE. https://doi.org/10.1109/APSIPA.2015.7415350
- Zhao, H., Shi, J., Qi, X., Wang, X., & Jia, J. (2017). Pyramid scene parsing network. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).