
Comparing the roles of landmark visual salience and semantic salience in visual guidance during indoor wayfinding

Pages 229-243 | Received 20 Jul 2019, Accepted 23 Nov 2019, Published online: 18 Dec 2019

References

  • Baluch, F., & Itti, L. (2011). Mechanisms of top-down attention. Trends in Neurosciences, 34(4), 210–224. doi:10.1016/j.tins.2011.02.003
  • Baskaya, A., Wilson, C., & Özcan, Y. Z. (2004). Wayfinding in an unfamiliar environment: Different spatial settings of two polyclinics. Environment and Behavior, 36(6), 839–867. doi:10.1177/0013916504265445
  • Biederman, I. (1972). Perceiving real-world scenes. Science, 177(4043), 77–80. doi:10.1126/science.177.4043.77
  • Borji, A., & Itti, L. (2013). State-of-the-art in visual attention modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1), 185–207. doi:10.1109/TPAMI.2012.89
  • Borji, A., Sihite, D. N., & Itti, L. (2013). Quantitative analysis of human-model agreement in visual saliency modeling: A comparative study. IEEE Transactions on Image Processing, 22(1), 55–69. doi:10.1109/TIP.2012.2210727
  • Caduff, D., & Timpf, S. (2008). On the assessment of landmark salience for human navigation. Cognitive Processing, 9(4), 249–267. doi:10.1007/s10339-007-0199-2
  • Cao, C., Liu, X., Yang, Y., Yu, Y., Wang, J., Wang, Z., … Xu, W. (2015). Look and think twice: Capturing top-down visual attention with feedback convolutional neural networks. In R. Bajcsy, G. Hage, & Y. Ma (Eds.), Proceedings of the IEEE international conference on computer vision (pp. 2956–2964). Piscataway, NJ: IEEE.
  • Cavanagh, P. (2011). Visual cognition. Vision Research, 51(13), 1538–1551. doi:10.1016/j.visres.2011.01.015
  • Cave, A. R., Blackler, A. L., Popovic, V., & Kraal, B. J. (2014). Examining intuitive navigation in airports. In Y.-K. Lim & K. Niedderer (Eds.), Proceedings of DRS 2014: Design’s big debates (p. 20). Umeå, Sweden: Umeå Institute of Design, Umeå University.
  • Chen, L. C., Zhu, Y., Papandreou, G., Schroff, F., & Adam, H. (2018). Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV) (pp. 801–818).
  • Davies, C., & Peebles, D. (2010). Spaces or scenes: Map-based orientation in urban environments. Spatial Cognition & Computation, 10(2–3), 135–156. doi:10.1080/13875861003759289
  • Deakin, A. K. (1996). Landmarks as navigational aids on street maps. Cartography and Geographic Information Systems, 23(1), 21–36. doi:10.1559/152304096782512159
  • Delikostidis, I., van Elzakker, C. P., & Kraak, M.-J. (2016). Overcoming challenges in developing more usable pedestrian navigation systems. Cartography and Geographic Information Science, 43(3), 189–207. doi:10.1080/15230406.2015.1031180
  • Dogu, U., & Erkip, F. (2000). Spatial factors affecting wayfinding and orientation: A case study in a shopping mall. Environment and Behavior, 32(6), 731–755. doi:10.1177/00139160021972775
  • Duckham, M., Winter, S., & Robinson, M. (2010). Including landmarks in routing instructions. Journal of Location Based Services, 4(1), 28–52. doi:10.1080/17489721003785602
  • Egly, R., Driver, J., & Rafal, R. D. (1994). Shifting visual attention between objects and locations: Evidence from normal and parietal lesion subjects. Journal of Experimental Psychology: General, 123(2), 161–177. doi:10.1037/0096-3445.123.2.161
  • Elazary, L., & Itti, L. (2008). Interesting objects are visually salient. Journal of Vision, 8(3), 3. doi:10.1167/8.3.3
  • Fabrikant, S. I., Hespanha, S. R., & Hegarty, M. (2010). Cognitively inspired and perceptually salient graphic displays for efficient spatial inference making. Annals of the Association of American Geographers, 100(1), 13–29. doi:10.1080/00045600903362378
  • Gangaputra, R. (2017). Indoor landmark and indoor wayfinding: The indoor landmark identification issue (Unpublished master’s thesis). Technische Universität München, München.
  • Garlandini, S., & Fabrikant, S. I. (2009). Evaluating the effectiveness and efficiency of visual variables for geographic information visualization. In K. S. Hornsby, C. Claramunt, M. Denis, & G. Ligozat (Eds.), Spatial Information Theory (pp. 195–211). Berlin: Springer.
  • Goldsmith, M. (1998). What’s in a location? Comparing object-based and space-based models of feature integration in visual search. Journal of Experimental Psychology: General, 127(2), 189–219. doi:10.1037/0096-3445.127.2.189
  • Henderson, J. (2003). Human gaze control during real-world scene perception. Trends in Cognitive Sciences, 7(11), 498–504. doi:10.1016/j.tics.2003.09.006
  • Henderson, J. M., & Hayes, T. R. (2017). Meaning-based guidance of attention in scenes as revealed by meaning maps. Nature Human Behaviour, 1(10), 743–747. doi:10.1038/s41562-017-0208-0
  • Henderson, J. M., & Hollingworth, A. (1999). High-level scene perception. Annual Review of Psychology, 50(1), 243–271. doi:10.1146/annurev.psych.50.1.243
  • Henderson, J. M., Malcolm, G. L., & Schandl, C. (2009). Searching in the dark: Cognitive relevance drives attention in real-world scenes. Psychonomic Bulletin & Review, 16(5), 850–856. doi:10.3758/PBR.16.5.850
  • Itti, L., & Koch, C. (2001). Computational modelling of visual attention. Nature Reviews Neuroscience, 2(3), 194–203. doi:10.1038/35058500
  • Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11), 1254–1259. doi:10.1109/34.730558
  • Kiefer, P., Giannopoulos, I., & Raubal, M. (2014). Where Am I? Investigating map matching during self-localization with mobile eye tracking in an urban environment: Self-localization and mobile eye tracking. Transactions in GIS, 18(5), 660–686. doi:10.1111/tgis.12067
  • Kiefer, P., Giannopoulos, I., Raubal, M., & Duchowski, A. (2017). Eye tracking for spatial research: Cognition, computation, challenges. Spatial Cognition & Computation, 17(1–2), 1–19. doi:10.1080/13875868.2016.1254634
  • Klepeis, N. E., Nelson, W. C., Ott, W. R., Robinson, J. P., Tsang, A. M., Switzer, P., … Engelmann, W. H. (2001). The National Human Activity Pattern Survey (NHAPS): A resource for assessing exposure to environmental pollutants. Journal of Exposure Science & Environmental Epidemiology, 11(3), 231–252. doi:10.1038/sj.jea.7500165
  • Liao, H., & Dong, W. (2017). An exploratory study investigating gender effects on using 3D maps for spatial orientation in wayfinding. ISPRS International Journal of Geo-Information, 6(3), 1–19. doi:10.3390/ijgi6030060
  • Liao, H., Dong, W., Huang, H., Gartner, G., & Liu, H. (2018). Inferring user tasks in pedestrian navigation from eye movement data in real-world environments. International Journal of Geographical Information Science, 739–763. doi:10.1080/13658816.2018.1482554
  • Liao, H., Dong, W., Peng, C., & Liu, H. (2017). Exploring differences of visual attention in pedestrian navigation when using 2D maps and 3D geo-browsers. Cartography and Geographic Information Science, 44(6), 474–490. doi:10.1080/15230406.2016.1174886
  • Lin, C.-T., Huang, T.-Y., Lin, W.-J., Chang, S.-Y., Lin, Y.-H., Ko, L.-W., … Chang, E. C. (2012). Gender differences in wayfinding in virtual environments with global or local landmarks. Journal of Environmental Psychology, 32(2), 89–96. doi:10.1016/j.jenvp.2011.12.004
  • Lynch, K. (1960). The image of the city. Cambridge, MA: MIT Press.
  • Nothdurft, H. C. (2005). Salience of feature contrast. In L. Itti, G. Rees, & J. K. Tsotsos (Eds.), Neurobiology of attention (pp. 233–239). London: Elsevier Academic Press.
  • Nothegger, C., Winter, S., & Raubal, M. (2004). Selection of salient features for route directions. Spatial Cognition & Computation, 4(2), 113–136. doi:10.1207/s15427633scc0402_1
  • Ohm, C., Müller, M., & Ludwig, B. (2017). Evaluating indoor pedestrian navigation interfaces using mobile eye tracking. Spatial Cognition & Computation, 17(1–2), 89–120. doi:10.1080/13875868.2016.1219913
  • Ohm, C., Müller, M., Ludwig, B., & Bienk, S. (2014). Where is the landmark? Eye tracking studies in large-scale indoor environments. In P. Kiefer, I. Giannopoulos, M. Raubal, & A. Krüger (Eds.), Proceedings of the 2nd international workshop on eye tracking for spatial research co-located with the 8th international conference on geographic information science (GIScience 2014) (pp. 47–51). Vienna, Austria.
  • Oliva, A., Torralba, A., Castelhano, M. S., & Henderson, J. M. (2003). Top-down control of visual attention in object detection. In Proceedings 2003 international conference on image processing (Vol. 1, pp. 253–256). Piscataway, NJ: IEEE.
  • Parkhurst, D., Law, K., & Niebur, E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42(1), 107–123. doi:10.1016/S0042-6989(01)00250-4
  • Potter, M. C. (1975). Meaning in visual search. Science, 187(4180), 965–966. doi:10.1126/science.1145183
  • Quesnot, T., & Roche, S. (2014). Measure of landmark semantic salience through geosocial data streams. ISPRS International Journal of Geo-Information, 4(1), 1–31. doi:10.3390/ijgi4010001
  • Quesnot, T., & Roche, S. (2015). Quantifying the significance of semantic landmarks in familiar and unfamiliar environments. In S. Fabrikant, M. Raubal, M. Bertolotto, C. Davies, S. Freundschuh, & S. Bell (Eds.), International conference on spatial information theory (pp. 468–489). Berlin: Springer.
  • Raubal, M., & Winter, S. (2002). Enriching wayfinding instructions with local landmarks. In M. J. Egenhofer & D. M. Mark (Eds.), Geographic information science (Vol. 2478, pp. 243–259). Berlin: Springer.
  • Sorrows, M. E., & Hirtle, S. C. (1999). The nature of landmarks for real and electronic spaces. In C. Freksa & D. M. Mark (Eds.), Spatial information theory. Cognitive and computational foundations of geographic information science (pp. 37–50). Berlin: Springer.
  • Spiers, H. J. (2008). The dynamic nature of cognition during wayfinding. Journal of Environmental Psychology, 28(3), 232–249. doi:10.1016/j.jenvp.2008.02.006
  • Steck, S. D., & Mallot, H. A. (2000). The role of global and local landmarks in virtual environment navigation. Presence: Teleoperators and Virtual Environments, 9(1), 69–83. doi:10.1162/105474600566628
  • Tatler, B. W., Hayhoe, M. M., Land, M. F., & Ballard, D. H. (2011). Eye guidance in natural vision: Reinterpreting salience. Journal of Vision, 11(5), 5. doi:10.1167/11.5.5
  • Titus, P. A., & Everett, P. B. (1996). Consumer wayfinding tasks, strategies, and errors: An exploratory field study. Psychology and Marketing, 13(3), 265–290. doi:10.1002/(SICI)1520-6793(199605)13:3<265::AID-MAR2>3.0.CO;2-A
  • Tomko, M. (2004). Case study-assessing spatial distribution of web resources for navigation services. In Y.-J. Kwon, A. Bouju, & C. Claramunt (Eds.), Proceedings of the 4th international workshop on web and wireless geographical information systems (pp. 90–104), Koyang, Korea.
  • Torralba, A., Oliva, A., Castelhano, M. S., & Henderson, J. M. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychological Review, 113(4), 766–786. doi:10.1037/0033-295X.113.4.766
  • Tuan, Y. F. (1979). Space and place: Humanistic perspective. In S. Gale & G. Olsson (Eds.), Philosophy in geography (pp. 387–427). Dordrecht, Netherlands: Springer.
  • Walther, D., & Koch, C. (2006). Modeling attention to salient proto-objects. Neural Networks, 19(9), 1395–1407. doi:10.1016/j.neunet.2006.10.001
  • Wang, C., Chen, Y., Zheng, S., & Liao, H. (2019). Gender and age differences in using indoor maps for wayfinding in real environments. ISPRS International Journal of Geo-Information, 8(1), 11. doi:10.3390/ijgi8010011
  • Wenczel, F., Hepperle, L., & von Stülpnagel, R. (2017). Gaze behavior during incidental and intentional navigation in an outdoor environment. Spatial Cognition & Computation, 17(1–2), 121–142. doi:10.1080/13875868.2016.1226838
  • Zhao, W., Li, Q., & Li, B. (2011). Extracting hierarchical landmarks from urban POI data. Yaogan Xuebao/Journal of Remote Sensing, 15(5), 973–988.
  • Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., & Torralba, A. (2017). Scene parsing through ADE20K dataset. In R. Chellappa, A. Hoogs, & Z. Zhang (Eds.), Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 633–641).
  • Zhou, B., Zhao, H., Puig, X., Xiao, T., Fidler, S., Barriuso, A., & Torralba, A. (2019). Semantic understanding of scenes through the ADE20K dataset. International Journal of Computer Vision, 127(3), 302–321. doi:10.1007/s11263-018-1140-0
