
A geospatial image based eye movement dataset for cartography and GIS

Pages 96–111 | Received 03 May 2022, Accepted 25 Nov 2022, Published online: 04 Jan 2023

References

  • Al Maadeed, S., Ayouby, W., Hassaine, A., & Aljaam, J. M. (2012). QUWI: An Arabic and English handwriting dataset for offline writer identification. In 13th International Conference on Frontiers in Handwriting Recognition (ICFHR).
  • Bargary, G., Bosten, J. M., Goodbourn, P. T., Lawrance-Owen, A. J., Hogg, R. E., & Mollon, J. D. (2017). Individual differences in human eye movements: An oculomotor signature? Vision Research, 141, 157–169. https://doi.org/10.1016/j.visres.2017.03.001
  • Bebis, G., Egbert, D., & Shah, M. (2003). Review of computer vision education. IEEE Transactions on Education, 46(1), 2–21. https://doi.org/10.1109/te.2002.808280
  • Bednarik, R., Busjahn, T., Gibaldi, A., & Ahadi, A. (2020). EMIP: The eye movements in programming dataset. Science of Computer Programming, 198, Article 102520. https://doi.org/10.1016/j.scico.2020.102520
  • Borji, A., & Itti, L. (2015). CAT2000: A large scale fixation dataset for boosting saliency research. arXiv, abs/1505.03581. https://doi.org/10.48550/arXiv.1505.03581
  • Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32. https://doi.org/10.1023/a:1010933404324
  • Burian, J., Popelka, S., & Beitlova, M. (2018). Evaluation of the cartographical quality of urban plans by eye-tracking. ISPRS International Journal of Geo-Information, 7(5), Article 192. https://doi.org/10.3390/ijgi7050192
  • Bylinskii, Z., Isola, P., Bainbridge, C., Torralba, A., & Oliva, A. (2015). Intrinsic and extrinsic effects on image memorability. Vision Research, 116, 165–178. https://doi.org/10.1016/j.visres.2015.03.005
  • Bylinskii, Z., Judd, T., Oliva, A., Torralba, A., & Durand, F. (2019). What do different evaluation metrics tell us about saliency models? IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(3), 740–757. https://doi.org/10.1109/tpami.2018.2815601
  • Chen, T. Q., & Guestrin, C. (2016, August 13–17). XGBoost: A scalable tree boosting system. In 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD).
  • Chen, J., Li, R., Dong, W., Ge, Y., Liao, H., & Cheng, Y. (2015). GIS-based borderlands modeling and understanding: A perspective. ISPRS International Journal of Geo-Information, 4(2), 661–676. https://doi.org/10.3390/ijgi4020661
  • Christ, M., Kempa-Liehr, A. W., & Feindt, M. (2016). Distributed and parallel time series feature extraction for industrial big data applications. arXiv e-prints, arXiv:1610.07717. https://ui.adsabs.harvard.edu/abs/2016arXiv161007717C
  • Coltekin, A., Heil, B., Garlandini, S., & Fabrikant, S. I. (2009). Evaluating the effectiveness of interactive map interface designs: A case study integrating usability metrics with eye-movement analysis. Cartography and Geographic Information Science, 36(1), 5–17. https://doi.org/10.1559/152304009787340197
  • Cong, R. M., Lei, J. J., Fu, H. Z., Cheng, M. M., Lin, W. S., & Huang, Q. M. (2019). Review of visual saliency detection with comprehensive information. IEEE Transactions on Circuits and Systems for Video Technology, 29(10), 2941–2959. https://doi.org/10.1109/tcsvt.2018.2870832
  • Cybulski, P. (2020). Spatial distance and cartographic background complexity in graduated point symbol map-reading task. Cartography and Geographic Information Science, 47(3), 244–260. https://doi.org/10.1080/15230406.2019.1702102
  • Davies, C., Tompkinson, W., Donnelly, N., Gordon, L., & Cave, K. (2006). Visual saliency as an aid to updating digital maps. Computers in Human Behavior, 22(4), 672–684. https://doi.org/10.1016/j.chb.2005.12.014
  • Dobson, M. W. (1977). Eye movement parameters and map reading. The American Cartographer, 4(1), 39–58. https://doi.org/10.1559/152304077784080022
  • Dong, W. H., & Liao, H. (2016). Eye tracking to explore the impacts of photorealistic 3D representations in pedestrian navigation performance. ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLI-B2, 641–645. https://doi.org/10.5194/isprsarchives-XLI-B2-641-2016
  • Dong, W. H., Liao, H., Zhan, Z., Liu, B., Wang, S. K., & Yang, T. Y. (2019). New research progress of eye tracking-based map cognition in cartography since 2008. Acta Geographica Sinica, 74(3), 599–614. https://doi.org/10.11821/dlxb201903015
  • Dong, W. H., Wu, Y. L., Qin, T., Bian, X. R., Zhao, Y., He, Y. R., Xu, Y., & Yu, C. (2021). What is the difference between augmented reality and 2D navigation electronic maps in pedestrian wayfinding? Cartography and Geographic Information Science, 48(3), 225–240. https://doi.org/10.1080/15230406.2021.1871646
  • Dong, W. H., Yang, T. Y., Liao, H., & Meng, L. Q. (2020). How does map use differ in virtual reality and desktop-based environments? International Journal of Digital Earth, 13(12), 1484–1503. https://doi.org/10.1080/17538947.2020.1731617
  • Dong, W. H., Zheng, L., Liu, B., & Meng, L. Q. (2018). Using eye tracking to explore differences in map-based spatial ability between geographers and non-geographers. ISPRS International Journal of Geo-Information, 7(9), 337. https://doi.org/10.3390/ijgi7090337
  • Ehinger, K. A., Hidalgo-Sotelo, B., Torralba, A., & Oliva, A. (2009). Modelling search for people in 900 scenes: A combined source model of eye guidance. Visual Cognition, 17(6–7), 945–978. https://doi.org/10.1080/13506280902834720
  • Fang, S., Li, J., Tian, Y. H., Huang, T. J., & Chen, X. W. (2017). Learning discriminative subspaces on random contrasts for image saliency analysis. IEEE Transactions on Neural Networks and Learning Systems, 28(5), 1095–1108. https://doi.org/10.1109/tnnls.2016.2522440
  • Fan, S. J., Shen, Z. Q., Jiang, M., Koenig, B. L., Xu, J., & Kankanhalli, M. S. (2018). Emotional attention: A study of image sentiment and visual attention. In 31st IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 7521–7531). https://doi.org/10.1109/cvpr.2018.00785
  • Freund, Y., & Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1), 119–139. https://doi.org/10.1006/jcss.1997.1504
  • Garlandini, S., & Fabrikant, S. I. (2009, September 21–25). Evaluating the effectiveness and efficiency of visual variables for geographic information visualization. In 9th International Conference on Spatial Information Theory (COSIT) (Lecture Notes in Computer Science).
  • Griffith, H., Lohr, D., Abdulin, E., & Komogortsev, O. (2021). GazeBase, a large-scale, multi-stimulus, longitudinal eye movement dataset. Scientific Data, 8(1), Article 184. https://doi.org/10.1038/s41597-021-00959-y
  • Han, J. W., Zhou, P. C., Zhang, D. W., Cheng, G., Guo, L., Liu, Z. B., Bu, S., & Wu, J. (2014). Efficient, simultaneous detection of multi-class geospatial targets based on visual saliency modeling and discriminative learning of sparse coding. ISPRS Journal of Photogrammetry and Remote Sensing, 89, 37–48. https://doi.org/10.1016/j.isprsjprs.2013.12.011
  • Holland, C., & Komogortsev, O. V. (2011). Biometric identification via eye movement scanpaths in reading. In 2011 International Joint Conference on Biometrics (IJCB) (pp. 1–8). https://doi.org/10.1109/IJCB.2011.6117536
  • Hollenstein, N., Rotsztejn, J., Troendle, M., Pedroni, A., Zhang, C., & Langer, N. (2018). ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading. Scientific Data, 5(1), 180291. https://doi.org/10.1038/sdata.2018.291
  • Hosmer, D. W., Lemeshow, S., & Sturdivant, R. X. (2013). Applied logistic regression (3rd ed.). https://doi.org/10.1002/9781118548387
  • Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11), 1254–1259. https://doi.org/10.1109/34.730558
  • Jiang, M., Huang, S. S., Duan, J. Y., & Zhao, Q. (2015, June 7–12). SALICON: Saliency in context. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • Judd, T., Durand, F., & Torralba, A. (2012). A benchmark of computational models of saliency to predict human fixations (MIT-CSAIL-TR-2012-001). http://hdl.handle.net/1721.1/68590
  • Kasneci, E., Kasneci, G., Appel, T., Haug, J., Wortha, F., Tibus, M., Trautwein, U., & Gerjets, P. (2021). TuEyeQ, a rich IQ test performance data set with eye movement, educational and socio-demographic information. Scientific Data, 8(1), Article 154. https://doi.org/10.1038/s41597-021-00938-3
  • Kiefer, P., Giannopoulos, I., Duchowski, A., & Raubal, M. (2016, September 27–30). Measuring cognitive load for map tasks through pupil diameter. In 9th International Conference on Geographic Information Science (GIScience) (Lecture Notes in Computer Science).
  • Kiefer, P., Giannopoulos, I., Martin, R., & Duchowski, A. (2017). Eye tracking for spatial research: Cognition, computation, challenges. Spatial Cognition & Computation, 17(1–2), 1–19. https://doi.org/10.1080/13875868.2016.1254634
  • Kim, B., & Park, J. (2020). The visual effect of signboards on the vitality of the streetscapes using eye-tracking. Sustainability, 13(1), 30. https://doi.org/10.3390/su13010030
  • Koehler, K., Guo, F., Zhang, S., & Eckstein, M. P. (2014). What do saliency models predict? Journal of Vision, 14(3), Article 14. https://doi.org/10.1167/14.3.14
  • Krafka, K., Khosla, A., Kellnhofer, P., Kannan, H., Bhandarkar, S., Matusik, W., & Torralba, A. (2016, June 27–30). Eye tracking for everyone. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • Krassanakis, V., & Cybulski, P. (2019). A review on eye movement analysis in map reading process: The status of the last decade. Geodesy and Cartography, 68(1), 191–209. https://doi.org/10.24425/gac.2019.126088
  • Krassanakis, V., & Cybulski, P. (2021). Eye tracking research in cartography: Looking into the future. ISPRS International Journal of Geo-Information, 10(6), Article 411. https://doi.org/10.3390/ijgi10060411
  • Krassanakis, V., Filippakopoulou, V., & Nakos, B. (2011). The influence of attributes of shape in map reading process. In 25th International Cartographic Conference. http://users.ntua.gr/bnakos/Data/Section%205-6/Pub_5-6-57.pdf
  • Krassanakis, V., Perreira Da Silva, M., & Ricordel, V. (2018). Monitoring human visual behavior during the observation of Unmanned Aerial Vehicles (UAVs) videos. Drones, 2(4), 36. https://doi.org/10.3390/drones2040036
  • Krejtz, K., Duchowski, A., & Coltekin, A. (2014). High-level gaze metrics from map viewing: Charting ambient/focal visual attention. CEUR Workshop Proceedings, 1241. https://doi.org/10.5167/uzh-104893
  • Kroner, A., Senden, M., Driessens, K., & Goebel, R. (2020). Contextual encoder-decoder network for visual saliency prediction. Neural Networks, 129, 261–270. https://doi.org/10.1016/j.neunet.2020.05.004
  • Krzywinski, M., & Altman, N. (2017). Classification and regression trees. Nature Methods, 14(8), 757–758. https://doi.org/10.1038/nmeth.4370
  • Liao, H., Dong, W. H., Huang, H. S., Gartner, G., & Liu, H. P. (2019). Inferring user tasks in pedestrian navigation from eye movement data in real-world environments. International Journal of Geographical Information Science, 33(4), 739–763. https://doi.org/10.1080/13658816.2018.1482554
  • Liao, H., Dong, W. H., & Zhan, Z. C. (2022). Identifying map users with eye movement data from map-based spatial tasks: User privacy concerns. Cartography and Geographic Information Science, 49(1), 50–69. https://doi.org/10.1080/15230406.2021.1980435
  • Liao, H., Wang, X. Y., Dong, W. H., & Meng, L. Q. (2019). Measuring the influence of map label density on perceived complexity: A user study using eye tracking. Cartography and Geographic Information Science, 46(3), 210–227. https://doi.org/10.1080/15230406.2018.1434016
  • Liao, H., Zhao, W. D., Zhang, C. B., & Dong, W. H. (2022). Exploring eye movement biometrics in real-world activities: A case study of wayfinding. Sensors, 22(8), Article 2949. https://doi.org/10.3390/s22082949
  • Li, W., & Chen, Y. F. (2012). Cartography eye movements study and the experimental parameters analysis. Bulletin of Surveying and Mapping, 2012(10), 16–20.
  • Liu, B., Dong, W., Wang, Y., & Zhang, N. (2015). The influence of FOV and viewing angle on the visual information processing of 3D maps. Journal of Geo-Information Science, 17(12), 1490–1496.
  • Maggi, S., Fabrikant, S. I., Imbert, J. -P., & Hurter, C. (2016). How do display design and user characteristics matter in animations? An empirical study with air traffic control displays. Cartographica: The International Journal for Geographic Information and Geovisualization, 51(1), 25–37. https://doi.org/10.3138/cart.51.1.3176
  • Ma, K. T., Sim, T., & Kankanhalli, M. (2013, October 22). VIP: A unifying framework for computational eye-gaze research. In 4th International Workshop on Human Behavior Understanding (HBU) (Lecture Notes in Computer Science).
  • Ooms, K., De Maeyer, P., & Fack, V. (2014). Study of the attentive behavior of novice and expert map users using eye tracking. Cartography and Geographic Information Science, 41(1), 37–54. https://doi.org/10.1080/15230406.2013.860255
  • Ooms, K., De Maeyer, P., Fack, V., Van Assche, E., & Witlox, F. (2012). Interpreting maps through the eyes of expert and novice users. International Journal of Geographical Information Science, 26(10), 1773–1788. https://doi.org/10.1080/13658816.2011.642801
  • Opach, T., Golebiowska, I., & Fabrikant, S. I. (2014). How do people view multi-component animated maps? The Cartographic Journal, 51(4), 330–342. https://doi.org/10.1179/1743277413y.0000000049
  • Perrin, A. -F., Krassanakis, V., Zhang, L., Ricordel, V., Perreira Da Silva, M., & Le Meur, O. (2020). EyeTrackUAV2: A large-scale binocular eye-tracking dataset for UAV videos. Drones, 4(1), 2. https://doi.org/10.3390/drones4010002
  • Popelka, S., & Brychtova, A. (2013). Eye-tracking study on different perception of 2D and 3D terrain visualisation. The Cartographic Journal, 50(3), 240–246. https://doi.org/10.1179/1743277413y.0000000058
  • Popelka, S., Burian, J., & Beitlova, M. (2022). Swipe versus multiple view: A comprehensive analysis using eye-tracking to evaluate user interaction with web maps. Cartography and Geographic Information Science, 49(3), 252–270. https://doi.org/10.1080/15230406.2021.2015721
  • Ren, X. X., & Kang, J. (2015). Interactions between landscape elements and tranquility evaluation based on eye tracking experiments (L). The Journal of the Acoustical Society of America, 138(5), 3019–3022. https://doi.org/10.1121/1.4934955
  • Rueopas, W., Leelhapantu, S., & Chalidabhongse, T. H. (2016, July 13–15). A corner-based saliency model. In 13th International Joint Conference on Computer Science and Software Engineering (JCSSE).
  • Santos, J. R. A. (1999). Cronbach’s alpha: A tool for assessing the reliability of scales. Journal of Extension, 37(2), 2TOT3.
  • Tateosian, L. G., Glatz, M., Shukunobe, M., & Chopra, P. (2015, October 25). GazeGIS: A gaze-based reading and dynamic geographic information system. In 1st Workshop on Eye Tracking and Visualization (ETVIS) (Mathematics and Visualization).
  • Tavakoli, H. R., Rahtu, E., & Heikkila, J. (2011, May 23–27). Fast and efficient saliency detection using sparse sampling and kernel density estimation. In 17th Scandinavian Conference on Image Analysis (SCIA) (Lecture Notes in Computer Science).
  • Tavenard, R., Faouzi, J., Vandewiele, G., Divo, F., Androz, G., Holtz, C., & Woods, E. (2020). Tslearn, a machine learning toolkit for time series data. Journal of Machine Learning Research, 21(118), 1–6.
  • Tobii. (2012). The Tobii I-VT fixation filter: Algorithm description. https://www.tobiipro.com/siteassets/tobii-pro/learn-and-support/analyze/how-do-we-classify-eye-movements/tobii-pro-i-vt-fixation-filter.pdf
  • VoPham, T., Hart, J. E., Laden, F., & Chiang, Y. Y. (2018). Emerging trends in geospatial artificial intelligence (geoAI): Potential applications for environmental epidemiology. Environmental Health, 17(1), Article 40. https://doi.org/10.1186/s12940-018-0386-x
  • Wang, J. Y. (1991). The designing characteristics of the atlas for officers. Journal of Geomatics Science and Technology, (3), 33–42.
  • Wang, C., Chen, Y., Zheng, S., & Liao, H. (2019). Gender and age differences in using indoor maps for wayfinding in real environments. ISPRS International Journal of Geo-Information, 8(1), 11. https://doi.org/10.3390/ijgi8010011
  • Wang, X. J., Zeng, Y., Shu, J., Zhang, C., & Yan, B. (2018, July 28–30). Eye fixation related cognitive activities for detecting targets in remote sensing images. In 3rd International Conference on Computational Intelligence and Applications (ICCIA).
  • Williams, L. G. (1971). The role of the user in the map communication process: Obtaining information from displays with discrete elements. Cartographica: The International Journal for Geographic Information and Geovisualization, 8(2), 29–34. https://doi.org/10.3138/a724-2k5v-2887-p200
  • Xia, G. S., Bai, X., Ding, J., Zhu, Z., Belongie, S., Luo, J., & Zhang, L. (2018, June 18–23). DOTA: A large-scale dataset for object detection in aerial images. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT.
  • Yang, Y., & Newsam, S. (2010). Bag-of-visual-words and spatial extensions for land-use classification. In SIGSPATIAL International Conference on Advances in Geographic Information Systems (pp. 270–279). https://doi.org/10.1145/1869790.1869829
  • Yuan, T. L., Zhu, Z., Xu, K., Li, C. J., Mu, T. J., & Hu, S. M. (2019). A large Chinese text dataset in the wild. Journal of Computer Science and Technology, 34(3), 509–521. https://doi.org/10.1007/s11390-019-1923-y
  • Zamir, A. R., & Shah, M. (2014). Image geo-localization based on multiple nearest neighbor feature matching using generalized graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36, 1546–1558. https://doi.org/10.1109/TPAMI.2014.2299799
  • Zhang, H. (2004). The optimality of Naive Bayes. In International FLAIRS Conference.
  • Zhang, X., Liu, L. Y., Wu, C. S., Chen, X. D., Gao, Y., Xie, S., & Zhang, B. (2020). Development of a global 30m impervious surface map using multisource and multitemporal remote sensing datasets with the Google Earth Engine platform. Earth System Science Data, 12(3), 1625–1648. https://doi.org/10.5194/essd-12-1625-2020
  • Zhang, X. C., Sugano, Y., Fritz, M., & Bulling, A. (2019). MPIIGaze: Real-world dataset and deep appearance-based gaze estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(1), 162–175. https://doi.org/10.1109/tpami.2017.2778103
  • Zhang, L., Tong, M. H., Marks, T. K., Shan, H., & Cottrell, G. W. (2008). SUN: A Bayesian framework for saliency using natural statistics. Journal of Vision, 8(7), 32, 1–20. https://doi.org/10.1167/8.7.32
