
Graph convolutional autoencoder model for the shape coding and cognition of buildings in maps

Xiongfeng Yan, Tinghua Ai, Min Yang & Xiaohua Tong
Pages 490-512 | Received 06 Sep 2019, Accepted 08 May 2020, Published online: 25 May 2020
 

ABSTRACT

The shape of a geospatial object is an important characteristic and a significant factor in spatial cognition. Existing shape representation methods for vector-structured objects in map space are mainly based on geometric and statistical measures. Considering that shape is complex and cognitively related, this study develops a learning strategy that combines multiple features extracted from an object's boundary to obtain a reasonable shape representation. Taking building data as an example, this study first models the shape of a building using a graph structure and extracts multiple features for each vertex based on local and regional structures. A graph convolutional autoencoder (GCAE) model comprising graph convolution and autoencoder architectures is proposed to analyze the modeled graph and realize shape coding through unsupervised learning. Experiments show that the GCAE model produces a cognitively compliant shape coding with the ability to distinguish different shapes, and it outperforms existing methods in terms of similarity measurement. Furthermore, the shape coding is experimentally shown to be effective in representing both local and global characteristics of building shape in application scenarios such as shape retrieval and matching.
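
To make the described architecture concrete, the following is a minimal, illustrative sketch of a graph convolutional autoencoder for building-shape coding, written in PyTorch. The layer sizes, mean-pooling step, feature dimensions, and reconstruction loss are assumptions introduced for illustration only and do not reproduce the authors' exact model.

# Illustrative GCAE sketch: encode per-vertex boundary features into a
# graph-level shape code, then decode back to per-vertex features.
# All hyperparameters are assumed, not taken from the paper.
import torch
import torch.nn as nn

def normalize_adjacency(adj):
    """Symmetrically normalize A + I, as in a standard GCN layer."""
    a_hat = adj + torch.eye(adj.size(0))
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

class GraphConv(nn.Module):
    """Single graph convolution: H' = ReLU(A_hat H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return torch.relu(self.linear(a_hat @ h))

class GCAE(nn.Module):
    """Unsupervised graph convolutional autoencoder for shape coding."""
    def __init__(self, feat_dim, hidden_dim=64, code_dim=32):
        super().__init__()
        self.enc1 = GraphConv(feat_dim, hidden_dim)
        self.enc2 = GraphConv(hidden_dim, code_dim)
        self.dec1 = GraphConv(code_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, feat_dim)

    def forward(self, a_hat, x):
        h = self.enc2(a_hat, self.enc1(a_hat, x))
        code = h.mean(dim=0)                    # graph-level shape coding
        h_dec = code.expand_as(h)               # broadcast code back to vertices
        recon = self.dec2(self.dec1(a_hat, h_dec))
        return code, recon

# Toy usage: a building boundary modeled as a ring graph with 8 vertices,
# each carrying 5 (hypothetical) local/regional boundary features.
adj = torch.zeros(8, 8)
for i in range(8):
    adj[i, (i + 1) % 8] = adj[(i + 1) % 8, i] = 1.0
a_hat = normalize_adjacency(adj)
x = torch.rand(8, 5)

model = GCAE(feat_dim=5)
code, recon = model(a_hat, x)
loss = nn.functional.mse_loss(recon, x)         # reconstruction objective
loss.backward()

In this sketch the graph-level shape code is obtained by mean-pooling the vertex embeddings; comparing codes of two buildings (e.g. by Euclidean or cosine distance) would then serve as a similarity measurement for retrieval or matching.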

Acknowledgments

Special thanks go to the editor and anonymous reviewers for their insightful comments and constructive suggestions that substantially improved the quality of the paper.

Data and codes availability statement

The data and codes that support the findings of this study are available in Figshare at http://doi.org/10.6084/m9.figshare.11742507.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the National Natural Science Foundation of China [41531180, 41871377]; National Key Research and Development Program of China [2017YFB0503500].

Notes on contributors

Xiongfeng Yan

Xiongfeng Yan received the B.S. and Ph.D. degrees in cartography from Wuhan University in 2015 and 2019, respectively. He is currently a postdoctoral researcher with the College of Surveying and Geo-Informatics, Tongji University, Shanghai, China. His research interests include cartography and machine learning, with a special focus on graph-structured spatial data.

Tinghua Ai

Tinghua Ai is a Professor at the School of Resource and Environmental Sciences, Wuhan University, Wuhan, China. He received the Ph.D. degree in cartography from the Wuhan Technical University of Surveying and Mapping in 2000. His research interests include multi-scale representation of spatial data, map generalization, spatial cognition, and spatial big data analysis.

Min Yang

Min Yang is an Associate Professor at the School of Resource and Environmental Sciences, Wuhan University, Wuhan, China. He received the B.S. and Ph.D. degrees in cartography from Wuhan University in 2007 and 2013, respectively. His research interests include change detection of spatial data, map generalization, and spatial big data analysis.

Xiaohua Tong

Xiaohua Tong is a Professor at the College of Surveying and Geo-Informatics, Tongji University, Shanghai, China. He received the Ph.D. degree in geoscience from Tongji University in 1999. His research interests include photogrammetry and remote sensing, trust in spatial data, and image processing for high-resolution satellite images.
