ABSTRACT
Urban model retrieval has wide applications in the geosciences, and it is a challenging research topic due to blur and background clutter in query images and the large spatial inconsistencies between query and database images. In this study, a feature extraction and similarity metric-learning framework for urban model retrieval is proposed. In this method, a selective search voting algorithm is presented to automatically localize and segment a query object from an input image with the help of the top-ranked retrieved database images. Then, the local features of object images are extracted via sparse coding, and the global features are learned using a spatially constrained convolutional neural network. A new similarity metric is used to match the database images against a query object image. Finally, similar 3D models are retrieved. Both qualitative and quantitative experimental results indicate that the proposed framework can precisely localize and segment a query object from an input image, and that its retrieval results outperform those of related approaches.
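The final retrieval step described above, matching database image features against the segmented query object's feature, can be sketched as a nearest-neighbour ranking over feature vectors. The sketch below is illustrative only: it assumes features have already been extracted as fixed-length vectors, and it uses cosine similarity as a stand-in for the learned similarity metric actually proposed in the paper; the function name `retrieve_top_k` is hypothetical.

```python
import numpy as np

def retrieve_top_k(query_feat, db_feats, k=3):
    """Rank database feature vectors by cosine similarity to a query.

    query_feat: (d,) feature vector of the segmented query object
    db_feats:   (n, d) matrix of database image feature vectors
    Returns the indices of the k most similar database entries.
    """
    # L2-normalize so the dot product equals cosine similarity
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q                  # one similarity score per database image
    return np.argsort(-sims)[:k]   # highest-similarity indices first

# Toy example: four 2-D database features; the query matches entry 2 exactly
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.8], [0.5, 0.5]])
query = np.array([0.6, 0.8])
print(retrieve_top_k(query, db, k=2))  # → [2 3]
```

In a full pipeline the retrieved indices would map back to database images, and the 3D models associated with the top-ranked images would be returned as the retrieval result.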
Acknowledgements
The authors would like to thank Prof. Bo Huang and the reviewers for their thoughtful and detailed comments, which helped improve both the scientific contribution and the presentation of this paper. This work was supported by the National Natural Science Foundation of China [Grant Number 41371324] and the China Postdoctoral Science Foundation [Grant Number 2016M600953].
Disclosure statement
No potential conflict of interest was reported by the authors.