
Visual descriptors for content-based retrieval of remote-sensing images

Pages 1343-1376 | Received 08 Feb 2017, Accepted 17 Oct 2017, Published online: 24 Nov 2017
 

ABSTRACT

In this article, we present an extensive evaluation of visual descriptors for the content-based retrieval of remote-sensing (RS) images. The evaluation covers global hand-crafted, local hand-crafted, and convolutional neural network (CNN) features, coupled with four different content-based image retrieval schemes. We conducted all experiments on two publicly available datasets: the 21-class University of California (UC) Merced Land Use/Land Cover (LandUse) dataset and the 19-class High-resolution Satellite Scene dataset (SceneSat). The content of RS images can be quite heterogeneous, ranging from fine-grained textures to coarse-grained ones to images containing objects, so it is not obvious which descriptor should be employed to describe images with such variability in this domain. Results demonstrate that CNN-based features perform better than both global and local hand-crafted features, whatever retrieval scheme is adopted. Features extracted from a residual CNN suitably fine-tuned on the RS domain show much better performance than those from a residual CNN pre-trained on multimedia scene and object images. Features extracted from NetVLAD, a CNN architecture that aggregates local convolutional features into a Vector of Locally Aggregated Descriptors, work better than other CNN solutions on images that contain fine-grained textures and objects.
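The retrieval schemes compared in the article all reduce to the same core operation: describe each image by a feature vector (hand-crafted or CNN-based) and rank the database by similarity to the query. A minimal sketch of that ranking step, using toy 4-D vectors and hypothetical image names in place of real CNN features (which typically have hundreds or thousands of dimensions), could look like this:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_feat, database, top_k=3):
    """Return the names of the top_k database images most similar to the query."""
    ranked = sorted(database.items(),
                    key=lambda kv: cosine_similarity(query_feat, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Toy 4-D "features" standing in for CNN activations of RS images.
db = {
    "beach_01":  [0.9, 0.1, 0.0, 0.2],
    "forest_01": [0.1, 0.8, 0.3, 0.0],
    "beach_02":  [0.8, 0.2, 0.1, 0.1],
}
print(retrieve([0.85, 0.15, 0.05, 0.15], db, top_k=2))
```

The evaluated schemes differ mainly in how the feature vector is produced (e.g. aggregating local descriptors, or taking activations from a fine-tuned network) and in the similarity measure used for ranking; cosine similarity is shown here only as one common choice.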

Acknowledgements

The author is grateful to Professor Raimondo Schettini for the valuable comments and stimulating discussions, and he would like to thank the reviewers for their valuable comments and effort to improve the manuscript.

Disclosure statement

No potential conflict of interest was reported by the author.

Additional information

Funding

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for doing part of the experiments included in this research.

