
Fast and accurate land-cover classification on medium-resolution remote-sensing images using segmentation models

Pages 3277-3301 | Received 25 Nov 2019, Accepted 06 Dec 2020, Published online: 07 Feb 2021

ABSTRACT

Land-cover classification, especially global mapping, has become a new trend in recent years. Traditional convolutional neural network (CNN) methods for land-cover classification are usually patch based and suffer from high computation cost and low efficiency, which hinders their wide application when land covers must be mapped in a timely and accurate manner. Fortunately, methods based on the fully convolutional network (FCN) have achieved state-of-the-art performance on the semantic segmentation task, which offers a new possibility for efficient land-cover classification. Much work has been done on land-cover classification, but it has focused almost exclusively on very-high-resolution remote-sensing images, and little research has addressed medium-resolution images. In this paper, six representative state-of-the-art segmentation models, including the ‘U’-shaped network (U-Net), fully convolutional DenseNet (FC-DenseNet), full-resolution residual network (FRRN), bilateral segmentation network (BiSeNet), DeepLab version 3 plus (DeepLabV3+), and pyramid scene parsing network (PSPNet), are selected and their performances compared on the land-cover classification of Landsat-5 (Land Remote-Sensing Satellite System-5) remote-sensing images. Based on the analysis of their performances, an improved model named atrous spatial pyramid pooling U-Net (ASPP-U-Net) is proposed for classification. Support vector machine, patch-based CNN, and U-Net methods are also compared against the proposed model. Furthermore, to overcome the insufficiency of reference data when training deep models, an integration strategy based on two existing global land-cover products, finer resolution observation and monitoring of global land cover of 2010 (FROM-GLC2010) and global land-cover mapping at 30 m resolution (GlobeLand30), is designed to produce reference data. Experimental results show that the encoder–decoder architecture, especially U-Net, is the most competitive network and is highly recommended for mapping land covers in medium-resolution images. The proposed ASPP-U-Net outperforms the compared methods in both classification accuracy and inference-time efficiency. In addition, when labelled datasets are insufficient, it is advisable to use existing global land-cover products to produce reference data for segmentation models.
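The abstract's central architectural idea, augmenting a U-Net with an atrous spatial pyramid pooling (ASPP) module, can be illustrated with a short sketch. The following is a minimal PyTorch version of an ASPP block; it is not the authors' implementation, and the channel sizes, dilation rates, and placement at the U-Net bottleneck are illustrative assumptions only.

    # Minimal sketch of an ASPP block (assumed hyperparameters, not the
    # paper's released code). Parallel atrous convolutions capture context
    # at several receptive-field sizes without losing spatial resolution.
    import torch
    import torch.nn as nn

    class ASPP(nn.Module):
        def __init__(self, in_ch=512, out_ch=512, rates=(1, 6, 12, 18)):
            super().__init__()
            # One 3x3 atrous convolution per dilation rate; padding equals
            # the rate, so every branch keeps the input's spatial size.
            self.branches = nn.ModuleList(
                nn.Sequential(
                    nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r,
                              bias=False),
                    nn.BatchNorm2d(out_ch),
                    nn.ReLU(inplace=True),
                )
                for r in rates
            )
            # 1x1 convolution fuses the concatenated multi-scale features.
            self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

        def forward(self, x):
            return self.project(torch.cat([b(x) for b in self.branches],
                                          dim=1))

    # Usage on a hypothetical U-Net bottleneck feature map:
    feats = torch.randn(1, 512, 32, 32)
    print(ASPP()(feats).shape)  # torch.Size([1, 512, 32, 32])

Because the dilated branches enlarge the receptive field without downsampling, such a module can be dropped into a U-Net bottleneck while leaving the skip connections and decoder unchanged, which is consistent with the encoder–decoder design the paper recommends.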

Acknowledgements

The authors would like to thank the group of Professor Peng Gong from the Ministry of Education Key Laboratory for Earth System Modeling, Centre for Earth System Science, Tsinghua University, Beijing, China, and the group of Professor Jun Chen from the National Geomatics Centre of China, Beijing, China, for providing the global land-cover products FROM-GLC2010 and GlobeLand30. The authors are also grateful to the anonymous reviewers for their careful assessments and valuable comments and suggestions, which helped improve the quality of this paper.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This work was funded by the National Natural Science Foundation of China under grants 41701397 and 41971396.
