Research Article

RS-DeepSuperLearner: fusion of CNN ensemble for remote sensing scene classification

Pages 121-142 | Received 06 Sep 2021, Accepted 02 Jan 2023, Published online: 26 Jan 2023
 

ABSTRACT

Scene classification is an important problem in remote sensing (RS) and has attracted considerable research in the past decade. Most recently proposed methods are based on deep convolutional neural network (CNN) models, and many pretrained CNN models have been investigated. Ensemble techniques are well studied in the machine learning community; however, few works have applied them to RS scene classification. In this work, we propose an ensemble approach, called RS-DeepSuperLearner, that fuses the outputs of five advanced CNN models: VGG16, Inception-V3, DenseNet121, InceptionResNet-V2, and EfficientNet-B3. First, we improve the architecture of the five CNN models by attaching an auxiliary branch at a specific layer location in each, so that every model has two output layers, each producing a prediction, and the model's final prediction is the average of the two. The RS-DeepSuperLearner method starts by fine-tuning the five CNN models on the training data. It then employs a deep neural network (DNN) SuperLearner to learn the best way to fuse the outputs of the five CNN models, training it on their predicted probability outputs and per-class cross-validation accuracies. The proposed methodology was assessed on six publicly available RS datasets: UC Merced, KSA, RSSCN7, Optimal31, AID, and NWPU-RSC45. The experimental results demonstrate its superiority over state-of-the-art methods in the literature.
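The two fusion steps described in the abstract (averaging each model's main and auxiliary outputs, then assembling the SuperLearner's input from the base models' probabilities and per-class cross-validation accuracies) can be sketched as below. This is a minimal illustration, not the authors' implementation; the function names, shapes, and the simple concatenation layout of the SuperLearner input are assumptions.

```python
import numpy as np

def two_head_prediction(p_main: np.ndarray, p_aux: np.ndarray) -> np.ndarray:
    """Average the softmax outputs of a CNN's main and auxiliary branches.

    p_main, p_aux: (n_classes,) probability vectors from the two output layers.
    Returns the model's final (n_classes,) prediction.
    """
    return (p_main + p_aux) / 2.0

def superlearner_features(probs: list, cv_acc: np.ndarray) -> np.ndarray:
    """Build one input vector for the DNN SuperLearner (illustrative layout).

    probs:  list of (n_classes,) predicted probability vectors,
            one per base CNN (five in the paper).
    cv_acc: (n_models, n_classes) per-class cross-validation accuracies
            of the base CNNs.
    The two sources of information are flattened and concatenated.
    """
    return np.concatenate([np.concatenate(probs), cv_acc.ravel()])

# Toy example: 5 base models, 3 scene classes.
p = two_head_prediction(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
feats = superlearner_features([np.full(3, 1 / 3)] * 5, np.full((5, 3), 0.9))
```

A DNN SuperLearner would then be trained to map such feature vectors to the true class labels, letting it weight each base model's vote according to how reliable that model is on each class.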

Acknowledgement

The authors would like to acknowledge the funding of this research by Researchers Supporting Project number (RSP2023R69), King Saud University, Riyadh, Saudi Arabia.

Disclosure statement

No potential conflict of interest was reported by the authors.

Data availability statement

The six datasets used in this study (UC Merced, KSA, RSSCN7, Optimal-31, AID, and NWPU-RSC45) are publicly available at: http://alhichri.36bit.com/research.html. The original dataset copies are located at the following URLs, respectively:
http://weegee.vision.ucmerced.edu/datasets/landuse.html
http://alhichri.36bit.com/ksa_dataset.html
https://github.com/palewithout/RSSCN7
https://1drv.ms/u/s!Ags4cxbCq3lUguxW3bq0D0wbm1zCDQ
https://captain-whu.github.io/AID/
https://doi.org/10.6084/m9.figshare.19166525.v1
