Research Article

Evaluating generative adversarial networks based image-level domain transfer for multi-source remote sensing image segmentation and object detection

Pages 7343-7367 | Received 19 Jan 2020, Accepted 08 Apr 2020, Published online: 07 Jul 2020
 

ABSTRACT

The appearance and quality of remote sensing images are affected by atmospheric conditions, sensor quality, and radiometric calibration. This heavily challenges the generalization ability of deep learning and other machine learning models: the performance of a model pretrained on a source remote sensing data set can drop significantly when it is applied to a different target data set. Generative adversarial networks (GANs) can transfer style or appearance between source and target data sets, which may boost the performance of a deep learning model by generating new target images that resemble source samples. In this study, we comprehensively evaluate GAN-based image-level transfer methods for convolutional neural network (CNN) based image processing models that are trained on one data set and tested on another. First, we designed a framework for the evaluation process. The framework consists of two main parts: GAN-based image-level domain adaptation, which transfers a target image to a new image whose probability distribution resembles that of the source image space, and CNN-based image processing tasks, which are used to test the effects of the GAN-based domain adaptation. Second, the domain adaptation is implemented with two mainstream GAN methods for style transfer, CycleGAN and AgGAN. The image processing covers two major tasks, segmentation and object detection, built on the widely applied U-Net and Faster R-CNN, respectively. Finally, three experiments, each associated with its own data set, are designed to cover different application cases: a change detection case, where temporal data are collected from the same scene; a two-city case, where images are collected from different regions; and a two-sensor case, where images are obtained from aerial and satellite platforms, respectively.
Results revealed that GAN-based image transfer can significantly boost the performance of the segmentation model in the change detection case, although it did not surpass conventional methods; in the other two cases, the GAN-based methods obtained worse results. In object detection, almost all methods failed to boost the performance of Faster R-CNN, and the GAN-based methods performed the worst.
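The evaluation protocol described in the abstract can be summarized as scoring a source-trained model on the target set twice: once on the raw target images, and once after passing them through a GAN-based image-level translator toward the source appearance. The sketch below illustrates that comparison only; the function and parameter names (`evaluate_transfer`, `translator`, `metric`) are illustrative and not taken from the authors' code.

```python
# Minimal sketch of the evaluation protocol, assuming stand-in names:
# a task model trained on the source domain is scored on the target
# data twice, with and without image-level domain transfer.

def evaluate_transfer(task_model, translator, target_images, target_labels, metric):
    """Return (raw_score, adapted_score) for a source-trained model."""
    # Baseline: score on untranslated target images (cross-domain gap).
    raw_preds = [task_model(x) for x in target_images]
    raw_score = metric(raw_preds, target_labels)
    # Adapted: translate target images toward the source style, then score.
    adapted_preds = [task_model(translator(x)) for x in target_images]
    adapted_score = metric(adapted_preds, target_labels)
    return raw_score, adapted_score
```

In the paper's setting, `task_model` would be a U-Net or Faster R-CNN, `translator` a trained CycleGAN or AgGAN generator, and `metric` a segmentation or detection score; the same two-score comparison then reveals whether the transfer helped or hurt.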

Disclosure statement

No potential conflict of interest was reported by the authors.
