Research Article

DOCSNet: a dual-output and cross-scale strategy for pan-sharpening

Pages 1609-1629 | Received 26 Nov 2021, Accepted 10 Feb 2022, Published online: 11 Mar 2022
 

ABSTRACT

Pan-sharpening aims to obtain a high-resolution multi-spectral image from two inputs: a high spatial resolution panchromatic image and a low spatial resolution multi-spectral image. In recent years, pan-sharpening methods based on supervised learning have achieved superior performance over traditional methods. However, these supervised methods rest on the assumption that a model trained at a coarse scale generalizes well to a finer one, which is not always the case. To address this problem, we propose DOCSNet, a novel dual-output and cross-scale learning strategy for pan-sharpening. DOCSNet consists of two sub-networks, ReducedNet1 and FullNet2, each built from three simple convolutional layers and progressively cascaded. ReducedNet1 is first trained on the reduced-scale training set and its parameters are frozen; the whole network (the fixed ReducedNet1 cascaded with FullNet2) is then trained with a cross-scale strategy that uses reduced- and full-resolution training samples simultaneously. Each sub-network has its own output terminal, producing reduced-scale and target-scale results, respectively. To the best of our knowledge, this is the first attempt to introduce a dual-output architecture into a pan-sharpening framework. Extensive experiments on GaoFen-2 and WorldView-3 satellite images demonstrate that DOCSNet outperforms other state-of-the-art pan-sharpening methods in terms of both qualitative visual effects and quantitative metrics.
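To make the two-stage, dual-output idea concrete, the sketch below gives a minimal PyTorch rendering of the pipeline described in the abstract. It is an illustration under stated assumptions, not the authors' published configuration: the layer widths and kernel sizes of the three-layer sub-networks, the L1 losses, the 4-band multi-spectral input, the downsampling factor in the full-resolution consistency term, and the helper names (DOCSNetSketch, train_sketch, reduced_loader, full_loader) are all hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ThreeLayerSubNet(nn.Module):
    """Three plain convolutional layers, the building block of both sub-networks."""

    def __init__(self, in_ch, out_ch=4, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, width, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(width, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 5, padding=2),
        )

    def forward(self, x):
        return self.body(x)


class DOCSNetSketch(nn.Module):
    """ReducedNet1 cascaded with FullNet2; each sub-network keeps its own output terminal."""

    def __init__(self, ms_bands=4):
        super().__init__()
        self.reduced_net = ThreeLayerSubNet(ms_bands + 1, ms_bands)   # ReducedNet1
        self.full_net = ThreeLayerSubNet(2 * ms_bands + 1, ms_bands)  # FullNet2

    def forward(self, ms_up, pan):
        # Output terminal 1: prediction from ReducedNet1.
        out1 = self.reduced_net(torch.cat([ms_up, pan], dim=1))
        # Output terminal 2: target-scale prediction from FullNet2, fed with the
        # cascaded intermediate result plus the original inputs.
        out2 = self.full_net(torch.cat([out1, ms_up, pan], dim=1))
        return out1, out2


def train_sketch(model, reduced_loader, full_loader, epochs=10, lr=1e-4):
    # Stage 1: train ReducedNet1 alone on reduced-scale samples, then freeze it.
    opt1 = torch.optim.Adam(model.reduced_net.parameters(), lr=lr)
    for _ in range(epochs):
        for ms_up, pan, ref in reduced_loader:
            pred = model.reduced_net(torch.cat([ms_up, pan], dim=1))
            loss = F.l1_loss(pred, ref)
            opt1.zero_grad()
            loss.backward()
            opt1.step()
    for p in model.reduced_net.parameters():
        p.requires_grad = False

    # Stage 2: cross-scale training of FullNet2 with the frozen ReducedNet1,
    # drawing reduced- and full-resolution batches simultaneously.
    opt2 = torch.optim.Adam(model.full_net.parameters(), lr=lr)
    for _ in range(epochs):
        for (ms_r, pan_r, ref_r), (ms_f, pan_f) in zip(reduced_loader, full_loader):
            _, out_r = model(ms_r, pan_r)
            loss = F.l1_loss(out_r, ref_r)  # supervised term at reduced scale
            _, out_f = model(ms_f, pan_f)
            # Full-resolution consistency term (an assumed choice, not the paper's
            # exact loss): the spatially degraded prediction should stay close to
            # the degraded upsampled multi-spectral input.
            loss = loss + F.l1_loss(F.avg_pool2d(out_f, 4), F.avg_pool2d(ms_f, 4))
            opt2.zero_grad()
            loss.backward()
            opt2.step()

The two-stage loop mirrors the strategy in the abstract: the reduced-scale sub-network is optimized first and frozen, and only FullNet2 receives gradients during the cross-scale stage, while both output terminals remain available at inference.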

Acknowledgement

This research was supported by the high performance computing (HPC) resources at Beihang University and the Supercomputing Platform of the School of Mathematical Sciences at Beihang University, for which we are very grateful. We also thank the Open Remote Sensing platform for providing the MATLAB scripts of the pan-sharpening methods used in this study through https://openremotesensing.net/. Our special thanks go to the authors of (Zhou, Liu, and Wang 2021) for providing their source images and codes through https://github.com/zhysora/PSGan-Family, the authors of (Giuseppe et al. 2016) for providing the MATLAB scripts of the PNN method through http://www.grip.unina.it, the authors of (Yang et al. 2017) for providing the TensorFlow scripts of the PanNet method through https://xueyangfu.github.io/projects/iccv2017.html, and the authors of (Ma et al. 2020) for providing the TensorFlow scripts of the PanGAN technique through https://github.com/yuwei998/PanGAN.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The work was supported by the National Natural Science Foundation of China [61671002].
