Research Article

Multi-modal land cover mapping of remote sensing images using pyramid attention and gated fusion networks

Pages 3509-3535 | Received 08 Oct 2021, Accepted 30 Jun 2022, Published online: 15 Jul 2022
 

ABSTRACT

Multi-modality data is becoming readily available in remote sensing (RS) and can provide complementary information about the Earth’s surface. Effective fusion of multi-modal information is thus important for various applications in RS, but also very challenging due to large domain differences, noise, and redundancies. There is a lack of effective and scalable fusion techniques for bridging multiple modality encoders and fully exploiting complementary information. To this end, we propose a new multi-modality network (MultiModNet) for land cover mapping of multi-modal remote sensing data based on a novel pyramid attention fusion (PAF) module and a gated fusion unit (GFU). The PAF module is designed to efficiently obtain rich fine-grained contextual representations from each modality with a built-in cross-level and cross-view attention fusion mechanism, and the GFU module utilizes a novel gating mechanism for early merging of features, thereby diminishing hidden redundancies and noise. This enables supplementary modalities to effectively extract the most valuable and complementary information for late feature fusion. Extensive experiments on two representative RS benchmark datasets demonstrate the effectiveness, robustness, and superiority of the MultiModNet for multi-modal land cover classification.
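To make the role of the gated fusion unit (GFU) more concrete, below is a minimal PyTorch sketch of a gating mechanism that merges a supplementary modality into a primary feature stream. This is not the authors' implementation: the class name, layer choices, kernel sizes, and the exact gating formula are assumptions made purely for illustration of the general idea described in the abstract.

```python
# Hedged sketch of a gated fusion unit for two modality feature maps.
# Layer names, kernel sizes, and the gating formula are illustrative assumptions.
import torch
import torch.nn as nn


class GatedFusionUnit(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # The gate is predicted from the concatenated primary/supplementary features.
        self.gate_conv = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.out_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, primary: torch.Tensor, supplement: torch.Tensor) -> torch.Tensor:
        # A sigmoid gate in [0, 1] decides, per pixel and channel, how much of the
        # supplementary modality is merged into the primary stream, which can
        # suppress redundant or noisy responses before later fusion stages.
        gate = torch.sigmoid(self.gate_conv(torch.cat([primary, supplement], dim=1)))
        fused = primary + gate * supplement
        return self.out_conv(fused)


# Usage example: fuse optical (primary) and elevation-derived (supplementary) features.
optical_feats = torch.randn(2, 64, 128, 128)
dsm_feats = torch.randn(2, 64, 128, 128)
fused = GatedFusionUnit(64)(optical_feats, dsm_feats)
print(fused.shape)  # torch.Size([2, 64, 128, 128])
```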

Acknowledgements

The Vaihingen benchmark dataset was provided by the International Society for Photogrammetry and Remote Sensing (ISPRS), and the Agriculture-Vision dataset was provided by UIUC, IntelinAir, and University of Oregon. This work was supported by the Research Council of Norway under Grant 272399 and Grant 309439.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1. There may be a third or even more supplementary modalities; we therefore describe them using the i-th modality, as illustrated in , and assume they are ordered by informational richness and significance, i.e. Input-1 ≻ Input-2 ≻ ⋯ ≻ Input-i. In other words, each preceding modality can be seen as a primary modality with respect to the succeeding ones (if any); a sketch of this chained fusion is given below.
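The note above implies that additional modalities can be folded in sequentially, with each fused result acting as the primary stream for the next modality. The sketch below illustrates one way this ordering could be applied; the function name and the chained use of the `GatedFusionUnit` sketch from earlier are assumptions, not the paper's implementation.

```python
# Hedged sketch: sequential fusion of modalities ordered by informational richness,
# where each intermediate result serves as the primary stream for the next modality.
def fuse_modalities(feature_maps, gfu):
    # feature_maps: list of tensors ordered as Input-1, Input-2, ..., Input-i.
    # gfu: a two-input fusion module such as the GatedFusionUnit sketched above.
    fused = feature_maps[0]
    for supplementary in feature_maps[1:]:
        # Each preceding (already fused) modality acts as primary for the next one.
        fused = gfu(fused, supplementary)
    return fused
```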

Additional information

Funding

All the authors are associated with the Centre for Research-based Innovation Visual Intelligence (http://visual-intelligence.no), funded by the Research Council of Norway and consortium partners. RJ, MK, and QL are with the UiT Machine Learning Group (http://machine-learning.uit.no). This work was funded by the Norges Forskningsråd under Grant [272399] and Grant [309439].
