Research Article

When complementarity meets consistency: weighted collaboration fusion constrained by consistency between views for multi-view remote sensing scene classification

Pages 7492-7514 | Received 18 Jul 2023, Accepted 09 Nov 2023, Published online: 06 Dec 2023
 

ABSTRACT

Remote sensing scene classification is the basis of advanced smart urban planning tasks such as urban functional zone division and land use type identification. In recent years, a wide range of emerging data sources, such as satellites, unmanned aerial vehicles (UAVs), and ground sensors, has received growing attention for urban feature extraction. How to jointly and effectively exploit these multi-view data to improve scene classification performance has become a pressing challenge in remote sensing. Existing feature fusion methods tend to map data from different views into a common feature space, which is often difficult to find when the data differ greatly across views. Furthermore, because these methods require data from all views as input, they are not flexible enough to handle the case where only one view is available at inference time. To address these issues, a novel Coupled Parallel Architecture (CPA) using Weighted Collaboration Fusion Constrained by Consistency Between Views (CBV-WCF) is proposed in this paper. In the training phase, the CBV module reduces the impact of the heterogeneous gap across views by capturing the consistency information between views, while the WCF module fully mines and effectively fuses the complementary information between views to improve the performance of downstream tasks. In the inference phase, the proposed architecture effectively improves classification performance with either multi-view or single-view input. Our method is evaluated on air-ground dual-view scene classification, a typical multi-view task with large image differences between views. Experimental results on two publicly available air-ground dual-view datasets demonstrate that the proposed framework significantly improves classification performance while offering new insights and solutions for multi-view tasks. The code of this paper will be published at: https://github.com/Forest-repo/CBV-WCF.
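For readers wanting a concrete picture of the coupled parallel design described above, the following is a minimal, illustrative sketch only; the authors' actual implementation is at https://github.com/Forest-repo/CBV-WCF. The module names, the placeholder encoders, the cosine-similarity consistency term standing in for the CBV constraint, and the learned softmax fusion weights standing in for the WCF module are all assumptions made for illustration, not the paper's code.

```python
# Hypothetical sketch of a coupled parallel two-branch model with a
# between-view consistency term and weighted collaborative fusion.
# Not the authors' implementation; all names and choices are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoupledParallelSketch(nn.Module):
    """Two parallel view branches, coupled by a consistency constraint during
    training and fused by learnable weights for classification."""

    def __init__(self, feat_dim=512, num_classes=10):
        super().__init__()
        # Independent encoders for the aerial and ground views (placeholders).
        self.aerial_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.ground_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        # Learnable scalar weights for collaborative fusion of the two views.
        self.fusion_logits = nn.Parameter(torch.zeros(2))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, aerial=None, ground=None):
        feats = []
        if aerial is not None:
            feats.append(self.aerial_encoder(aerial))
        if ground is not None:
            feats.append(self.ground_encoder(ground))
        if len(feats) == 2:
            # Weighted collaboration fusion over both view embeddings.
            w = torch.softmax(self.fusion_logits, dim=0)
            fused = w[0] * feats[0] + w[1] * feats[1]
            # Consistency term: pull the two view embeddings together.
            consistency = 1.0 - F.cosine_similarity(feats[0], feats[1], dim=1).mean()
        else:
            # Single-view inference: use whichever branch received input.
            fused = feats[0]
            consistency = torch.zeros((), device=fused.device)
        return self.classifier(fused), consistency


if __name__ == "__main__":
    model = CoupledParallelSketch(num_classes=7)
    aerial = torch.randn(4, 3 * 64 * 64)      # toy aerial-view batch
    ground = torch.randn(4, 3 * 64 * 64)      # toy ground-view batch
    logits, cons = model(aerial, ground)      # dual-view training path
    print(logits.shape, cons.item())
    logits_single, _ = model(aerial=aerial)   # single-view inference path
    print(logits_single.shape)
```

In a training loop under these assumptions, the consistency term would be added to the classification loss so the two branches learn view-invariant features while the fusion weights learn how much each view should contribute; at inference either branch can be used alone, which mirrors the single-view flexibility claimed in the abstract.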

Acknowledgements

We would like to thank Qingdao University of Technology and Beijing Jiaotong University for their technical support, as well as all those who participated in this paper. This work was supported in part by the National Natural Science Foundation of China under Grant 62171247.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Author contributions

All the authors made significant contributions to this work. Project administration, S.H.; Innovations and original draft writing, K.Z.; Coding, S.L.; Review and editing, L.Z. and J.S. All authors have read and agreed to the published version of the manuscript.

Availability of data and code

The code and data that support the findings of this study are available from the corresponding author, Siyuan Hao (Email: [email protected]), upon reasonable request. Some useful information is also available at https://github.com/Forest-repo/CBV-WCF.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62171247.
