Research Article

A fusion-based contrastive learning model for cross-modal remote sensing retrieval

Pages 3359-3386 | Received 23 Jan 2022, Accepted 10 Jun 2022, Published online: 30 Jun 2022
 

ABSTRACT

With the rapid growth of cross-modal data, cross-modal retrieval has become a research hotspot in the field of remote sensing, and remote-sensing image–text retrieval (RSITR) has attracted extensive attention as a flexible and efficient way to obtain information of interest, with broad practical applications. However, most existing methods cannot adequately extract fine-grained unimodal features and are poor at exploring the potential correlations between modalities, leading to unsatisfactory performance. In addition, the majority of existing image–text retrieval datasets and methods are based on English, and little research has focused on Chinese captions, yet the application of image–text retrieval in remote sensing should not be restricted by language. In this article, we introduce a novel fusion-based contrastive learning model (FBCLM) for RSITR that addresses the problems of unimodal feature extraction and correlation exploration for remote-sensing image–text pairs, and that supports image–text retrieval on both English and Chinese caption datasets. Our model employs unimodal encoders containing self-attention modules to extract fine-grained features of each modality, and further uses a cross-modal fusion module, built on a cross-attention mechanism, to improve the discriminative ability of the feature representations. Furthermore, a contrastive loss is applied to enhance retrieval performance by exploring the underlying semantic relationships between visual and textual representations. In addition, we construct several remote-sensing image Chinese caption datasets for RSITR. Experimental results on several public RSITR datasets and the proposed datasets demonstrate that our model outperforms existing approaches on the cross-modal remote-sensing image–text retrieval task.
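To make the abstract's two central ingredients concrete, the sketch below illustrates a cross-attention fusion step over image and text token features followed by a symmetric contrastive (InfoNCE-style) loss over matched image–text pairs. This is a minimal illustration of the general techniques named in the abstract, not the authors' FBCLM implementation: the class names, embedding dimension, pooling strategy, and temperature value are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalFusion(nn.Module):
    """Illustrative cross-attention fusion: each modality attends to the other."""
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.txt_to_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.img_to_txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, img_tokens: torch.Tensor, txt_tokens: torch.Tensor):
        # Image tokens query the text tokens, and text tokens query the image tokens.
        img_fused, _ = self.txt_to_img(img_tokens, txt_tokens, txt_tokens)
        txt_fused, _ = self.img_to_txt(txt_tokens, img_tokens, img_tokens)
        # Mean-pool token features into one embedding per modality (assumed pooling).
        return img_fused.mean(dim=1), txt_fused.mean(dim=1)


def contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE-style loss: matched pairs on the diagonal are positives."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature                # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) +                  # image -> text direction
            F.cross_entropy(logits.t(), targets)) / 2           # text -> image direction
```

At retrieval time, the same normalized embeddings can be ranked by cosine similarity, so an image query returns the captions (English or Chinese) with the highest-scoring text embeddings, and vice versa.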

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 61790554 and Grant 62001499.
