ABSTRACT
In scene-level classification of remote sensing images, fusing multiple feature modalities can significantly boost performance. However, most methods directly fuse the features of different modalities without weighing the importance of each modality. Motivated by this observation, we propose a multi-modality weighted residual fusion method. First, the extracted high-level and low-level features of the scene image are encoded into a unified feature representation. Then the reconstruction residual of each modality for each scene class is calculated with two representation-based classifiers, i.e. sparse representation (SR) and collaborative representation (CR). After fusing the weighted reconstruction residuals of the two modalities under SR and CR, the class label is assigned to the category with the smallest fused residual. We evaluate the method extensively on two challenging remote sensing data sets; comparisons with state-of-the-art methods demonstrate the effectiveness of the proposed method.
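As a rough illustration only (not the authors' implementation, whose details are not given in the abstract), the CR branch and the weighted residual fusion can be sketched in Python on synthetic features. The ridge-regularized closed-form coding for CR is standard; the fusion weights and the data are assumptions, and a full SR branch would replace the ridge solver with an l1-regularized coder.

```python
import numpy as np

def cr_residuals(X, labels, y, lam=0.01):
    """Collaborative representation: code the query y over the whole
    dictionary X with a ridge penalty, then measure per-class residuals.
    X: (dim, n_atoms) training features, labels: (n_atoms,) class of each atom."""
    # Closed-form CR coding: alpha = (X^T X + lam*I)^{-1} X^T y
    G = X.T @ X + lam * np.eye(X.shape[1])
    alpha = np.linalg.solve(G, X.T @ y)
    classes = np.unique(labels)
    res = np.zeros(len(classes))
    for i, c in enumerate(classes):
        mask = labels == c
        # Reconstruction residual using only class c's atoms and coefficients
        res[i] = np.linalg.norm(y - X[:, mask] @ alpha[mask])
    return res

def fuse_and_classify(res_per_modality, weights):
    """Weighted sum of per-class residual vectors across modalities;
    the class with the smallest fused residual wins."""
    fused = sum(w * r for w, r in zip(weights, res_per_modality))
    return int(np.argmin(fused))

# Toy example: two classes, query drawn near the class-0 atoms.
X = np.array([[1.0, 0.9, 0.0, 0.0],
              [0.0, 0.1, 1.0, 0.9],
              [0.0, 0.0, 0.0, 0.1]])
labels = np.array([0, 0, 1, 1])
y = np.array([1.0, 0.05, 0.0])
r_mod1 = cr_residuals(X, labels, y)          # first feature modality
r_mod2 = cr_residuals(0.5 * X, labels, 0.5 * y)  # stand-in second modality
print(fuse_and_classify([r_mod1, r_mod2], weights=[0.6, 0.4]))  # class index
```

For an SR branch, `np.linalg.solve` would be swapped for a sparse coder (e.g. an l1-minimization routine), with the same per-class residual and fusion steps applied to its coefficients.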
Disclosure statement
No potential conflict of interest was reported by the authors.
ORCID
Feng’an Zhao http://orcid.org/0000-0002-0331-3649