Research Article

Weighted residual self-attention graph-based transformer for spectral–spatial hyperspectral image classification

Pages 852-877 | Received 18 Sep 2022, Accepted 13 Jan 2023, Published online: 01 Mar 2023
 

ABSTRACT

Recently, deep learning has been successfully applied to hyperspectral image classification, and convolutional neural network (CNN)-based models have already achieved attractive classification results. However, a hyperspectral image is a spectral-spatial data cube that can generally be regarded as sequential data along the spectral dimension, and CNNs, which mainly model local relationships in images, perform poorly on such sequences. In contrast, the transformer has been shown to be a powerful structure for modeling sequential data. In the self-attention (SA) module of ViT, each token is updated by aggregating the features of all tokens according to the self-attention graph, so tokens can exchange information sufficiently with each other, which provides a powerful representation capability. However, as the layers become deeper, the transformer model suffers from network degradation. Therefore, to improve layer-to-layer information exchange and alleviate the network degradation problem, we propose a Weighted Residual Self-attention Graph-based Transformer (RSAGformer) model for hyperspectral image classification. It effectively alleviates the network degradation problem of deep transformer models by fusing the self-attention information between adjacent layers and extracts the information in the data effectively. Extensive experiments on six public hyperspectral datasets show that the RSAGformer yields competitive classification results.
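To make the core idea concrete, the following is a minimal PyTorch sketch of a self-attention layer that fuses its attention map with the attention map of the previous layer through a learnable weight, which is one plausible reading of the "weighted residual self-attention graph" fusion described in the abstract. The module name, the scalar fusion weight, and all hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class WeightedResidualSelfAttention(nn.Module):
    """Self-attention whose attention graph is fused with the previous
    layer's attention graph via a learnable weight.

    NOTE: illustrative sketch only; the exact fusion rule in the
    RSAGformer paper may differ (e.g. per-head or learned per-layer weights).
    """

    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Assumed: a single learnable scalar that weights the residual
        # contribution of the previous layer's attention map.
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x, prev_attn=None):
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # raw self-attention graph
        attn = attn.softmax(dim=-1)
        if prev_attn is not None:
            # Weighted residual fusion of adjacent layers' attention graphs.
            attn = self.alpha * prev_attn + (1.0 - self.alpha) * attn
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out), attn


# Usage: pass each layer's fused attention map on to the next layer.
layer1 = WeightedResidualSelfAttention(dim=64)
layer2 = WeightedResidualSelfAttention(dim=64)
tokens = torch.randn(2, 200, 64)              # e.g. 200 spectral-band tokens
out1, attn1 = layer1(tokens)
out2, attn2 = layer2(out1, prev_attn=attn1)
```

In this sketch the fusion acts as a residual path on the attention graph itself rather than on the token features, which is one way the layer-to-layer information exchange described above could mitigate degradation in deeper transformer stacks.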

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This work was supported in part by the Beijing Natural Science Foundation (4214062), the Beijing Municipal Science and Technology Project (KM202210005023, KM202110005026), and the National Natural Science Foundation of China (61902282, 62006009).
