Research Article

An efficient dual feature fusion model of feature extraction for hyperspectral images

Pages 4217-4238 | Received 05 Mar 2023, Accepted 24 Jun 2023, Published online: 21 Jul 2023
 

ABSTRACT

Conventional feature extraction (FE) and spatial- or context-preserving filters have been studied extensively for hyperspectral images (HSIs). However, several issues remain unresolved: unfolding the data into a 2-D matrix before extraction destroys the complete structural information; additional context- or spatial-preserving filters leave the number of resulting features unchanged or even increased; and the choice of retained dimensionality relies heavily on experience, which significantly increases the operating time. This article presents an efficient FE framework, a dual feature fusion model (DFFM), to address these issues. Specifically, a novel two-order feature fusion (FF) weighted by partial Shannon entropy is proposed to retain low-dimensional characteristics. Then, a three-order FF using constrained Tucker compression is performed on the resulting elements, preserving the intact spatial structure and saving computing costs. The framework also selects a suitable number of retained features automatically and is robust to noise and to the choice of training set. Comparative experiments on three benchmark HSIs with different training sizes verify the efficiency of DFFM. The results show that the framework is robust and effective, outperforming several state-of-the-art techniques in both classification accuracy and execution time.
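The abstract does not spell out the algorithm, but the Tucker-compression idea it builds on can be illustrated with a minimal truncated higher-order SVD (HOSVD) sketch in NumPy. This is a generic illustration, not the paper's DFFM implementation: the function names, the chosen ranks, and the toy data cube are all hypothetical. It shows how a third-order HSI cube (rows × columns × bands) can be compressed along the spectral mode while the spatial structure stays intact, which is the property the abstract emphasizes.

```python
import numpy as np

def unfold(tensor, mode):
    """Matricize a 3-D tensor along the given mode (mode becomes the rows)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tucker_compress(hsi, ranks):
    """Truncated HOSVD: project each mode onto its leading singular vectors.

    Returns the compressed core tensor and the per-mode factor matrices.
    """
    factors = []
    for mode, r in enumerate(ranks):
        # Leading left singular vectors of the mode-m unfolding.
        u, _, _ = np.linalg.svd(unfold(hsi, mode), full_matrices=False)
        factors.append(u[:, :r])
    # Multiply the tensor by each factor transpose along its own mode.
    core = hsi
    for mode, u in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode
        )
    return core, factors

# Toy "HSI" cube: 10 x 12 spatial pixels, 30 spectral bands (synthetic data).
rng = np.random.default_rng(0)
cube = rng.standard_normal((10, 12, 30))

# Keep the full spatial modes, reduce the spectral mode from 30 to 5.
core, factors = tucker_compress(cube, ranks=(10, 12, 5))
print(core.shape)  # (10, 12, 5): spatial layout preserved, bands compressed
```

Here the reduced spectral dimension (5) is fixed by hand; the abstract's point is that DFFM instead selects the number of retained features automatically, and its compression is constrained rather than the plain truncated HOSVD shown above.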

Acknowledgements

The authors are grateful for the comments and contributions of the editors, anonymous reviewers, and editorial team members.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This study was supported in part by the Natural Science Foundation of Hunan Province under Grant 2023JJ10054, in part by the National Natural Science Foundation of China under Grants 41875061 and 51609254, and in part by the Postgraduate Research & Practice Innovation Program of Jiangsu Province under Grant SJCX22_0246.
