ABSTRACT
In this letter, a new deep learning framework for spectral–spatial classification of hyperspectral images is presented. The proposed framework serves as an engine for merging spatial and spectral features through suitable deep learning architectures: stacked autoencoders (SAEs) and deep convolutional neural networks (DCNNs), followed by a logistic regression (LR) classifier. In this framework, SAEs extract useful high-level representations from one-dimensional inputs and are therefore well suited to reducing the dimensionality of the spectral features, while DCNNs automatically learn rich features from the training data and have achieved state-of-the-art performance on many image classification benchmarks. Although DCNNs are robust to distortion, they extract features at only a single scale and hence cannot tolerate large-scale variance of objects. To address this, spatial pyramid pooling (SPP) is introduced into hyperspectral image classification for the first time, pooling the spatial feature maps of the top convolutional layers into a fixed-length feature vector. Experimental results on widely used hyperspectral data sets indicate that classifiers built within this deep learning-based framework provide competitive performance.
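The fixed-length pooling idea can be illustrated with a minimal NumPy sketch of spatial pyramid pooling. This is not the authors' implementation; the pyramid levels (1×1, 2×2, 4×4) and the max-pooling choice are illustrative assumptions. For each level n, the spatial grid of a (channels, height, width) feature map is split into n × n bins and each bin is max-pooled per channel, so the output length depends only on the channel count and the levels, not on the spatial size.

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map into a fixed-length vector.

    For each pyramid level n, the H x W grid is split into n x n bins
    and each bin is max-pooled per channel. The output length is
    C * sum(n * n for n in levels), independent of H and W.
    Illustrative sketch only; levels and pooling op are assumptions.
    """
    C, H, W = fmap.shape
    pooled = []
    for n in levels:
        # Evenly spaced integer bin edges; assumes H >= n and W >= n
        # so every bin is non-empty.
        h_edges = np.linspace(0, H, n + 1).astype(int)
        w_edges = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = fmap[:, h_edges[i]:h_edges[i + 1],
                               w_edges[j]:w_edges[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)
```

Because the output length is fixed, convolutional feature maps of different spatial sizes can feed the same fully connected or LR layer: with 8 channels and levels (1, 2, 4), any input yields a vector of length 8 × (1 + 4 + 16) = 168.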
Acknowledgement
The authors would like to thank the anonymous reviewers who contributed to the quality of this letter by providing helpful suggestions.
Disclosure statement
No potential conflict of interest was reported by the authors.