Canadian Journal of Remote Sensing
Journal canadien de télédétection
Volume 49, 2023 - Issue 1
Research Article

Hyperspectral Image Classification Based on the Gabor Feature with Correlation Information


Article: 2246158 | Received 30 Nov 2022, Accepted 08 Jun 2023, Published online: 25 Aug 2023

Abstract

Gabor filters are widely used to extract the spatial texture features of hyperspectral images (HSI) for HSI classification; however, a single Gabor filter cannot capture the complete image features. In this paper, we propose an HSI classification method that combines the Gabor filter (GF) and the domain transform normalized convolution (DTNC) filter. First, we use the Gabor filter to extract spatial texture features from the first two principal components of the HSI after PCA dimensionality reduction. Second, we use the DTNC filter to extract spatial correlation features from all bands of the HSI. Finally, the Large Margin Distribution Machine (LDM) classifies the HSI using a linear fusion of the two kinds of spatial features. The experimental results show that the classification accuracy on the Indian Pines, Pavia University, and Kennedy Space Center datasets is 96.64, 98.23, and 98.95% with only 4, 3, and 6% training samples, respectively; these accuracies are 2–20% higher than those of the other tested methods. Compared with the SVM, EPF, IFRF, PCA-EPFs, LDM-FL, and GFDN methods, the proposed method, GFDTNCLDM, significantly improves the accuracy of HSI classification.


Introduction

Hyperspectral remote sensing images are characterized by high spectral resolution, low spatial resolution, highly correlated spectral information, and high redundancy (Luo et al. Citation2022; Sellami and Tabbone Citation2022). The combination of spectral and spatial features to improve the classification accuracy of HSI has become a hot research topic, and the core problems are texture feature extraction and the effective combination of spectral information and spatial features. Currently, the spatial feature extraction methods used in HSI classification include morphological filtering (Guo et al. Citation2022; Tan et al. Citation2021), Markov random fields (Cao et al. Citation2022; Fatemighomi et al. Citation2022), and image segmentation (Cao et al. Citation2020; Sun et al. Citation2021). Many scholars use various filters to obtain the spatial texture features of HSI for pixel-level classification of hyperspectral remote sensing images, such as the Bilateral Filter (BF) (Shen and Bai Citation2006; Kotwal and Chaudhuri Citation2010), the Gabor Filter (GF) (Shen and Bai Citation2006), and the Guided Filter (GDF) (He et al. Citation2012).

BF is a non-linear filter that can smooth noise while preserving edges in HSIs, and it is widely used to extract spatial texture features of HSI for pixel classification (Liao et al. Citation2019b). However, most scholars only use it to extract spatial texture features to assist the classifier, which leads to limited classification results. Xia et al. (Citation2016) first randomly select several subsets from the original feature space to obtain the independent components of the spectrum using the ICA method, then use the effective EPF method to generate spatial features, and finally use a random forest or rotation forest classifier to classify the spectral-spatial features. Kang et al. (Citation2014b) apply a spectral-spatial classification method based on edge-preserving filtering (EPF) to classify HSI using SVM, where the resulting classification map can be expressed as multiple probability maps. The classification results are obtained by applying BF or GDF to each probability map, with the first PCA component or the first three PCA components of the HSI used as the guide image. To obtain a better classification effect, Kang et al. (Citation2017) improve the EPF method and propose a PCA-based EPF (PCA-EPFs) HSI classification method, which stacks the constructed spatial information with the spatial features extracted by edge-preserving filters to fuse them into a new feature; after PCA dimensionality reduction, SVM performs the HSI classification. Hu et al. (Citation2022) use PCA to reduce the dimensionality of HSI, use multiple principal components as the spatial- and range-domain information of BF, and use an extreme learning machine for classification.

GDF can effectively preserve edges and requires no iterative computation, and it is often used to extract spatial texture features to assist the classifier. However, most scholars use it to extract spatial texture features without considering the fusion of other features, leading to poor classification accuracy. Shambulinga and Sadashivappa (Citation2019) propose an HSI SVM classification method based on GDF and PCA; they use PCA to extract and reduce spectral features in hyperspectral data and classify them using SVM. Guo et al. (Citation2018a, Citation2018b) combine the K-Means algorithm with a guided filter, using K-Means to extract spatial information and GDF to optimize the HSI classification results. Liu et al. (Citation2022) use GDF for feature filtering, which eliminates redundant features, and propose a GDF and enhancement strategy model to classify HSI.

Some scholars use recursive filtering (Vaddi and Manoharan Citation2020) to extract spatial texture features and then directly classify them; however, a single spatial feature cannot improve classification accuracy significantly. Kang et al. (Citation2014a) divide hyperspectral data into subsets and fuse them, then use recursive filtering to obtain spatial information that is submitted to SVM for classification, proposing the IFRF method. Zhan et al. (Citation2016) implement an HSI classification method by combining recursive filters and LDM.

The frequency and direction representation of the GF is close to that of the human visual system, and it can extract spatial local frequency features. It is an effective texture detection filter, and many scholars use GF to obtain texture features to assist HSI classification. Bau et al. (Citation2010) design a 3D GF filter bank to capture energy in spectral-spatial data at different orientations and scales. Shen and Jia (Citation2011) design a set of GF filters with different frequencies and directions to extract the variance of the HSI spatial-spectral signal, and perform feature selection and fusion to reduce the redundancy between Gabor features. Wang et al. (Citation2014) use GF filtering to obtain better spatial features, combine it with the active learning method to simplify the spatial neighborhood information of labeled training samples, and propose a spatial-spectral HSI classification algorithm and a semi-supervised HSI classification algorithm based on label propagation. Jia et al. (Citation2016), based on multi-task joint sparse representation, use Gabor cubes for HSI classification and the Fisher discriminant criterion of GF to extract the most representative HSI cubes for each class. Rajadell et al. (Citation2013) use the HSI texture features obtained by the GF to reduce the number of spectra required for dimensionality-reduction and propose a spectral-spatial pixel representation method. Li and Du (Citation2014) couple the nearest subspace classification with distance-weighted Tikhonov regularization, and they use the spatial features extracted from GF in the nearest regularized subspace classifier, implementing the HSI classification method. He et al. 
(Citation2017) study discriminative low-rank GFs for HSI classification with spatial-spectral combinations, decomposing a standard three-dimensional spectral-spatial GF into eight sub-filters corresponding to different combinations of low-pass and band-pass rank-one filters, so that each sub-filter extracts the appropriate features. Ye et al. (Citation2016) extract features from HSI by using GF embedded in principal component analysis, reduce the dimensionality of the spatial features using local Fisher discriminant analysis and locality-preserving non-negative matrix factorization, and propose two HSI classification algorithms. Imani and Ghassemian (Citation2016) extract spatial texture features, shape features, and pixel neighborhood information using the gray-level co-occurrence matrix, GF, and morphological filters, and find the optimal classification algorithm combining the different features. Jia et al. (Citation2018) propose an HSI classification method based on 3D Gabor wavelet phase coding and a Hamming distance matching frame, using directional Gabor phase features and a quadrant bit coding scheme. Kang et al. (Citation2018) extract spectral and Gabor features from the first three PCA principal components of HSI by GF, and realize an HSI classification method that fuses Gabor features with deep network learning. Ghassemi et al. (Citation2021) use a 3D GF to extract spatial features, including textures and edges, from the input data, and use an SVD-QR-optimized CNN for classification. Bhatti et al. (Citation2022) use a two-dimensional GF to extract spatial features from the dimensionality-reduced hyperspectral data, then use a CNN to generate spectral features, and finally use a dual-optimization classifier to classify the final extracted features. Pan et al. (Citation2022) generate a Gabor feature data cube with joint spatial-spectral features by three-dimensional Gabor filtering of the HSI; the cube is then input into co-selection self-training to produce labeled samples, and a co-selection strategy method is proposed. Xiao et al. (Citation2022) use the least-squares method to obtain a set of pixel probability maps from the input data, then filter these probability maps with GF to extract spatial features and input them to a standard broad learning system for classification. Huang et al. (Citation2022) use a Gabor ensemble filter, which filters each input channel with some fixed Gabor filters and learnable filters simultaneously, to extract deep features for HSI classification with a CNN. In the past, many scholars used Gabor filters to extract a large number of spatial texture features by adjusting frequencies and directions, but they only considered the Gabor space without combining spatial correlation features to improve the classification accuracy of HSIs.

Many advances in extracting spatial features for HSI classification have been made. However, these methods obtain spatial texture features through only a single filter, often ignore spatial correlation features, and cannot obtain the complete HSI features. Experience shows that integrating spatial features into the classifier can significantly improve classification accuracy; therefore, more effective spatial feature mining and fusion methods need to be studied further.

In summary, research on extracting spatial features of HSI for classification has produced some achievements, but shortcomings remain: (1) a single Gabor feature cannot capture the complete spatial texture of ground objects; (2) using the GF to extract texture features is prone to losing spatial correlation information; (3) the existing methods do not consider fusing Gabor features and spatial correlation features into a complete spatial feature. Therefore, the existing HSI classification methods based on spatial features need further improvement.

This paper extracts Gabor features from the hyperspectral data and obtains spatial correlation features, yielding better spatial features and providing better training samples for the classifier. We propose a Gabor filtering algorithm with correlation information for HSI classification (GFDTNCLDM). The experimental results show that fusing the spatial texture features extracted by GF with spatial correlation features can effectively assist LDM and significantly improve classification performance.

Methods

Gabor filter

GF is an edge feature extraction filter, and its frequency and direction characteristics are similar to those of the human visual system, which makes it suitable for extracting texture features of images. The GF kernel function for an HSI band is represented as (Haghighat et al. Citation2015):

$$\psi_{c,d}(x,y;f_c,\theta_d,\gamma,\sigma)=\exp\!\left(-\frac{x'^{2}+\gamma^{2}y'^{2}}{2\sigma^{2}}\right)\cos\!\left(2\pi f_c x'+\phi\right) \tag{1}$$

$$x'=x\cos\theta_d+y\sin\theta_d,\qquad y'=-x\sin\theta_d+y\cos\theta_d$$

By adjusting the frequency and direction of the filter, the GF filter bank can be expressed as:

$$\psi_{c,d}(x,y;f_c,\theta_d,\gamma,\sigma),\quad f_c=\frac{f_{\max}}{2^{c}},\quad \theta_d=\frac{d}{D}\pi,\quad c=0,\dots,C-1,\ d=0,\dots,D-1 \tag{2}$$

where C and D represent the number of frequencies and directions, respectively. Convolving $\psi_{c,d}(x,y)$ with the i-th band image of the HSI yields the spatial texture features:

$$F_{c,d}^{i}(x,y)=R^{i}(x,y)\ast\psi_{c,d}(x,y) \tag{3}$$

To extract better spatial features, this paper first performs PCA dimensionality reduction on the HSI so that most of the information is concentrated in the leading principal components, and then applies Gabor filtering to each of the retained principal components. The filter size is 45×45 with C=5 and D=6, generating a filter group for each principal component; each filter group produces 35 filtered images (Figure 1).
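As a concrete illustration, Equations (1) and (2) can be implemented directly with NumPy. This is a minimal sketch, not the authors' code; the values of f_max, γ, σ, and φ are assumptions (the paper does not state them), and the per-component filter count follows directly from whichever C and D are chosen.

```python
import numpy as np

def gabor_kernel(size, fc, theta, gamma=0.5, sigma=8.0, phi=0.0):
    """Gabor kernel of Eq. (1); gamma, sigma, and phi defaults are assumed."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated x'
    yr = -x * np.sin(theta) + y * np.cos(theta)   # rotated y'
    return (np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * fc * xr + phi))

def gabor_bank(C=5, D=6, f_max=0.25, size=45):
    """Filter bank of Eq. (2): frequencies f_max / 2^c at orientations d*pi/D."""
    return [gabor_kernel(size, f_max / 2**c, d * np.pi / D)
            for c in range(C) for d in range(D)]
```

Each kernel in the bank is convolved with a principal-component image as in Equation (3) to produce one filtered image.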

Figure 1. Generation diagram of GF group.


Figure 2 shows the ground object image of the HSI together with composites of the first three PCA principal components. In addition, partial Gabor-filtered images of the first, second, and third principal components are shown in Figures 3–5, with (c, d) set to (0, 0), (1, 1), (2, 2), and (3, 3), respectively.

Figure 2. Indian Pines (a) ground feature composition, (b) 1st principal component composition of PCA, (c) 2nd principal component composition of PCA, and (d) 3rd principal component composition of PCA.


Figure 3. Filtering results of PCA first principal component (a) c = 0, d = 0, (b) c = 1, d = 1, (c) c = 2, d = 2, and (d) c = 3, d = 3.


Figure 4. Filtering result of PCA second principal component (a) c = 0, d = 0, (b) c = 1, d = 1, (c) c = 2, d = 2, and (d) c = 3, d = 3.


Figure 5. Filtering result of PCA third principal component (a) c = 0, d = 0, (b) c = 1, d = 1, (c) c = 2, d = 2, and (d) c = 3, d = 3.


To determine the number of principal components, this paper uses the Indian Pines dataset for a series of verification experiments with an increasing number of principal components, randomly selecting 5% of the samples as the training set and the remaining 95% as the test set. First, the first principal component is filtered and the filtered images are passed to SVM for classification; then the first two through the first eight principal components are filtered and classified in turn. The experimental results show that the 80 filtered images generated by the first two principal components give the best effect, with an overall classification accuracy (OA) of 95.8%. The filtered images generated by a single component carry too little information, while three or more components make the feature dimension too high, so the classification effect is unsatisfactory. Therefore, this paper uses the first two PCA principal components for the filtering and classification experiments (Figure 6).

Figure 6. Gabor filter classification experiment.


We normalize the HSI and then filter the leading principal components after PCA dimensionality reduction. The SVM classification procedure combined with GF is as follows:
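The original algorithm listing is not reproduced in this extract. A minimal sketch of this PCA–Gabor–SVM baseline, assuming a generic (H, W, B) data cube and a precomputed kernel list, might look like the following; the function name, normalization choice, and SVM parameters are our own illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def gf_svm_classify(cube, labels, train_mask, kernels, n_pcs=2):
    """cube: (H, W, B) hyperspectral image; labels, train_mask: flat (H*W,).
    kernels: list of 2-D Gabor kernels. Returns an (H, W) class map."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(float)
    flat = (flat - flat.min(0)) / (np.ptp(flat, 0) + 1e-12)  # normalize bands
    pcs = PCA(n_components=n_pcs).fit_transform(flat).reshape(h, w, n_pcs)
    # Gabor responses of Eq. (3) for each retained principal component
    feats = [convolve(pcs[:, :, i], k, mode='nearest')
             for i in range(n_pcs) for k in kernels]
    X = np.stack(feats, axis=-1).reshape(h * w, -1)
    clf = SVC(kernel='rbf', gamma='scale')
    clf.fit(X[train_mask], labels[train_mask])
    return clf.predict(X).reshape(h, w)
```

The per-pixel feature vector here is the stack of all Gabor responses at that pixel, so the classifier sees texture context rather than raw spectra alone.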

Domain transform normalized convolution filter

DTNCF can convert a two-dimensional image filter into a one-dimensional one and obtain good spatial correlation features for HSI classification (Gastal and Oliveira Citation2012; Liao and Wang Citation2020). For a uniform discretization S(Ω) of the original domain Ω, the DTNCF response for the HSI R at the i-th band is

$$F(e)=\frac{1}{w_e}\sum_{f\in S(\Omega)}R^{i}(f)\,K\big(\xi(e),\xi(f)\big) \tag{4}$$

where $w_e$ and $K(\cdot)$ are the normalization factor of $e$ and the filter kernel, respectively:

$$w_e=\sum_{f\in S(\Omega)}K\big(\xi(e),\xi(f)\big) \tag{5}$$

$$K\big(\xi(e),\xi(f)\big)=\delta\big\{\,|\xi(e)-\xi(f)|\le r\,\big\} \tag{6}$$

$$\delta(q)=\begin{cases}1 & q\ \text{is true}\\ 0 & \text{otherwise}\end{cases} \tag{7}$$

where $r$ is the filter radius and $\delta(\cdot)$ is the Boolean function. Equation (6) indicates that only neighborhood pixels on the same side of an edge contribute, so DTNCF has spatial correlation retention characteristics. The domain transform $\xi(h)$ maps the image into a one-dimensional vector by integrating the partial derivatives of the image:

$$\xi(h)=\int_{0}^{h}\left(1+\frac{\sigma_s}{\sigma_r}\sum_{l=1}^{c}\big|R_l'(x)\big|\right)dx \tag{8}$$

$$\sigma_r=\sqrt{3}\,\sigma_J \tag{9}$$

$$\sigma_J^{d}=\sigma_s\sqrt{3}\,\frac{2^{\,M-d}}{\sqrt{4^{M}-1}} \tag{10}$$

where $\sigma_s$ denotes the spatial standard deviation, $\sigma_r$ the range standard deviation, $M$ the total number of iterations, and $\sigma_J^{d}$ the value at the $d$-th iteration.

Because of these spatial correlation-preserving characteristics, DTNCF can compensate for the incomplete spatial feature extraction of the Gabor filter.
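To make Equations (4)–(9) concrete, a single one-dimensional normalized-convolution pass over one image row can be sketched as below. This is an illustrative sketch, not the authors' implementation; the σ values and the discrete form of ξ are assumptions.

```python
import numpy as np

def dtnc_row(row, sigma_s=60.0, sigma_r=0.4):
    """One 1-D domain-transform normalized-convolution pass (Eqs. 4-8)
    over a single image row; sigma_s and sigma_r are illustrative."""
    # Discrete xi(h): cumulative sum of 1 + (sigma_s/sigma_r) * |R'(x)|
    d = 1.0 + (sigma_s / sigma_r) * np.abs(np.diff(row, prepend=row[:1]))
    xi = np.cumsum(d)
    r = np.sqrt(3) * sigma_s                        # box-kernel radius (Eq. 9 form)
    lo = np.searchsorted(xi, xi - r)                # first neighbor within r
    hi = np.searchsorted(xi, xi + r, side='right')  # one past the last neighbor
    csum = np.concatenate(([0.0], np.cumsum(row)))
    return (csum[hi] - csum[lo]) / (hi - lo)        # normalized convolution
```

Because ξ grows sharply wherever the row has a large derivative, pixels on opposite sides of an edge end up farther apart than r in the transformed domain and are not averaged together, which is exactly the correlation-preserving behavior described above.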

LDM classification method

LDM improves on SVM classification performance with the central idea of simultaneously maximizing the margin mean and minimizing the margin variance. SVM predicts unlabeled data by the hyperplane that maximizes the minimum margin (Zhang and Zhou Citation2014); optimizing the whole margin distribution can achieve better generalization performance. Bai et al. (Citation2022) proposed a large margin distribution machine, LDMM, with an optimized margin distribution, which maximizes the average margin and minimizes the margin variance. The classification hyperplanes of SVM and LDM are shown in Figure 7: the SVM hyperplane maximizes the smallest margin among all samples (Liao et al. Citation2019b), whereas the LDM hyperplane considers maximizing the margin mean and minimizing the margin variance. Compared with the SVM hyperplane, the LDM hyperplane is more effective for classification.

Figure 7. Hyperplanes of LDM and SVM.


To further show the superiority of LDM, Figure 7 uses the symbols “○” and “△” to draw two classes of HSI samples; the orange line represents the SVM hyperplane, HSVM, while the purple line represents the LDM hyperplane, HLDM.
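The margin statistics that distinguish the two hyperplanes can be stated in a few lines of code (a sketch with illustrative names; the source does not provide this):

```python
import numpy as np

def margin_stats(w, b, X, y):
    """Signed margins y_i * (w . x_i + b) for labels y in {-1, +1}.
    SVM maximizes the minimum margin; LDM instead maximizes the margin
    mean while minimizing the margin variance."""
    m = y * (X @ w + b)
    return m.min(), m.mean(), m.var()
```

A hyperplane chosen by the minimum margin alone can be dragged around by a few boundary samples, whereas the mean/variance objective reflects the whole sample distribution.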

Hyperspectral classification method based on Gabor features

There is a strong spatial correlation between hyperspectral pixels. Previous Gabor-filter-based HSI classification methods focused more on extracting the texture information of ground objects; although the filter can extract good texture information, it easily loses the spatial correlation information of ground objects. To compensate for this deficiency, this paper uses DTNCF to supplement spatial correlation features, and we integrate the Gabor features and correlation features to achieve LDM classification, forming the GFDTNCLDM method, which is given as follows:

First, we use GF and DTNCF to extract spatial texture features from the first two principal components of the PCA-reduced HSI and spatial correlation features from all hyperspectral bands, respectively. LDM then performs the classification and outputs the best classification result through comparison. The algorithm is expressed as:

$$relt=\max \mathrm{OA}\big(\mathrm{LDM}(D_{Gabor}+D_{c})\big) \tag{11}$$

where $relt$ is the highest classification accuracy, OA represents the overall accuracy of the classification results, $D_{Gabor}$ is the spatial texture feature extracted by the Gabor filter, $D_{c}$ is the spatial correlation feature extracted by DTNCF, and LDM refers to classification optimization by the large margin distribution machine.

The detailed flowchart of the GFDTNCLDM algorithm is shown in Figure 8. The algorithm consists of seven execution steps: (1) normalize the HSI; (2) reduce the dimensionality of the HSI with PCA; (3) use the Gabor filter to extract spatial texture features from the first two principal components after PCA dimensionality reduction; (4) use DTNCF to extract spatial correlation features from the full-band HSI data; (5) fuse the spatial features; (6) use LDM to classify the fused spatial features; (7) output the best classification result.

Figure 8. Flowchart of GFDTNCLDM.


The detailed implementation process of GFDTNCLDM algorithm is as follows:
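The full listing is not reproduced in this extract. Steps (5)–(7) of the flowchart can be sketched as follows; since no reference LDM implementation is assumed to be available, an RBF SVM stands in for the LDM classifier here, and all names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def classify_fused(d_gabor, d_corr, labels, train_mask):
    """Steps 5-7 of the GFDTNCLDM flowchart: linearly fuse the Gabor
    texture features with the DTNC correlation features and classify.
    d_gabor: (N, F1) texture features; d_corr: (N, F2) correlation
    features; labels, train_mask: flat arrays of length N."""
    fused = np.hstack([d_gabor, d_corr])      # step 5: feature fusion
    clf = SVC(kernel='rbf', gamma='scale')    # stand-in for LDM (step 6)
    clf.fit(fused[train_mask], labels[train_mask])
    return clf.predict(fused)                 # step 7: classification map
```

Swapping the SVC for an LDM solver would reproduce the paper's pipeline exactly; the fusion step is unchanged either way.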

Experiments

Hyperspectral data description

This paper uses three HSI datasets to verify the effectiveness of GFDTNCLDM. The first dataset, Indian Pines (Liao et al. Citation2019b; Bai et al. Citation2022; Hao et al. Citation2022), was acquired in 1992 by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over the Indian Pines region in northwestern Indiana. The dataset contains 220 spectral bands with a spatial size of 145 × 145 pixels, of which 20 bands were removed due to noise and water absorption. The image consists of 16 classes.

Table 1. Comparison of classification precision (in percent) provided by different approaches (Indian Pines dataset).

The second dataset, Pavia University (Cai et al. Citation2021), was obtained by the ROSIS (Reflective Optics System Imaging Spectrometer) sensor over the University of Pavia; it contains 610 × 340 pixels and 115 bands, of which 12 spectral bands were removed due to noise and other factors. The remaining 103 bands cover nine land-cover categories.

Table 2. Comparison of classification accuracies (in percent) provided by different approaches (Pavia University dataset).

The third dataset, Kennedy Space Center (Lei et al. Citation2021), is a hyperspectral image taken on March 23, 1996, by the NASA AVIRIS sensor over the Kennedy Space Center, Florida. A total of 224 bands were collected with a spectral resolution of 10 nm over the 400–2500 nm range. The images were taken at an altitude of about 20 km with a spatial resolution of 18 m. After removal of water-absorption and noisy bands, the remaining 176 bands are analyzed. The image includes 13 types of ground objects.

Table 3. Comparison of classification accuracies (in percent) provided by different methods (Kennedy Space Center).

Parameter setting

To verify the superiority of the proposed method, the following methods are compared with GFDTNCLDM:

  1. SVM: SVM with a Gaussian radial basis function kernel is applied to the raw features of the hyperspectral image (Hao et al. Citation2022).

  2. EPF: in this method, hyperspectral images are classified by SVM. Then, Edge-Preserving Filter is conducted for each probabilistic map. Finally, the class of every pixel is selected based on the maximum probability (Hao et al. Citation2022).

  3. IFRF: This method attains the classified results with SVM based on the image fusion and recursive filter (Hao et al. Citation2022).

  4. LDM: LDM with a Gaussian radial basis function kernel is applied to the raw features of the hyperspectral image (Liao and Wang Citation2020).

  5. LDM-FL: This method obtains the classified results with LDM from the recursive filter (Liao and Wang Citation2020).

  6. PCA-EPFs: The spatial information constructed by applying edge-preserving filters is stacked to form the fused feature, whose dimension is reduced by PCA before SVM classification (Hao et al. Citation2022).

  7. GFDN: This method extracts the spatial features by GF on the first three principal components of the hyperspectral image to form the fused features, and then combines the original features for the deep network classification (Liao and Wang Citation2020).

  8. DGEF: This method uses a Gabor ensemble filter that filters each input channel utilizing some fixed Gabor filters and learnable filters to extract features for HSI classification.

  9. GF-SVM: The hyperspectral dimensionality is reduced with PCA, and the first 10% of principal components are selected for SVM based on GF (Liao et al. Citation2019a).

  10. GF-LDM: The hyperspectral dimensionality is reduced with PCA, and the first 10% of principal components are selected for LDM based on GF (Liao et al. Citation2019a).

  11. DTNCF-SVM: The hyperspectral dimensionality is reduced with PCA, and the first 10% of principal components are selected for SVM based on DTNCF (Liao et al. Citation2019a).

  12. DTNCF-LDM: The hyperspectral dimensionality is reduced with PCA, and the first 10% of principal components are selected for LDM based on DTNCF (Liao et al. Citation2019a).

  13. GFDTNCLDM: The method proposed in this paper.

  14. GFDTNCF-SVM: The method proposed in this paper, except that the classification results are generated by SVM.

In this paper, we use Overall Accuracy (OA), Average Accuracy (AA), and the Kappa statistic (Kappa) to measure classification accuracy. To avoid biased estimation, we perform 12 independent tests in MATLAB R2021b on a machine with an i9-10900 CPU, an NVIDIA GeForce RTX 3080 GPU, and 32 GB of RAM.
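For reference, the three scores can all be computed from a single confusion matrix; a sketch (function and variable names are our own):

```python
import numpy as np

def oa_aa_kappa(y_true, y_pred, n_classes):
    """OA, AA, and Cohen's kappa from integer class labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    cm = np.zeros((n_classes, n_classes))
    np.add.at(cm, (y_true, y_pred), 1)             # rows: true, cols: predicted
    n = cm.sum()
    oa = np.trace(cm) / n                          # Overall Accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))     # Average per-class Accuracy
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)                   # Kappa statistic
    return oa, aa, kappa
```

AA weights every class equally, which is why it is reported alongside OA for datasets such as Indian Pines where class sizes are very unbalanced.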

Investigation of the proposed method

Experiment of Indian Pines

To evaluate the classification performance of the GFDTNCLDM method, we used fourteen methods to classify and verify the Indian Pines data. Figure 9(a) shows the ground-truth distribution of the Indian Pines dataset. All 16 categories were used, with 4% of the samples (about 420) selected as the training set and the remainder as the test set; the sample numbers of three of the 16 classes of Indian Pines were not abundant for training. Table 1 lists the classification precision of the fourteen approaches, and Figure 9 shows the resulting classification maps.

Figure 9. Classification maps of different methods on the Indian Pines dataset. (a) Ground (b) SVM, OA=75.95% (c) LDM, OA=78.15% (d) EPF, OA=87.22% (e) IFRF, OA=89.99% (f) PCA-EPFs, OA=89.90% (g) LDM-FL, OA=93.21% (h) GFDN, OA=96.33% (i) DGEF, OA=94.37% (j) GF-SVM, OA=94.42% (k) DTNCF-SVM, OA=93.05% (l) GF-LDM, OA=94.18% (m) DTNCF-LDM, OA=96.10 % (n) GFDTNCF-SVM, OA=95.71% (o) GFDTNCLDM, OA=96.64%.


Figure 9 shows the classification results on Indian Pines, and Table 1 lists the OA, AA, and Kappa for each class and method, demonstrating that GFDTNCLDM reaches excellent accuracy: OA = 96.64%, AA = 94.81%, and Kappa = 96.17%. Besides, the accuracy of GFDTNCLDM reaches 97% in seven classes. This experimental result indicates that the classification performance of GFDTNCLDM is significantly improved compared with the other approaches.

In addition, Table 1 shows that the OA value of GFDTNCLDM on Indian Pines is higher than those of SVM, LDM, EPF, IFRF, PCA-EPFs, LDM-FL, GFDN, DGEF, GF-SVM, DTNCF-SVM, GF-LDM, DTNCF-LDM, and GFDTNCF-SVM by 20.69, 18.49, 9.41, 6.65, 6.74, 3.43, 0.31, 2.27, 2.22, 3.59, 2.46, 0.54, and 0.93%, respectively. The experiment therefore fully verifies the validity of GFDTNCLDM for hyperspectral classification.

Experiment of Pavia University

Figure 10(a) shows the ground-truth distribution of the Pavia University dataset, in which nine classes were selected, with 3% of the samples as the training set and the remaining 97% as the test set. Table 2 lists the classification accuracy of the Pavia University dataset for the different methods, and Figure 10 shows the classification maps.

Figure 10. Classification maps of different methods on the Pavia University dataset. (a) Ground, (b) SVM, OA = 83.73%, (c) LDM, OA = 80.57%, (d) EPF, OA = 90.41%, (e) IFRF, OA = 92.97%, (f) PCA-EPFs, OA = 96.33%, (g) LDM-FL, OA = 96.95%, (h) GFDN, OA = 98.67%, (i) DGEF, OA = 97.76%, (j) GF-SVM, OA = 94.85%, (k) DTNCF-SVM, OA = 96.86%, (l) GF-LDM, OA = 95.10%, (m) DTNCF-LDM, OA = 96.55%, (n) GFDTNCF-SVM, OA = 99.14%, (o) GFDTNCLDM, OA = 99.23%.


Figure 10 shows the classification results for Pavia University, and Table 2 lists the OA, AA, Kappa, and per-class accuracy of the different methods. GFDTNCLDM obtains the best accuracy, with OA = 99.23%, AA = 98.54%, and Kappa = 98.98%; in addition, the accuracy of four classes exceeds 99% for GFDTNCLDM. Compared with the other classification methods, the proposed method clearly enhances classification performance.

Besides, the OA value of GFDTNCLDM is higher than those of SVM, LDM, EPF, IFRF, PCA-EPFs, LDM-FL, GFDN, DGEF, GF-SVM, DTNCF-SVM, GF-LDM, DTNCF-LDM, and GFDTNCF-SVM by 15.50, 18.66, 8.82, 6.26, 2.90, 2.28, 0.56, 1.47, 4.38, 2.37, 4.13, 2.68, and 0.09%, respectively. This experiment again validates the effectiveness of GFDTNCLDM for hyperspectral classification.

Experiment of Kennedy Space Center

The ground-truth distribution of the Kennedy Space Center dataset contains 16 categories, all of which were used; 6% of the samples (about 313) form the training set and the remaining 94% the test set. The classification accuracies of the different methods on the Kennedy Space Center dataset are listed in the corresponding table, and Figure 11 shows the classification maps.

Figure 11. Classification maps of different methods on the Kennedy Space Center dataset. (a) Ground, (b) SVM, OA = 85.84%, (c) LDM, OA = 87.07%, (d) EPF, OA = 91.67%, (e) IFRF, OA = 96.16%, (f) PCA-EPFs, OA = 96.71%, (g) LDM-FL, OA = 93.84%, (h) GFDN, OA = 96.14%, (i) DGEF, OA = 97.20%, (j) GF-SVM, OA = 92.80%, (k) DTNCF-SVM, OA = 97.70%, (l) GF-LDM, OA = 95.88%, (m) DTNCF-LDM, OA = 98.33%, (n) GFDTNCF-SVM, OA = 98.11%, (o) GFDTNCLDM, OA = 98.95%.


The classification results for the Kennedy Space Center dataset include the OA, AA, and Kappa accuracy of each method. GFDTNCLDM achieves the best accuracy, with OA = 98.95%, AA = 98.67%, and Kappa = 98.83%. Moreover, four classes reach 100% accuracy with GFDTNCLDM. The experiment demonstrates that the classification performance is improved compared with the other methods.

Also, the OA of GFDTNCLDM is higher than that of SVM, LDM, EPF, IFRF, PCA-EPFs, LDM-FL, GFDN, DGEF, GF-SVM, DTNCF-SVM, GF-LDM, DTNCF-LDM, and GFDTNCF-SVM by 13.11, 11.88, 7.28, 2.79, 2.24, 5.11, 2.81, 1.75, 6.15, 1.25, 3.06, 0.62, and 0.83%, respectively. The experimental results validate the GFDTNCLDM method and verify its superior performance.

Figure 12. Classification maps of different methods for HSI (a) Indian Pines, (b) Salinas Valley, and (c) Kennedy Space Center.


Comparison of running time

Table 4 compares the running times in seconds, including training time and testing time. GFDTNCLDM combines Gabor and DTNC filtering with LDM classification. As the table shows, the running time of the algorithm is dominated by the DTNCF and LDM stages. DTNCF decomposes two-dimensional image filtering into one-dimensional passes, which greatly improves filtering efficiency on hyperspectral images. In addition, LDM has a longer running time than the SVM classifier; however, it still retains an advantage over DGEF, which relies on deep learning.
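The efficiency argument rests on separability: a 2D edge-aware filter becomes a sequence of cheap 1D passes over rows and columns. The following is a simplified sketch of one horizontal pass of a domain-transform normalized-convolution (box) filter, our own reconstruction following Gastal and Oliveira rather than the authors' code; parameter values are illustrative, and a vertical pass is the same operation on the transposed image:

```python
import numpy as np

def dt_nc_1d(img, sigma_s=30.0, sigma_r=0.2, radius=None):
    """One horizontal pass of a domain-transform normalized-convolution
    filter: warp each row into a domain where distances grow across
    edges, then apply a plain box average there."""
    img = np.asarray(img, dtype=float)
    if radius is None:
        radius = sigma_s * np.sqrt(3.0)  # box radius in the transformed domain
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        row = img[r]
        # domain transform: unit spatial step plus range-weighted gradient,
        # so large intensity jumps push neighbors far apart (edge-aware)
        dct = 1.0 + (sigma_s / sigma_r) * np.abs(np.diff(row, prepend=row[:1]))
        ct = np.cumsum(dct)
        # normalized convolution = box average over [ct - radius, ct + radius]
        lo = np.searchsorted(ct, ct - radius, side='left')
        hi = np.searchsorted(ct, ct + radius, side='right')
        csum = np.concatenate(([0.0], np.cumsum(row)))
        out[r] = (csum[hi] - csum[lo]) / np.maximum(hi - lo, 1)
    return out
```

On a flat region the pass reduces to plain smoothing, while a strong step edge inflates the transformed distance so the box never averages across it; this 1D structure is what keeps the per-band cost linear in the number of pixels.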

Table 4. Comparison of classification running time (in second) provided by different approaches (Indian Pines dataset).

Analysis

From the results above, we can draw the following conclusions. First, GFDTNCLDM achieves better classification results on all three datasets: the OA on the Indian Pines, Pavia University, and Kennedy Space Center datasets is 96.64, 99.23, and 98.95%, respectively, which is 13–20% higher than that of SVM. This shows that, after fusing the Gabor features, LDM can achieve high-precision classification, which amply verifies the effectiveness of the GFDTNCLDM algorithm for HSI classification.

Second, the OA of GFDTNCLDM on the three datasets is 7–9% higher than that of EPF and 2–6% higher than that of PCA-EPFs, which shows that it outperforms both the edge-preserving filtering algorithm and its improved variant. The OA of GFDTNCLDM is also 2–6% higher than that of IFRF on the three datasets, which shows that its classification performance is better than that of band fusion with recursive filtering and further verifies the effectiveness of the GFDTNCLDM algorithm for HSI classification.

Third, the OA values of GFDTNCLDM on the Indian Pines, Pavia University, and Kennedy Space Center datasets are 3.59, 2.37, and 1.25% higher than those of DTNCF-SVM, respectively. This indicates that LDM classification with the fused spatial features is more effective than SVM classification using only spatial correlation features, which further verifies the effectiveness of the GFDTNCLDM algorithm.
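The "fused spatial features" here are, per the method description, a linear fusion of the Gabor feature stack and the DTNCF feature stack. The paper does not spell out the exact fusion form; one plausible minimal sketch (weighted concatenation after per-feature standardization is our assumption) is:

```python
import numpy as np

def linear_fuse(gabor_feat, dtnc_feat, alpha=0.5):
    """Linearly fuse two per-pixel feature matrices (samples x features).
    Each stack is z-score normalized so neither dominates by scale;
    alpha weights the Gabor features. The weighting scheme is an
    illustrative assumption, not the paper's exact formulation."""
    def zscore(f):
        f = np.asarray(f, dtype=float)
        return (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-12)
    g, d = zscore(gabor_feat), zscore(dtnc_feat)
    return np.hstack([alpha * g, (1.0 - alpha) * d])
```

The fused matrix is then fed to the classifier (LDM in the paper, or SVM in the GFDTNCF-SVM baseline).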

Fourth, the OA values of GFDTNCLDM on the Indian Pines, Pavia University, and Kennedy Space Center datasets are 0.93, 0.09, and 0.83% higher than those of GFDTNCF-SVM, respectively. This indicates that, with the Gabor features and spatial correlation features fused, LDM realizes high-precision classification and is more effective than SVM. The experimental results therefore amply verify the effectiveness of GFDTNCLDM.

Fifth, the OA values of GFDTNCLDM on the Indian Pines, Pavia University, and Kennedy Space Center datasets are 0.31, 0.56, and 2.81% higher than those of GFDN, respectively. In addition, the OA values of GFDTNCLDM on the three datasets are 0.93, 1.47, and 1.75% higher than those of DGEF, respectively. This indicates that GFDTNCLDM improves classification more effectively than deep learning methods that use a single spatial feature, and thus outperforms these deep-learning-based classification methods.

Last but not least, the OA values of GFDTNCLDM on the Indian Pines, Salinas Valley, and Kennedy Space Center datasets are 3.43, 3.04, and 5.11% higher than those of LDM-FL, respectively. This shows that GFDTNCLDM, which fuses the two types of features, achieves better classification performance than a method using a single spatial feature. Therefore, the spatial features extracted by DTNCF and GF improve hyperspectral classification.

Conclusion

In this paper, we proposed a classification algorithm for HSI, GFDTNCLDM, based on the fusion of Gabor features and DTNC-filter features. The experimental results show that the classification accuracy on the Indian Pines, Pavia University, and Kennedy Space Center datasets is 96.64, 98.23, and 98.95% with only 4, 3, and 6% training samples, respectively, which is 2–20% higher than that of the other tested methods. Compared with the hyperspectral classification methods SVM, EPF, IFRF, PCA-EPFs, LDM-FL, GFDN, and DGEF, the proposed method significantly improves the accuracy of HSI classification. The results also show that the Gabor filter extracts good spatial texture features and that the spatial features obtained from spatial correlation help LDM improve classification accuracy. The algorithm proposed in this paper has the following characteristics:

  1. Gabor filtering with different frequencies and directions generates a Gabor filter group rich in features, which extracts varied and more comprehensive spatial texture features from the same PCA principal components, thereby obtaining better spatial texture features of the HSI;

  2. Using the DTNC filter to extract spatial features of the hyperspectral image compensates for the limitations of the spatial texture features obtained by the Gabor filter, which significantly improves the classification performance of LDM;

  3. The algorithm achieves higher classification accuracy with fewer training samples.
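The Gabor filter group described in point 1 can be sketched concretely. A minimal NumPy construction of a real-valued Gabor bank over several orientations and wavelengths (the parameter values are illustrative, not those tuned in the paper; the parameterization mirrors the common `getGaborKernel` convention):

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real (even-symmetric) Gabor kernel: a Gaussian envelope of width
    sigma and aspect ratio gamma modulating a cosine carrier of
    wavelength lambd, rotated by angle theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + gamma**2 * yr**2) / (2.0 * sigma**2))
            * np.cos(2.0 * np.pi * xr / lambd + psi))

# a small bank: 4 orientations x 3 wavelengths = 12 kernels
bank = [gabor_kernel(21, 4.0, t, l)
        for t in np.arange(0.0, np.pi, np.pi / 4)
        for l in (4.0, 8.0, 16.0)]
```

Convolving each of the first two PCA components of the HSI with every kernel in such a bank yields the multi-frequency, multi-orientation texture stack that the method fuses with the DTNCF features.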

In future work, we will focus on mining hyperspectral spatial features more effectively to further improve classification accuracy.

Additional information

Funding

This work was supported by Natural Science Foundation of Guangdong (Grant No 2021A1515011701), Special Projects in Key Areas of Guangdong Province (Grant No 2020ZDZX3084), and National Natural Science Foundation of China (Grant No 62071084).

References

  • Bai, J., Ding, B., Xiao, Z., Jiao, L., Chen, H., and Regan, A.C. 2022. “Hyperspectral image classification based on deep attention graph convolutional network.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 60: pp. 1–16. doi:10.1109/TGRS.2021.3066485.
  • Bau, T.C., Sarkar, S., and Healey, G. 2010. “Hyperspectral region classification using a three-dimensional Gabor filterbank.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 48(No. 9): pp. 3457–3464. doi:10.1109/TGRS.2010.2046494.
  • Bhatti, U.A., Yu, Z., Chanussot, J., Zeeshan, Z., Yuan, L., Luo, W., Nawaz, S.A., Bhatti, M.A., Ain, Q.U., and Mehmood, A. 2022. “Local similarity-based spatial-spectral fusion hyperspectral image classification with deep CNN and Gabor filtering.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 60: pp. 1–15. doi:10.1109/TGRS.2021.3090410.
  • Cai, W., Liu, B., Wei, Z., Li, M., and Kan, J. 2021. “TARDB-Net: triple-attention guided residual dense and BiLSTM networks for hyperspectral image classification.” Multimedia Tools and Applications, Vol. 80(No. 7): pp. 11291–11312. doi:10.1007/s11042-020-10188-x.
  • Cao, L., He, J., Gao, L., Zhong, Y., Hu, X., and Li, Z. 2022. “LWIR hyperspectral image classification based on a temperature-emissivity residual network and conditional random field model.” International Journal of Remote Sensing, Vol. 43(No. 10): pp. 3744–3768. doi:10.1080/01431161.2022.2105667.
  • Cao, X., Wang, D., Wang, X., Zhao, J., and Jiao, L. 2020. “Hyperspectral imagery classification with cascaded support vector machines and multi-scale superpixel segmentation.” International Journal of Remote Sensing, Vol. 41(No. 12): pp. 4530–4550. doi:10.1080/01431161.2020.1723172.
  • Fatemighomi, H.S., Golalizadeh, M., and Amani, M. 2022. “Object-based hyperspectral image classification using a new latent block model based on hidden Markov random fields.” Pattern Analysis and Applications, Vol. 25(No. 2): pp. 467–481. doi:10.1007/s10044-021-01050-3.
  • Gastal, E.S., and Oliveira, M.M. 2012. “Adaptive manifolds for real-time high-dimensional filtering.” ACM Transactions on Graphics, Vol. 31(No. 4): pp. 1–13. doi:10.1145/2185520.2185529.
  • Ghassemi, M., Ghassemian, H., and Imani, M. 2021. “Hyperspectral image classification by optimizing convolutional neural networks based on information theory and 3D-Gabor filters.” International Journal of Remote Sensing, Vol. 42(No. 11): pp. 4380–4410. doi:10.1080/01431161.2021.1892854.
  • Guo, Y., Cao, H., Han, S., Sun, Y., and Bai, Y. 2018a. “Spectral–spatial hyperspectral image classification with k-nearest neighbor and guided filter.” IEEE Access, Vol. 6: pp. 18582–18591. doi:10.1109/ACCESS.2018.2820043.
  • Guo, Y., Han, S., Li, Y., Zhang, C., and Bai, Y. 2018b. “K-Nearest Neighbor combined with guided filter for hyperspectral image classification.” Procedia Computer Science, Vol. 129: pp. 159–165. doi:10.1016/j.procs.2018.03.066.
  • Guo, Z., Zhang, M., Jia, W., Zhang, J., and Li, W. 2022. “Dual-concentrated network with morphological features for tree species classification using hyperspectral image.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 15: pp. 7013–7024. doi:10.1109/JSTARS.2022.3199618.
  • Haghighat, M., Zonouz, S., and Abdel-Mottaleb, M. 2015. “CloudID: Trustworthy cloud-based and cross-enterprise biometric identification.” Expert Systems with Applications, Vol. 42(No. 21): pp. 7905–7916. doi:10.1016/j.eswa.2015.06.025.
  • Hao, S., Liu, R., Lin, X., Li, C., Guo, H., Ye, Z., and Wang, C. 2022. “Configuration design and gait planning of a six-bar tensegrity robot.” Applied Sciences, Vol. 12(No. 22): pp. 11845. doi:10.3390/app122211845.
  • He, K., Sun, J., and Tang, X. 2012. “Guided image filtering.” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 36(No. 6): pp. 1397–1409. doi:10.1109/TPAMI.2012.213.
  • He, L., Li, J., Plaza, A., and Li, Y. 2017. “Discriminative low-rank Gabor filtering for spectral–spatial hyperspectral image classification.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 55(No. 3): pp. 1381–1395. doi:10.1109/TGRS.2016.2623742.
  • Hu, Q., Xu, W., Liu, X., Cai, Z., and Cai, J. 2022. “Hyperspectral image classification based on bilateral filter with multispatial domain.” IEEE Geoscience and Remote Sensing Letters, Vol. 19: pp. 1–5. doi:10.1109/LGRS.2021.3058182.
  • Huang, K.K., Ren, C.X., Liu, H., Lai, Z.R., Yu, Y.F., and Dai, D.Q. 2022. “Hyperspectral image classification via discriminant Gabor ensemble filter.” IEEE Transactions on Cybernetics, Vol. 52(No. 8): pp. 8352–8365. doi:10.1109/TCYB.2021.3051141.
  • Imani, M., and Ghassemian, H. 2016. GLCM, Gabor, and morphology profiles fusion for hyperspectral image classification. 2016 24th Iranian Conference on Electrical Engineering (ICEE), pp. 460–465. IEEE. doi:10.1109/IranianCEE.2016.7585566.
  • Jia, S., Hu, J., Xie, Y., Shen, L., Jia, X., and Li, Q. 2016. “Gabor cube selection based multitask joint sparse representation for hyperspectral image classification.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 54(No. 6): pp. 3174–3187. doi:10.1109/TGRS.2015.2513082.
  • Jia, S., Shen, L., Zhu, J., and Li, Q. 2018. “A 3-D Gabor phase-based coding and matching framework for hyperspectral imagery classification.” IEEE Transactions on Cybernetics, Vol. 48(No. 4): pp. 1176–1188. doi:10.1109/TCYB.2017.2682846.
  • Kang, X., Li, C., Li, S., and Lin, H. 2018. “Classification of hyperspectral images by Gabor filtering based deep network.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 11(No. 4): pp. 1166–1178. doi:10.1109/JSTARS.2017.2767185.
  • Kang, X., Li, S., and Benediktsson, J.A. 2014a. “Feature extraction of hyperspectral images with image fusion and recursive filtering.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 52(No. 6): pp. 3742–3752. doi:10.1109/TGRS.2013.2275613.
  • Kang, X., Li, S., and Benediktsson, J.A. 2014b. “Spectral–spatial hyperspectral image classification with edge-preserving filtering.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 52(No. 5): pp. 2666–2677. doi:10.1109/TGRS.2013.2264508.
  • Kang, X., Xiang, X., Li, S., and Benediktsson, J.A. 2017. “PCA-based edge-preserving features for hyperspectral image classification.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 55(No. 12): pp. 7140–7151. doi:10.1109/TGRS.2017.2743102.
  • Kotwal, K., and Chaudhuri, S. 2010. “Visualization of hyperspectral images using bilateral filtering.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 48(No. 5): pp. 2308–2316. doi:10.1109/TGRS.2009.2037950.
  • Lei, R., Zhang, C., Liu, W., Zhang, L., Zhang, X., Yang, Y., Huang, J., Li, Z., and Zhou, Z. 2021. “Hyperspectral remote sensing image classification using deep convolutional capsule network.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 14: pp. 8297–8315. doi:10.1109/JSTARS.2021.3101511.
  • Li, W., and Du, Q. 2014. “Gabor-filtering-based nearest regularized subspace for hyperspectral image classification.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 7(No. 4): pp. 1012–1022. doi:10.1109/JSTARS.2013.2295313.
  • Liao, J., and Wang, L. 2020. “Adaptive hyperspectral image classification based on the fusion of manifolds filter and spatial correlation features.” IEEE Access, Vol. 8: pp. 90390–90409. doi:10.1109/ACCESS.2020.2993864.
  • Liao, J., and Wang, L. 2020. “Multiple spatial features extraction and fusion for hyperspectral images classification.” Canadian Journal of Remote Sensing, Vol. 46(No. 2): pp. 193–213. doi:10.1080/07038992.2020.1768837.
  • Liao, J., Wang, L., Hao, S., and Zhao, G. 2019a. “Hyperspectral image classification based on fusion of guided filter and domain transform interpolated convolution filter.” Canadian Journal of Remote Sensing, Vol. 44(No. 5): pp. 476–490. doi:10.1080/07038992.2018.1546571.
  • Liao, J., Wang, L., Zhao, G., and Hao, S. 2019b. “Hyperspectral image classification based on bilateral filter with linear spatial correlation information.” International Journal of Remote Sensing, Vol. 40(No. 17): pp. 6861–6883. doi:10.1080/01431161.2019.1597301.
  • Liu, R., Cai, W., Li, G., Ning, X., and Jiang, Y. 2022. “Hybrid dilated convolution guided feature filtering and enhancement strategy for hyperspectral image classification.” IEEE Geoscience and Remote Sensing Letters, Vol. 19: pp. 1–5. doi:10.1109/LGRS.2021.3100407.
  • Luo, F., Zou, Z., Liu, J., and Lin, Z. 2022. “Dimensionality reduction and classification of hyperspectral image via multistructure unified discriminative embedding.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 60: pp. 1–16. doi:10.1109/TGRS.2021.3128764.
  • Pan, H., Liu, M., Ge, H., and Chen, S. 2022. “Semi-supervised spatial–spectral classification for hyperspectral image based on three-dimensional Gabor and co-selection self-training.” Journal of Applied Remote Sensing, Vol. 16(No. 2): pp. 028501. doi:10.1117/1.JRS.16.028501.
  • Rajadell, O., Garcia-Sevilla, P., and Pla, F. 2013. “Spectral–spatial pixel characterization using Gabor filters for hyperspectral image classification.” IEEE Geoscience and Remote Sensing Letters, Vol. 10(No. 4): pp. 860–864. doi:10.1109/LGRS.2012.2226426.
  • Sellami, A., and Tabbone, S. 2022. “Deep neural networks-based relevant latent representation learning for hyperspectral image classification.” Pattern Recognition, Vol. 121: pp. 108224. doi:10.1016/j.patcog.2021.108224.
  • Shambulinga, M., and Sadashivappa, G. 2019. “Hyperspectral image classification using support vector machine with guided image filter.” International Journal of Advanced Computer Science and Applications, Vol. 10(No. 10): pp. 271–276. doi:10.14569/IJACSA.2019.0101038.
  • Shen, L., and Bai, L. 2006. “MutualBoost learning for selecting Gabor features for face recognition.” Pattern Recognition Letters, Vol. 27(No. 15): pp. 1758–1767. doi:10.1016/j.patrec.2006.02.005.
  • Shen, L., and Jia, S. 2011. “Three-dimensional Gabor wavelets for pixel-based hyperspectral imagery classification.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 49(No. 12): pp. 5039–5046.
  • Sun, H., Zheng, X., and Lu, X. 2021. “A supervised segmentation network for hyperspectral image classification.” IEEE Transactions on Image Processing, Vol. 30: pp. 2810–2825. doi:10.1109/TIP.2021.3055613.
  • Tan, X., Gao, K., Liu, B., Fu, Y., and Kang, L. 2021. “Deep global-local transformer network combined with extended morphological profiles for hyperspectral image classification.” Journal of Applied Remote Sensing, Vol. 15(No. 3): pp. 038509. doi:10.1117/1.JRS.15.038509.
  • Vaddi, R., and Manoharan, P. 2020. “CNN based hyperspectral image classification using unsupervised band selection and structure-preserving spatial features.” Infrared Physics & Technology, Vol. 110: pp. 103457. doi:10.1016/j.infrared.2020.103457.
  • Wang, L., Hao, S., Wang, Q., and Wang, Y. 2014. “Semi-supervised classification for hyperspectral imagery based on spatial-spectral label propagation.” ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 97: pp. 123–137. doi:10.1016/j.isprsjprs.2014.08.016.
  • Wang, L., Hao, S., Wang, Y., Lin, Y., and Wang, Q. 2014. “Spatial–spectral information-based semisupervised classification algorithm for hyperspectral imagery.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 7(No. 8): pp. 3577–3585. doi:10.1109/JSTARS.2014.2333233.
  • Xia, J., Bombrun, L., Adalı, T., Berthoumieu, Y., and Germain, C. 2016. “Spectral–spatial classification of hyperspectral images using ICA and edge-preserving filter via an ensemble strategy.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 54(No. 8): pp. 4971–4982. doi:10.1109/TGRS.2016.2553842.
  • Xiao, G., Wei, Y., Yao, H., Deng, W., Xu, J., and Pan, D. 2022. “Hierarchical broad learning system for hyperspectral image classification.” IET Image Processing, Vol. 16(No. 2): pp. 554–566. doi:10.1049/ipr2.12371.
  • Ye, Z., Bai, L., and Nian, Y.J. 2016. “Hyperspectral image classification algorithm based on Gabor feature and locality-preserving dimensionality reduction.” Acta Optica Sinica, Vol. 36(No. 10): pp. 1028003.
  • Zhan, K., Wang, H., Huang, H., and Xie, Y. 2016. “Large margin distribution machine for hyperspectral image classification.” Journal of Electronic Imaging, Vol. 25(No. 6): pp. 063024. doi:10.1117/1.JEI.25.6.063024.
  • Zhang, T., and Zhou, Z. H. 2014. Large margin distribution machine. Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 313–322. ACM.