Original Articles

Land use/land cover mapping from airborne hyperspectral images with machine learning algorithms and contextual information


Abstract

Land use and land cover (LULC) mapping is one of the most important applications of remote sensing, and it requires both high spectral and high spatial resolution in order to reduce the spectral ambiguity between different land cover types. Airborne hyperspectral images are well suited to such applications because of their large number of spectral bands and their ability to resolve small details on the ground. Since this technology is relatively new, most existing image processing methods were designed for medium-resolution sensors and cannot cope with high-resolution images. Therefore, in this study a new framework is proposed to improve the classification accuracy of land use/cover mapping applications and to produce LULC maps from high-resolution hyperspectral image data more reliably. To achieve this, spatial information is incorporated together with spectral information by applying feature extraction methods such as the Grey Level Co-occurrence Matrix (GLCM), Gabor filters and Morphological Attribute Profiles (MAP) to the dimensionally reduced image with the highest accuracy. Machine learning algorithms such as Random Forest (RF) and Support Vector Machine (SVM) are then used to investigate the contribution of texture information to the classification of high-resolution hyperspectral images. In addition, further analysis is conducted with object-based RF classification to investigate the contribution of contextual information. Finally, overall accuracy, producer's/user's accuracy, quantity- and allocation-based disagreements and location- and quantity-based kappa agreements are calculated, together with McNemar tests, for the accuracy assessment. According to our results, the proposed framework, which incorporates Gabor texture information and exploits a Discrete Wavelet Transform (DWT) based dimensionality reduction method, increases the overall classification accuracy by up to 9%.
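The pipeline described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: a tiny synthetic hyperspectral cube stands in for the airborne data, one level of the discrete wavelet transform along the spectral axis plays the role of the DWT-based dimensionality reduction, a single Gabor filter response supplies texture information, and a Random Forest classifies each pixel (all shapes, frequencies and class labels are made up for the example).

```python
# Hypothetical sketch (not the authors' implementation): DWT spectral
# reduction + Gabor texture features + per-pixel RF classification on a
# small synthetic hyperspectral cube.
import numpy as np
import pywt
from skimage.filters import gabor
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
H, W, B = 16, 16, 32                        # rows, cols, spectral bands
cube = rng.random((H, W, B))                # synthetic reflectance cube

# 1) Dimensionality reduction: one DWT level along the spectral axis;
#    keeping the approximation coefficients halves the band count.
approx, _detail = pywt.dwt(cube, "haar", axis=-1)      # (H, W, B // 2)

# 2) Spatial/texture information: Gabor response on one reduced band
#    (a real study would use a bank of frequencies and orientations).
real, imag = gabor(approx[:, :, 0], frequency=0.2)
texture = np.sqrt(real**2 + imag**2)[..., None]        # magnitude, (H, W, 1)

# 3) Stack spectral + texture features and classify per pixel.
features = np.concatenate([approx, texture], axis=-1).reshape(H * W, -1)
labels = rng.integers(0, 5, size=H * W)     # 5 classes: soil/road/veg/building/shadow
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(features, labels)
pred = clf.predict(features)
```

In practice the object-based variant would first segment the image and aggregate these per-pixel features over segments before classification.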
Among the individual classes, Gabor features boosted the classification accuracies of all the classes (soil, road, vegetation, building and shadow) by 7%, 6%, 6%, 8%, 9% and 24%, respectively, in terms of producer's accuracy. In addition, increases of 17% and 10% in user's accuracy were obtained with the MAP (area) feature for the road and shadow classes, respectively. Moreover, when object-based classification is conducted, the overall accuracy of the pixel-based classification is increased further by 1.07%. Increases between 2% and 4% in producer's accuracy are achieved for the soil, vegetation and building classes, and increases between 1% and 3% in user's accuracy are achieved for the soil, road, vegetation and shadow classes. In the end, an accurate LULC map is produced by object-based RF classification of the Gabor-feature-augmented airborne hyperspectral image, dimensionally reduced with the DWT method.
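The accuracy assessment described above can be illustrated with a small worked example. This is a hypothetical sketch using made-up reference labels and two made-up classification maps (standing in for, e.g., an RF + Gabor map versus a spectral-only map): it computes overall accuracy, Cohen's kappa, and the McNemar chi-square statistic from the two maps' agreement with the reference.

```python
# Hypothetical sketch of the accuracy assessment: overall accuracy,
# Cohen's kappa, and a McNemar test comparing two classification maps
# against the same reference labels (all values invented for illustration).
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

ref   = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 0, 1, 2, 3, 4, 0])
map_a = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 0, 1, 2, 3, 4, 1])  # e.g. RF + Gabor
map_b = np.array([0, 1, 1, 1, 2, 0, 3, 3, 4, 4, 0, 1, 2, 0, 4, 1])  # e.g. spectral only

oa_a = accuracy_score(ref, map_a)        # overall accuracy of map A
oa_b = accuracy_score(ref, map_b)        # overall accuracy of map B
kappa_a = cohen_kappa_score(ref, map_a)  # chance-corrected agreement

# McNemar discordant counts: b = pixels A got right and B got wrong,
# c = the reverse; only these disagreements drive the test.
right_a, right_b = map_a == ref, map_b == ref
b = int(np.sum(right_a & ~right_b))
c = int(np.sum(~right_a & right_b))

# Chi-square statistic with continuity correction; values above 3.84
# indicate a significant difference at the 5% level (1 dof).
chi2 = (abs(b - c) - 1) ** 2 / (b + c) if (b + c) else 0.0
```

With real maps, the quantity/allocation disagreement components would be derived from the full confusion matrix rather than from these toy vectors.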

Acknowledgement

The authors would like to thank the Turkish General Directorate of Mapping for providing the Headwall Hyperspec VNIR airborne hyperspectral dataset. The authors would also like to thank the anonymous reviewers for insightful comments and suggestions that further enhanced this article.

Disclosure statement

No potential conflict of interest was reported by the author(s).
