Research Article

Mapping fine-spatial-resolution vegetation spring phenology from individual Landsat images using a convolutional neural network

Pages 3059-3081 | Received 05 Sep 2022, Accepted 14 May 2023, Published online: 24 May 2023
 

ABSTRACT

Monitoring and mapping vegetation dynamics using remote sensing data are essential for our understanding of land surface processes. Most current satellite-based methods process vegetation index time-series data from a series of images to retrieve key points that correspond to vegetation phenophases. As deep learning approaches have proven powerful in processing individual images, we tested the applicability of a convolutional neural network (CNN) for mapping vegetation growth days (VGD) and the start of growing season (SOS) from each Landsat image at fine spatial resolution. To provide references for both model training and testing, we applied the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) to fuse image pairs of Landsat 8 Operational Land Imager (OLI) and Moderate-resolution Imaging Spectroradiometer (MODIS) for four Landsat tiles in China. We then applied a first-derivative method to retrieve VGD from the fused satellite data at fine spatial resolution. The CNN model was trained using each fused individual image as the model input and the derived VGD as the target. The trained model was further used to map VGD from individual Landsat images. The results matched the reference maps well, as indicated by the evaluation metrics. For VGD, the method achieved a coefficient of determination of 0.85 and a root mean squared error of 8.17 days; for SOS, a coefficient of determination of 0.75 and a root mean squared error of 4.09 days. Compared with existing methods that require time series of satellite data spanning entire growth cycles to retrieve phenological metrics, this study provides an alternative method to map VGD as well as SOS from an individual Landsat image. Our study highlights the power of deep learning models in extracting phenological features from individual remote sensing images. Researchers can use our methods to predict near-real-time VGD and SOS in the future.
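The first-derivative retrieval mentioned in the abstract typically marks the start of season as the date of steepest green-up in a vegetation-index curve. The paper's exact implementation is not given here, so the following is only a minimal sketch of that general idea, using NumPy and a synthetic logistic green-up curve (the function name, the double-logistic parameters, and the use of `np.gradient` are all assumptions, not the authors' code):

```python
import numpy as np

def sos_first_derivative(doy, vi):
    """Estimate start of season (SOS) as the day of year with the
    steepest increase of the vegetation-index (VI) curve.
    This is a generic first-derivative method, not the paper's code."""
    doy = np.asarray(doy, dtype=float)
    vi = np.asarray(vi, dtype=float)
    d = np.gradient(vi, doy)   # first derivative of VI with respect to time
    return doy[np.argmax(d)]   # day of maximum green-up rate

# Synthetic spring green-up: logistic curve with inflection at DOY 120
doy = np.arange(1, 366)
vi = 0.2 + 0.6 / (1.0 + np.exp(-0.1 * (doy - 120.0)))
print(sos_first_derivative(doy, vi))  # ≈ 120 (the inflection point)
```

In practice this step would be applied per pixel to the ESTARFM-fused daily VI series to produce the fine-resolution VGD/SOS reference maps used as CNN training targets.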

Acknowledgements

This research is supported by the Natural Science Foundation of China (grant nos. 41875122 and U1811464), National Key R&D Program of China (grant no. 2017YFA0604300), Natural Science Foundation of Guangdong Province (grant no. 2021A1515011429) and Western Talents (grant no. 2018XBYJRC004). We thank the researchers and investigators who were involved in collecting and sharing the Landsat 8, MODIS and GlobalLand30 data, and Xiaolin Zhu for providing the source code of ESTARFM.

Disclosure statement

No potential conflict of interest was reported by the author(s).


