ABSTRACT
Accurate identification of the location, intensity, and spread of wildfires is an essential early-stage precaution for reducing wildfire damage. Satellite imaging platforms, particularly those combining high revisit frequencies with fine spatial resolution, offer the most efficient means of monitoring wildfires dynamically. However, extracting accurate fire-related information from satellite images remains challenging, and few studies have investigated remote sensing data from geostationary satellites. The present work addresses these issues by using more than 5,000 images from the geostationary Himawari-8 satellite, covering a severe Australian wildfire that burned from November 2019 to February 2020, to train and test a convolutional neural network (CNN) that identifies the location and intensity of wildfires. The proposed CNN achieves a detection accuracy greater than 80%, substantially exceeding that of other machine learning algorithms such as support vector machines and k-means clustering. Moreover, the CNN can be trained in a relatively short time, even on large training datasets, and produces predictions within one to two minutes. The proposed model offers insight into the application of deep learning for wildfire monitoring with geostationary satellite imagery and supports the development of similar satellite missions.
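The abstract does not reproduce the network architecture; as an illustration only, the sketch below shows a minimal patch-based CNN classifier of the general kind described, written in PyTorch. The 16 input bands (Himawari-8's AHI imager provides 16 spectral bands), the 32x32 patch size, the layer widths, and the three fire-intensity classes are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of a patch-based CNN wildfire classifier, assuming
# 16-band Himawari-8 AHI input patches and three illustrative output
# classes (no fire / low intensity / high intensity). Layer sizes and
# the class scheme are assumptions, not the authors' published model.
import torch
import torch.nn as nn


class FirePatchCNN(nn.Module):
    def __init__(self, in_bands: int = 16, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),  # per-patch fire-class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = FirePatchCNN()
    # One dummy batch of four 32x32-pixel, 16-band patches.
    patches = torch.randn(4, 16, 32, 32)
    logits = model(patches)
    print(logits.shape)  # torch.Size([4, 3])
```

A model of this size trains quickly even on large image collections, which is consistent with the short training and one-to-two-minute prediction times reported in the abstract.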
Key policy highlights
Provides insight into the application of deep learning technology for wildfire monitoring using geostationary satellites, and supports the development of similar satellite missions.
Acknowledgments
The authors thank the editors and reviewers for their valuable comments, which helped improve the manuscript, and LetPub (www.letpub.com) for linguistic assistance and pre-submission expert review.
Author contributions
Conceptualization, Changcheng Ding; Formal analysis, Changcheng Ding; Investigation, Changcheng Ding; Methodology, Changcheng Ding; Resources, Changcheng Ding; Software, Changcheng Ding; Supervision, Xiaoyu Zhang and Jianyu Chen; Visualization, Changcheng Ding; Writing – original draft, Changcheng Ding; Writing – review & editing, Xiaoyu Zhang, Jianyu Chen, Shuchang Ma, Yanfang Lu, and Wencong Han.
Disclosure statement
No potential conflict of interest was reported by the authors.
Data availability statement
Data are available for research purposes upon request to the authors’ institutions.