ABSTRACT
A challenging problem for post-disaster emergency services is how to quickly and precisely assess building damage with limited time and computational resources. With the development of unmanned aerial vehicle (UAV) technology and convolutional neural networks (CNNs), using drone video and CNNs to determine building damage has become an effective solution. In this research, we propose a stacked lightweight damage assessment model to address the current challenge of obtaining fast and precise post-disaster information from aerial video datasets using artificial intelligence. Specifically, we developed an instance-level building damage recognition dataset based on aerial video. We then proposed a stacked lightweight ShuffleNet architecture comprising location and classification models to assess the building damage state. For the location model, the lightweight network reduces model training time by approximately 37% and detection time by 44% while achieving comparable location accuracy. For the classification model, the optimized ShuffleNet achieves a classification accuracy of approximately 83% across the different damage levels. For post-disaster emergencies with limited time and computational resources, the proposed framework provides a valuable solution that better balances speed and precision in building damage assessment.
Acknowledgements
This research was funded by the Natural Science Foundation of Hubei Province of China (2023AFB351) and the National Natural Science Foundation of China Major Program (42192580). All experimental images are from publicly available media videos on the Internet. The authors would like to thank all providers of the original data.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Data availability statement
The data and the code of this study are available from the corresponding author upon request ([email protected]).