Research article

Efficient building damage assessment from post-disaster aerial video using lightweight deep learning models

Pages 6954-6980 | Received 25 May 2023, Accepted 18 Oct 2023, Published online: 13 Nov 2023
 

ABSTRACT

A challenging problem for post-disaster emergency services is how to quickly and precisely acquire building damage information with limited time and computational resources. With the development of unmanned aerial vehicle (UAV) technology and convolutional neural networks (CNNs), using drone video and CNNs to assess building damage has become an effective solution. In this research, we propose a stacked lightweight damage assessment model to address the challenge of obtaining fast and precise post-disaster information from aerial video datasets with artificial intelligence. Specifically, we first constructed an instance-level building damage recognition dataset from aerial video. We then proposed a stacked lightweight ShuffleNet architecture, comprising a location model and a classification model, to assess the building damage state. The lightweight location model reduces training time by approximately 37% and detection time by 44% while achieving comparable localization accuracy. The classification model, obtained by optimizing ShuffleNet, reaches a classification accuracy of approximately 83% across the different damage levels. For post-disaster emergencies with limited time and computational resources, the proposed framework provides a valuable solution that better balances fast and precise building damage assessment.
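The abstract does not reproduce the authors' architecture in detail; as a rough, hypothetical sketch of the classification stage only, the code below adapts an off-the-shelf ShuffleNetV2 backbone from torchvision into a building damage-level classifier. The PyTorch/torchvision framework, the four-level damage scheme, and all identifiers here are illustrative assumptions, not the paper's released code.

# Minimal sketch (assumed, not the authors' implementation): a ShuffleNetV2
# backbone repurposed as a damage-level classifier for cropped building
# instances taken from aerial video frames.
import torch
import torch.nn as nn
from torchvision import models

NUM_DAMAGE_LEVELS = 4  # assumed scheme, e.g. none / light / moderate / collapsed

def build_damage_classifier(num_classes: int = NUM_DAMAGE_LEVELS) -> nn.Module:
    # Start from an ImageNet-pretrained lightweight backbone.
    model = models.shufflenet_v2_x1_0(
        weights=models.ShuffleNet_V2_X1_0_Weights.DEFAULT
    )
    # Replace the final fully connected layer with a damage-level head.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

if __name__ == "__main__":
    clf = build_damage_classifier().eval()
    # Dummy input standing in for one cropped building instance (224x224 RGB).
    frame_crop = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        logits = clf(frame_crop)
    print(logits.argmax(dim=1))  # predicted damage-level index

In a stacked pipeline of this kind, a location model would first detect building instances in each frame, and each detected crop would then be passed to a classifier like the one sketched above; the actual optimizations the authors apply to ShuffleNet are described in the full article.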

Acknowledgements

This research was funded by the Natural Science Foundation of Hubei Province of China (2023AFB351) and the National Natural Science Foundation of China Major Program (42192580). All experimental images were obtained from publicly available media videos on the Internet. The authors would like to thank all providers of the original data.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data and code of this study are available from the corresponding author upon request ([email protected]).

Additional information

Funding

The work was supported by the Natural Science Foundation of Hubei Province of China [2023AFB351]; National Natural Science Foundation of China Major Program [42192580].
