Research Article

Automatic pavement crack detection using multimodal features fusion deep neural network

Article: 2086692 | Received 09 Nov 2021, Accepted 01 Jun 2022, Published online: 17 Jun 2022
 

ABSTRACT

Existing pavement crack detection algorithms handle detection and segmentation separately, ignoring the feature correlation between bounding box coordinates and mask information, and in practical application they still suffer from low crack detection accuracy, incomplete detection, and segmentation fracture. To address these problems, this paper treated the detection bounding box coordinates and mask information as multimodal features of the same pavement crack region and proposed a one-stage MFFNet (Multimodal Feature Fusion Network), which markedly improved pavement crack detection accuracy and segmentation integrity. Experimental results of different models were compared on a self-collected dataset and two public datasets (CFD and CRACK500). Compared with Mask R-CNN, the average detection accuracy and average segmentation accuracy were improved by 2.6% and 4.7%, respectively. Compared with the optimised RDSNet model, the detection accuracy and processing speed were improved by 1.8% and 2 FPS, respectively. In addition, MFFNet significantly improved the integrity of pavement crack segmentation results. The results showed that the proposed MFFNet model achieved the best detection and segmentation accuracy and is an effective, high-precision pavement crack detection model.
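
To illustrate the fusion idea summarised above, the sketch below encodes a detection's bounding box coordinates and its per-instance mask features for the same crack region and combines them into one joint representation. This is a minimal, hypothetical PyTorch sketch of the general concept: the module names, feature sizes, and concatenation-based fusion are assumptions for illustration only, not the authors' published MFFNet architecture.

```python
# Hypothetical sketch: fuse bounding-box coordinates and mask features
# of the same crack instance into a single joint feature vector.
# All layer sizes and the concatenation-based fusion are illustrative assumptions.
import torch
import torch.nn as nn


class BoxMaskFusion(nn.Module):
    def __init__(self, mask_channels: int = 256, fused_dim: int = 256):
        super().__init__()
        # Embed the 4 normalised box coordinates (x1, y1, x2, y2).
        self.box_encoder = nn.Sequential(
            nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, fused_dim)
        )
        # Pool the per-instance mask feature map to a single vector.
        self.mask_encoder = nn.Sequential(
            nn.Conv2d(mask_channels, fused_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fuse the two modalities by concatenation followed by an MLP.
        self.fuse = nn.Sequential(
            nn.Linear(2 * fused_dim, fused_dim), nn.ReLU()
        )

    def forward(self, boxes: torch.Tensor, mask_feats: torch.Tensor) -> torch.Tensor:
        # boxes: (N, 4) normalised coordinates; mask_feats: (N, C, H, W)
        box_emb = self.box_encoder(boxes)                    # (N, fused_dim)
        mask_emb = self.mask_encoder(mask_feats).flatten(1)  # (N, fused_dim)
        return self.fuse(torch.cat([box_emb, mask_emb], dim=1))


if __name__ == "__main__":
    fusion = BoxMaskFusion()
    boxes = torch.rand(8, 4)                 # 8 candidate crack detections
    mask_feats = torch.rand(8, 256, 14, 14)  # pooled per-instance mask features
    print(fusion(boxes, mask_feats).shape)   # torch.Size([8, 256])
```

The fused vector could then feed both the box refinement and mask prediction branches, so that each modality informs the other rather than being predicted independently, which is the correlation the abstract says separate detection and segmentation pipelines ignore.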

Acknowledgements

The authors would like to acknowledge all the researchers who contributed to this paper and thank them for their valuable comments and support.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

Some or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.

Additional information

Funding

This study was supported by the National Natural Science Foundation of China (grant numbers 52108403 and 51978071), the National Key R&D Program of China (grant numbers 2021YFB2600600, 2021YFB2600604, and 2018YFB1600202), and the Fundamental Research Funds for the Central Universities of China (grant numbers 2242022R10054, 300102249306, and 300102249301).
