Research Articles

A welding defect detection method based on multiscale feature enhancement and aggregation

Pages 1295-1314 | Received 25 Apr 2023, Accepted 23 Aug 2023, Published online: 04 Sep 2023

