Cascaded adaptive global localisation network for steel defect detection

Pages 4884-4901 | Received 19 Dec 2022, Accepted 18 Sep 2023, Published online: 21 Nov 2023
 

Abstract

Defect detection is crucial for ensuring the quality of steel products. This paper proposes a novel deep neural network, the cascaded adaptive global location network (CAGLNet), for detecting steel surface defects. The main objective of this study is to address the challenges posed by the irregular shape and dense spatial distribution of defects on steel. To achieve this goal, CAGLNet integrates a feature extraction network that combines residual and feature pyramid networks, a cascade adaptive tree-structure region proposal network (CAT-RPN) that eliminates the need for prior knowledge, and a global localisation regression for steel defect detection. This paper evaluates the effectiveness of CAGLNet on the NEU-DET dataset and demonstrates that the proposed model achieves an average accuracy of 85.40% at a speed of 10.06 frames per second, outperforming state-of-the-art methods. These results suggest that CAGLNet has the potential to significantly improve the effectiveness of defect detection in industrial production processes, leading to increased production yield and cost savings.
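The abstract's backbone combines a residual network with a feature pyramid network (ResNet50_FPN). The paper's implementation details are not reproduced on this page, so the following is only a rough illustrative sketch of the FPN top-down pathway the abstract refers to: coarse backbone features are upsampled and added to lateral 1×1 projections of finer levels. The channel sizes and random weights here are hypothetical, not taken from the paper.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour 2x upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def lateral(x, w):
    # 1x1 convolution expressed as channel mixing: (C_out, C_in) x (C_in, H, W).
    return np.einsum('oi,ihw->ohw', w, x)

rng = np.random.default_rng(0)
# Hypothetical backbone outputs C3..C5: channels grow, spatial size shrinks.
c3 = rng.standard_normal((256, 32, 32))
c4 = rng.standard_normal((512, 16, 16))
c5 = rng.standard_normal((1024, 8, 8))

d = 256  # common pyramid channel width
w3, w4, w5 = (rng.standard_normal((d, c.shape[0])) * 0.01 for c in (c3, c4, c5))

# Top-down pathway: start at the coarsest level, then upsample and add laterals.
p5 = lateral(c5, w5)
p4 = lateral(c4, w4) + upsample2x(p5)
p3 = lateral(c3, w3) + upsample2x(p4)

print(p5.shape, p4.shape, p3.shape)  # all share d=256 channels at three scales
```

Each pyramid level P3..P5 then has the same channel width, so a single detection head (here, the CAT-RPN) can operate across scales; this multi-scale merging is what helps with the irregular defect shapes the abstract mentions.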

Abbreviations: AT-RPN, adaptive tree-structure region proposal network; CAGLNet, cascaded adaptive global location network; CAT-RPN, cascade adaptive tree-structure region proposal network; CNN, convolutional neural network; DNN, deep neural network; EPNet, edge proposal network; FPN, feature pyramid network; FCOS, fully convolutional one-stage detector; FPS, frames per second; GMM, Gaussian mixture model; IoU, intersection-over-union; ROIAlign, region of interest align; RPN, region proposal network; ResNet, residual network; ResNet50_FPN, residual network and feature pyramid network; SABL, side aware boundary localisation; SSD, single-shot multiBox detector; TPE, Tree-structured Parzen estimator

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data that support the findings of this study are available from the corresponding author, [author initials], upon reasonable request.

Additional information

Funding

This research was supported by the National Natural Science Foundation of China [grant number 92167107], Science and Technology Innovation Action Plan of Shanghai Science and Technology Commission [grant number 22N21900100], Fundamental Research Funds for the Central Universities [grant number 22120220575], and Open Fund for National Aerospace Intelligence Control Technology Laboratory.

Notes on contributors

Jianbo Yu

Jianbo Yu received the B.Eng. degree from the Department of Industrial Engineering, Zhejiang University of Technology, Zhejiang, China, in 2002, the M.Eng. degree from the Department of Mechanical Automation Engineering, Shanghai University, Shanghai, China, in 2005, and the Ph.D. degree from the Department of Industrial Engineering and Management, Shanghai Jiaotong University, Shanghai, China, in 2009. From 2009 to 2013, he was an associate professor at the Department of Mechanical Automation Engineering, Shanghai University, Shanghai, China. Since 2016, he has been a professor at the School of Mechanical Engineering, Tongji University, Shanghai, China. His current research interests include intelligent condition-based maintenance, machine learning, quality control, and statistical analysis. Dr. Yu is an associate editor of IEEE Transactions on Instrumentation and Measurement.

Yanshu Wang

Yanshu Wang received the B.Eng. degree from the School of Industrial Engineering, Sichuan University, Sichuan, China, in 2020. He is currently pursuing the M.Eng. degree in the Department of Industrial Engineering, Tongji University, Shanghai, China. His research interests include machine learning and visual detection and recognition.

Qingfeng Li

Qingfeng Li is an associate researcher at the Research Center of Big Data and Computational Intelligence, Hangzhou Innovation Institute, Beihang University. He received the B.S. and M.S. degrees in Computer Software from Zhengzhou University, Zhengzhou, China, in 2014 and 2017, respectively. His research interests include computer vision, image processing, and intelligent manufacturing.

Hao Li

Hao Li is a Senior Engineer and Chief Engineer of Level 3 at the Institute of Aeronautical Manufacture Technology, COMAC Shanghai Aircraft Manufacturing Co., Ltd. He received the Master's degree in Engineering from Huazhong University of Science and Technology. His main research fields are experimental verification and additive manufacturing.

Mingyan Ma

Mingyan Ma is an R&D Engineer at the Institute of Aeronautical Manufacture Technology, COMAC Shanghai Aircraft Manufacturing Co., Ltd. He received the Master's degree in Mechanical Engineering from the University of Windsor in 2018. His main areas of work and research include quality control and general test and verification technology. He has participated in a number of civil aircraft manufacturing process research projects.

Peilun Liu

Peilun Liu is a Research Technician at the Institute of Aeronautical Manufacture Technology, COMAC Shanghai Aircraft Manufacturing Co., Ltd. He received the Master's degree in Engineering from the College of Information Engineering and Automation, Civil Aviation University of China, in 2021. His main research fields are digital image processing, pattern recognition, and generic technology research and development for civil aircraft process experiments.
