Research Article

Monocular vision based on YOLOv7 and coordinate transformation for precise vehicle positioning

Article: 2166903 | Received 21 Oct 2022, Accepted 05 Jan 2023, Published online: 28 Jan 2023
 

Abstract

Logistics tracking and positioning is a critical part of the discrete digital workshop and is widely applied in many fields (e.g. industry and transport). However, such workshops are characterized by dispersed manufacturing machinery, frequent material flows, and complex noise environments, which severely degrade the accuracy of conventional radio-frequency positioning approaches. Recent panoramic vision positioning techniques rely on binocular cameras and therefore cannot be applied to the monocular cameras common in industrial scenarios. This paper proposes a monocular vision positioning method based on YOLOv7 and coordinate transformation to improve positioning accuracy in the digital workshop. Positioning beacons are mounted on top of the moving vehicles at a uniform height. The pixel coordinates of each beacon are obtained with a YOLOv7 model trained via transfer learning, and a coordinate transformation then recovers the vehicle's real-world position. Experimental results show that the proposed monocular vision system improves the positioning accuracy of the digital workshop. The code and pre-trained models are available at https://github.com/ZS520L/YOLO_Positioning.
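Because the beacons sit on a single plane of uniform height, the pixel-to-world step the abstract describes can be modelled as a planar homography. The sketch below is an assumption about how such a transformation might be implemented, not the paper's actual calibration procedure: it estimates a homography from at least four known pixel/world correspondences via the direct linear transform and then projects a detected beacon's pixel centre onto the beacon plane. All point values are hypothetical.

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Estimate a 3x3 homography H mapping image pixels (u, v) to
    world-plane coordinates (x, y) from >= 4 correspondences,
    using the direct linear transform (DLT)."""
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # The homography is the null vector of A: last row of V^T in the SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise the overall scale

def pixel_to_world(H, u, v):
    """Project a detected beacon's pixel centre onto the beacon plane."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w  # dehomogenise

if __name__ == "__main__":
    # Hypothetical calibration: four image corners matched to floor
    # coordinates (metres) measured at the beacon height.
    H = fit_homography(
        [(0, 0), (640, 0), (640, 480), (0, 480)],
        [(-3.2, -2.4), (3.2, -2.4), (3.2, 2.4), (-3.2, 2.4)],
    )
    # (u, v) would come from the YOLOv7 beacon bounding-box centre.
    print(pixel_to_world(H, 320, 240))
```

In practice the correspondences would come from surveyed calibration markers, and a library routine such as OpenCV's `cv2.findHomography` could replace the hand-rolled DLT.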

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the National Natural Science Foundation of China [grant no 51874010] and the University of Science and Technology of China [grant no 51874010].