Research Article

Synthetic laparoscopic video generation for machine learning-based surgical instrument segmentation from real laparoscopic video and virtual surgical instruments

Pages 225-232 | Received 30 Sep 2020, Accepted 07 Oct 2020, Published online: 10 Nov 2020
 

ABSTRACT

This paper proposes a synthetic laparoscopic image generation method for machine-learning-based surgical instrument segmentation from laparoscopic videos. Recently, methods that extract surgical instruments from laparoscopic videos using deep learning have been studied; such methods require a large amount of training data to perform well. However, it is difficult to collect large amounts of data for surgical instruments that are used infrequently during surgery, and their recognition accuracy may be reduced by this lack of training data. This paper addresses the problem by augmenting the training data with an image synthesis technique. Pairs of synthetic laparoscopic images and their label data are generated automatically by superimposing 3D virtual surgical instrument models on real laparoscopic videos. The synthetic laparoscopic images are then translated using CycleGAN so that the appearance of the virtual surgical instruments closely resembles that of instruments in real laparoscopic videos. Finally, we extracted surgical instruments from laparoscopic videos using a 2D U-Net-based network trained on both the synthetic laparoscopic images and manually labelled real laparoscopic video frames. Our experimental results showed that the recognition accuracy for surgical instruments that are used infrequently during surgery was improved by the synthetic laparoscopic images generated with the proposed method.
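The core of the proposed pipeline is the automatic generation of paired images and labels: a rendered 3D virtual instrument is superimposed on a real laparoscopic frame, and the rendering's alpha mask serves directly as the segmentation label. Below is a minimal Python/OpenCV sketch of this compositing step, assuming an RGBA instrument render; the function and file names are illustrative, not from the paper, and the CycleGAN translation and U-Net training stages are omitted.

import numpy as np
import cv2

def composite_instrument(frame_bgr, instrument_rgba):
    """Superimpose a rendered virtual instrument (RGBA) on a real frame (BGR).

    Returns (synthetic_frame, label_mask); the mask is 1 where the instrument
    covers the frame and 0 elsewhere, so image and label are aligned by construction.
    """
    colour = instrument_rgba[..., :3].astype(np.float32)          # instrument colour channels
    alpha = instrument_rgba[..., 3:4].astype(np.float32) / 255.0  # opacity scaled to [0, 1]
    # Alpha-blend the rendered instrument over the real laparoscopic frame.
    blended = alpha * colour + (1.0 - alpha) * frame_bgr.astype(np.float32)
    synthetic = blended.astype(np.uint8)
    # The alpha channel doubles as the binary instrument label.
    label = (alpha[..., 0] > 0.5).astype(np.uint8)
    return synthetic, label

# Hypothetical usage; the file names are placeholders, not from the paper.
frame = cv2.imread("laparoscopic_frame.png")                         # real video frame (BGR)
render = cv2.imread("virtual_instrument.png", cv2.IMREAD_UNCHANGED)  # instrument render with alpha
synthetic_frame, label_mask = composite_instrument(frame, render)

In the pipeline described in the abstract, such synthetic frames would then be passed through CycleGAN to narrow the appearance gap before being mixed with manually labelled real frames for U-Net training.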

Acknowledgements

The authors thank our colleagues for their suggestions and advice.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Japan Agency for Medical Research and Development [grant numbers JP18lk1010028, JP20he2102001] and the Japan Society for the Promotion of Science [grant numbers JP26108006, JP17H00867].
