Research Article

VisionBlender: a tool to efficiently generate computer vision datasets for robotic surgery

João Cartucho, Samyakh Tukra, Yunpeng Li, Daniel S. Elson & Stamatia Giannarou
Pages 331-338 | Received 11 Sep 2020, Accepted 07 Oct 2020, Published online: 21 Dec 2020
