Research Article

Visual kinematic force estimation in robot-assisted surgery – application to knot tying

P. J. Edwards, E. Colleoni, A. Sridhar, J. D. Kelly & D. Stoyanov
Pages 414-420 | Received 27 Sep 2020, Accepted 04 Oct 2020, Published online: 27 Oct 2020
 

ABSTRACT

Robot-assisted surgery has potential advantages but lacks force feedback, which can lead to errors such as broken stitches or tissue damage. More experienced surgeons can judge tool-tissue forces visually, and an automated way of capturing this skill is desirable. Existing methods of measuring force tend to involve complex measurement devices or visual tracking of tissue deformation. We investigate whether surgical forces can be estimated simply from the discrepancy between kinematic and visual measurements of the tool position. We show that combined visual and kinematic force estimation can be achieved without external measurements or modelling of tissue deformation. After initial alignment when no force is applied to the tool, the visual and kinematic estimates of tool position diverge under force. We plot visual/kinematic displacement against force using both vision-based and marker-based tracking. We demonstrate the ability to discern the forces involved in knot tying and to visualize the displacement force using the publicly available JIGSAWS dataset as well as clinical examples of knot tying with the da Vinci surgical system. The ability to visualize or feel forces using this method may offer an advantage to those learning robotic surgery, as well as adding to the information available to more experienced surgeons.
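The idea in the abstract can be sketched in a few lines: align the visual and kinematic tool-tip estimates while no force is applied, then treat their later divergence as a displacement that tracks the applied force. The code below is a minimal illustrative sketch, not the paper's implementation; the position values and the linear `stiffness` gain are hypothetical assumptions (the paper itself plots displacement against force rather than assuming a fixed gain).

```python
import numpy as np

# Hypothetical 3-D tool-tip positions in metres. In the paper's setup these
# would come from visual tracking and from the robot's joint kinematics.
p_vis_rest = np.array([0.100, 0.050, 0.200])   # visual estimate, no force
p_kin_rest = np.array([0.102, 0.049, 0.201])   # kinematic estimate, no force

# Step 1: initial alignment while no force acts on the tool.
offset = p_vis_rest - p_kin_rest

def displacement_force(p_vis, p_kin, offset, stiffness=50.0):
    """Map the visual/kinematic discrepancy to a force estimate.

    `stiffness` (N/m) is an illustrative linear gain, not a value from the
    paper; it stands in for whatever displacement-to-force relationship is
    established empirically.
    """
    disp = p_vis - (p_kin + offset)   # residual divergence under load
    return stiffness * disp           # assumed linear force model

# Step 2: under load the two estimates diverge; the residual tracks force.
p_vis_load = np.array([0.105, 0.050, 0.200])
p_kin_load = np.array([0.102, 0.049, 0.201])
force_estimate = displacement_force(p_vis_load, p_kin_load, offset)
```

After calibration, only the divergence that appears under load contributes to the estimate, which is why no external force sensor or tissue-deformation model is required.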

Acknowledgments

This work was supported by the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) at UCL (203145Z/16/Z), EPSRC (EP/P027938/1, EP/R004080/1) and the H2020 FET (GA 863146). Danail Stoyanov is supported by a Royal Academy of Engineering Chair in Emerging Technologies (CiET1819/2/36) and an EPSRC Early Career Research Fellowship (EP/P012841/1). We are also grateful to technical support provided by Intuitive Surgical, Inc. under a research agreement.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed online.

Additional information

Funding

This work was supported by the Engineering and Physical Sciences Research Council [203145Z/16/Z]; Wellcome Trust [203145Z/16/Z].

Notes on contributors

P. J. ‘Eddie’ Edwards

Eddie Edwards received his BA(hons) in Physics from Balliol College, Oxford and his Masters in Medical Physics from the University of Surrey. He has worked for over 25 years in the field of image-guided surgery. His work on image registration and augmented reality guidance as a research assistant at Kings College, London resulted in his Ph.D. (2002) entitled “Alignment and Visualisation in Image-guided Intervention“. As a Lecturer at Imperial College for nearly 10 years, he focused on augmented reality in robot-assisted surgery including image segmentation and video image analysis. After several years in industry including work in retinal image analysis, he joined UCL in 2018 as a senior research associate within the UCL Robotics Institute as part of the surgical robot vision group, WEISS, Department of Computer Science. He is well known within the MICCAI community, has chaired several workshops and conferences (MICCAI AMI/ARCS 2008, 2009 and AE-CAI 2011, ISBMS symposium 2008, MIAR 2010) and has over 80 journal and conference publications in the field. His current work concentrates on computer vision applied to robotic surgery, with a specific interest in anatomical and tool reconstruction incorporating kinematics.

Emanuele Colleoni

Emanuele Colleoni started his studies at Politecnico di Milano, Italy, obtaining a Bachelor’s degree in Automation and Control Engineering (September 2017) and a Master’s degree in Biomedical Engineering (April 2019). He developed his master’s thesis within the Centre for Medical Image Computing (CMIC) at University College London, where his project focussed on deep learning techniques for 2-D pose estimation of surgical tools in laparoscopic videos. Currently, he holds a RAEng-funded Ph.D. studentship position within the surgical robot vision group at WEISS, Department of Computer Science, UCL, London. His research covers the employment of surgical robot simulators as well as deep learning techniques for 3-D pose estimation of surgical tools, which is a critical but challenging step towards computer-assisted interventions.

Ashwin Sridhar

Dr. Ashwin Sridhar’s main interest is in minimally invasive surgery, image-guided surgery in Urology, digitization of surgery and surgical education. He completed an MSC (Surgical Technology) with distinction from Imperial College London, successfully defending his thesis on augmented reality image guidance for robotic prostatectomy. He is also currently pursuing an MA in clinical education with a view to developing a novel interactive environment incorporating established educational theories for the development of surgical skills. He is currently a consultant Urologist within the pelvic cancer team at UCLH focusing on bladder and prostate cancer as well as reconstructive surgery with a special interest in optimizing outcomes post treatment. He completed the EAU/ERUS curriculum and has performed more than 600 independent robotic procedures in pelvic cancer. His aim is to provide world-class cancer care for patients with pelvic urological cancers, those needing robotic pelvic urological reconstructive surgery, and to establish a multispecialty robotic training environment for continuous professional development.

John D. Kelly

Professor John Kelly is a consultant urological surgeon specializing in robotic surgery for bladder and prostate cancer. He is the lead for the London Cancer Urology Surgery Centre and the robotic surgery program at UCLH. John is the Professor of Uro-Oncology at UCL and his research group explores how new therapies for bladder and prostate cancer can improve outcomes for patients. He is the Clinical Lead for Urology at Westmoreland Street Hospital, one of the largest departments in Europe that delivers cutting edge surgery using the latest technologies. He moved to UCLH in 2009 from Cambridge University having worked at Addenbrookes in complex cancer surgery and as a fellow at the Cornell University Hospital, New York. He is known internationally for his pioneering work in robotic surgery and is the Director of the Chitra Sethia Minimal Access Centre at UCLH which trains robotic surgeons from the UK and abroad. John has been the Chairman of the UK National Cancer Research Institute, Bladder Clinical Studies Group, and is currently the Chairman of the Scientific Committee of The Urology Foundation. John holds the Orchid Chair of Male Genito-Urinary Oncology with the Orchid Charity.

Danail Stoyanov

Dan Stoyanov is a Professor within the Department of Computer Science at University College London (UCL) specializing in robot vision for surgical applications.  Dan is also the Director of the Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), a member of the Centre for Medical Image Computing (CMIC) at UCL, and a Royal Academy of Engineering Chair in Emerging Technologies.  Dan’s research interests and expertise are in surgical vision and computational imaging, surgical robotics, image-guided therapies, and surgical process analysis. Dan first studied electronics and computer systems engineering at King's College London before completing a Ph.D. in Computer Science at Imperial College London where he specialized in medical image computing. Dan works on vision problems in minimally invasive surgery especially related to non-rigid structure from motion, scene flow, and photometric and geometric camera calibration. His work is applied towards developing image guidance, computational biophotonic imaging modalities, and quantitative measurements during robotic-assisted minimally invasive procedures.  Dan is Chief Scientific Officer at Digital Surgery Ltd and Co-Founder of Odin Medical, both companies specializing in developing AI products for interventional healthcare.

