Research Article

Virtual reality for synergistic surgical training and data generation

Adnan Munawar, Zhaoshuo Li, Punit Kunjam, Nimesh Nagururu, Andy S. Ding, Peter Kazanzides, Thomas Looi, Francis X. Creighton, Russell H. Taylor & Mathias Unberath
Pages 366-374 | Received 22 Oct 2021, Accepted 25 Oct 2021, Published online: 26 Nov 2021
 

ABSTRACT

Surgical simulators not only allow planning and training of complex procedures, but also offer the ability to generate structured data for algorithm development, which may be applied in image-guided, computer-assisted interventions. While there have been efforts to develop either training platforms for surgeons or data generation engines, to our knowledge these two features have not been offered together. We present a cost-effective and synergistic framework, named Asynchronous Multibody Framework Plus (AMBF+), which generates data for downstream algorithm development while users practice their surgical skills. AMBF+ offers stereoscopic display on a virtual reality (VR) device and haptic feedback for immersive surgical simulation. It can also generate diverse data such as object poses and segmentation maps. AMBF+ is designed with a flexible plugin setup that allows unobtrusive extension to simulate different surgical procedures. We show one use case of AMBF+ as a virtual drilling simulator for lateral skull-base surgery, in which users actively modify the patient anatomy with a virtual surgical drill. We further demonstrate how the generated data can be used to validate and train downstream computer vision algorithms.
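To make the abstract's downstream use of the generated data concrete, the following is a minimal sketch of how simulator-recorded frames, segmentation maps, and object poses might be loaded for training or validating a vision model. The directory layout, file names, and pose-record format are illustrative assumptions, not the actual AMBF+ output specification.

    # Hypothetical sketch of consuming simulator-generated data for a downstream
    # segmentation task. The layout and formats below are assumptions for
    # illustration only; the real AMBF+ recording format may differ.
    import json
    from pathlib import Path

    import numpy as np
    import torch
    from PIL import Image
    from torch.utils.data import DataLoader, Dataset


    class SimulatedDrillingDataset(Dataset):
        """Rendered RGB frames paired with per-pixel segmentation labels and
        the object poses recorded at the same simulation timestep."""

        def __init__(self, root: str):
            self.root = Path(root)
            # Assumed layout: root/rgb/000000.png, root/seg/000000.png,
            # root/poses/000000.json, with one record per rendered frame.
            self.frame_ids = sorted(p.stem for p in (self.root / "rgb").glob("*.png"))

        def __len__(self) -> int:
            return len(self.frame_ids)

        def __getitem__(self, idx: int):
            fid = self.frame_ids[idx]
            rgb = np.array(Image.open(self.root / "rgb" / f"{fid}.png").convert("RGB"))
            seg = np.array(Image.open(self.root / "seg" / f"{fid}.png"))  # integer class IDs
            with open(self.root / "poses" / f"{fid}.json") as f:
                poses = json.load(f)  # e.g. {"drill": {"position": [x, y, z], ...}}
            image = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0  # CHW, [0, 1]
            target = torch.from_numpy(seg.astype(np.int64))
            return image, target, poses


    # A segmentation network would then train on (image, target) batches, while
    # the recorded poses can serve as ground truth for pose-estimation methods.
    loader = DataLoader(SimulatedDrillingDataset("recordings/run_001"), batch_size=4)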

Acknowledgments

This work was supported by: 1) an agreement between LCSR and MRC, 2) Cooperative Control Robotics and Computer Vision: Development of Semi-Autonomous Temporal Bone and Skull Base Surgery (K08DC019708), 3) an Intuitive research grant, 4) a research contract from Galen Robotics, and 5) Johns Hopkins University internal funds.

Disclosure statement

Russell H. Taylor is a paid consultant to Galen Robotics and has an equity interest in that company. These arrangements have been reviewed and approved by JHU in accordance with its conflict of interest policy.

Additional information

Funding

This work was supported by Intuitive Inc. [N/A]; Johns Hopkins University [N/A]; an agreement between LCSR and MRC [90088566]; Galen Robotics Inc. [90072284]; and Johns Hopkins Hospital [K08DC019708].

Notes on contributors

Adnan Munawar

Adnan Munawar is a Postdoctoral Research Fellow at the Lab for Computational Sensing and Robotics (LCSR) at the Johns Hopkins University. He received his Ph.D. and M.S. from Worcester Polytechnic Institute (WPI) while working at the Automation in Interventional Medicine (AIM) Laboratory. He was the recipient of the Fulbright Scholarship during his M.S. program. His research interests broadly include medical robotics, haptics and force control, simulation and animation, and shared teleoperation systems.

Zhaoshuo Li

Zhaoshuo Li is currently a Computer Science PhD student at Johns Hopkins University. He received his bachelor's degree with honors from the University of British Columbia. He has published in academic conferences including ICCV, IROS, ICRA, MICCAI, and IPCAI. He is a former intern at Reality Labs (Meta Inc.) and Intuitive Inc. His current research interests include computer vision, deep learning, and 3D reconstruction.

Punit Kunjam

Punit Kunjam is a Computer Science graduate who is passionate about developing and evaluating new technologies using VR/AR, and who aspires to become a research scientist. His research interests lie in human-computer interaction, augmented reality, virtual reality, computer graphics, and robotics. He is particularly interested in problems with practical impact and enjoys exploring applications across different problem domains, connecting real-world applications with theoretical models by abstracting a problem to its essence and then devising effective techniques for its solution.

Nimesh Nagururu

Nimesh Nagururu is currently an M.D. candidate at the Johns Hopkins University School of Medicine. He graduated from the University of Miami in 2020 with a degree in Biomedical Engineering. His current research interests include surgical robotics, image processing and sensory dysfunction.

Andy S. Ding

Andy S. Ding is a fourth-year medical student at Johns Hopkins University School of Medicine pursuing a career in academic neurotology. He is a member of the Laboratory for Computational Sensing and Robotics and the Hunterian Neurosurgical Research Laboratory. His primary research interests focus on advancing robotic microsurgical systems in otolaryngology and neurosurgery.

Peter Kazanzides

Peter Kazanzides received the Ph.D. degree in electrical engineering from Brown University in 1988 and began work on surgical robotics as a postdoctoral researcher at the IBM T.J. Watson Research Center. He co-founded Integrated Surgical Systems (ISS) in November 1990 to commercialize the robotic hip replacement research performed at IBM and the University of California, Davis and served as Director of Robotics and Software. Dr. Kazanzides joined Johns Hopkins University in 2002 and is currently appointed as a Research Professor of Computer Science. His research interests include medical robotics, space robotics, and mixed reality.

Thomas Looi

Thomas Looi is the Posluns Innovator and Project Director for The Wilfred and Joyce Posluns Centre for Image Guided Innovation and Therapeutic Intervention at the Hospital for Sick Children. He is also appointed in the Department of Mechanical and Industrial Engineering at the University of Toronto. He is an engineer by training, having received a PhD in Biomedical Engineering from the University of Toronto and a Master of Business Administration from the Rotman School of Business. Previously, he spent a number of years in the aerospace sector working on space robotics projects. His research interests include minimally invasive surgical tools, surgical/medical robotics, and MRI-guided robotics.

Francis X. Creighton

Dr. Francis Creighton is a fellowship-trained neurotologist and lateral skull base surgeon. His clinical practice specializes in the surgical and medical treatment of middle ear, inner ear, skull base, and facial nerve disorders, including skull base tumors, vestibular schwannomas (acoustic neuromas), hearing loss, cholesteatoma, cochlear implantation, stapedectomy, CSF leaks, and eardrum perforations. He is trained in minimally invasive and endoscopic approaches to the ear for cholesteatoma and eardrum perforations, which reduce the need for visible incisions. His research focuses on integrating robotic and augmented reality platforms to improve surgical safety and efficiency.

Russell H. Taylor

Russell H. Taylor received the Ph.D. degree in computer science from Stanford University in 1976. After working as a Research Staff Member and Research Manager with IBM Research from 1976 to 1995, he joined Johns Hopkins University, where he is the John C. Malone Professor of Computer Science with joint appointments in Mechanical Engineering, Radiology, and Surgery, and is also the Director of the Laboratory for Computational Sensing and Robotics. He is a member of the US National Academy of Engineering and an author of over 500 peer-reviewed publications and over 90 issued US and international patents. He is also a Fellow of the IEEE, the MICCAI Society, the National Academy of Inventors, and the AIMBE. His research interests include robotics, human-machine cooperative systems, medical imaging and modeling, and computer-integrated interventional systems.

Mathias Unberath

Mathias Unberath is an Assistant Professor in the Department of Computer Science at Johns Hopkins University, affiliated with the Laboratory for Computational Sensing and Robotics and the Malone Center for Engineering in Healthcare. With his group, the ARCADE Lab, he develops collaborative intelligent systems that support clinical workflows to increase access to, and expand the possibilities of, the highest-quality healthcare. Through the synergistic advancement of imaging, computer vision, machine learning, and interaction design, he pioneers human-centered solutions embodied in emerging technology such as mixed reality and robotics.
