Editorial

The future of virtual reality in cataract surgical training

Pages 193-196 | Received 31 Jan 2020, Accepted 09 Jun 2020, Published online: 30 Jun 2020

Over the last decade we have seen a rapid proliferation of simulation technologies, and in the next ten years these will permeate almost every facet of cataract surgical training. Artificial intelligence and big data, along with immersive tools, are also key and will complement enhancements to present delivery methods.

Current high-fidelity virtual reality surgery simulators, such as the EyeSi (VRmagic Holding AG, Mannheim, Germany), replicate the ergonomics and skills required for surgical practice, offering generic or task-specific modules and standardized scores that are repeatable, reproducible, and correlate well with real-life surgical skill. Simulator training improves surgeons’ operating room performance and shortens the learning curve [1–7]. An ever-greater array of simulated surgical complications and clinically challenging pathologies will be added, and the future will also see the introduction of full cataract cases. User accounts, which can be accessed remotely, will allow easier monitoring as well as teaching from afar.

Tactile feedback, reproducing that experienced in live patient surgery, is being developed using vibration and sensory substitution [8]. HelpMeSee (New York, USA) is building a simulator for micro-incision cataract surgery that provides this haptic feedback. The purpose of the simulator is to train surgeons in small-incision cataract surgery in order to increase cataract surgical rates in low-resource countries [9,10]. Lower-fidelity wet and dry lab courses will also progress, as advances in material science and design improve the proprioceptive feedback, bio-fidelity, and operating room (OR) integration of such systems [11–13]. These will run in parallel with simulators offering augmented reality and visual overlay, introducing more realistic ‘intra-operative’ challenges such as bleeding, defocus, or conjunctival chemosis.

Integration of 3-D viewing platforms into simulators (in addition to the fixed microscope oculars) is also emerging [14–17]. This is occurring in parallel with OR developments such as the Ngenuity 3-D Visualization System (Alcon Inc., Fort Worth, TX, USA) and the Trenion 3-D HD (Carl Zeiss Meditec, Jena, Germany), which will allow enhanced trainer–trainee viewing exchange, multiple potential participants, better physiological posturing, and the possibility of remote mentoring [18].

Virtual platforms have a wider scope for two-way exchange, with different forms of telecommunication or telementoring, which also exist with robotic surgical interfaces, for both pedagogical and delivery purposes [19–22]. These systems can display real-time information in the virtual space, for example OCT data and hazard alerts, along with relaying preceptor instructions [23]. The latter could include a video overlay demonstrating where to mobilize instruments or which areas to avoid. Virtual interfaces may also be one gateway into ophthalmic robotics, where additional degrees of freedom, stability, and machine augmentation could be realized. Feasibility and proof-of-concept studies for robot-assisted cataract surgery [24], penetrating keratoplasty [25], corneal laceration repair [26], pterygium surgery [27,28], and amniotic membrane transplantation [29] have been demonstrated, and as the field progresses, tremor filtering, remote surgery, and smart surgical tools (able to act as a virtual assistant) may all be added to improve performance and reduce error rates [21,30,31].

Cognitive mentoring platforms, such as the Boston Virtual Mentor Cataract Surgery Trainer (Adobe Systems Inc., San Jose, CA, USA), provide online virtual reality simulation to practice surgical decision-making, error recognition, and situational awareness [32]. Much broader virtual reality based versions of these tools are emerging: some immersed in the 3-D virtual space, others allowing interactive team coaching, and yet others preparing trainees for high cognitive loading through forward planning of the surgical procedure using mind mapping (visually organizing the information in a hierarchical fashion). Web- and app-based surgical simulation, such as that provided by TouchSurgery (DigitalSurgery Inc., London, U.K.), can address this precise need both by providing free, interactive, touch-screen surgical content to improve comprehension and through mixed-reality (blended virtual and real) instruction via the HoloLens (Microsoft Inc., Redmond, U.S.). These allow users to undertake the virtual operation with prompts and visual displays, along with modular instruction on a step-by-step basis. Virtual, augmented, and mixed reality will also be used to improve students’ attention, satisfaction, and motivation during the cataract surgical learning process, enabling users to enhance skills and knowledge transfer, while immersive simulation technologies will also shape safety and human factors training (HFT) for surgery in the near future [33–36]. HFT has been piloted in ophthalmic surgery but is very expensive; virtual reality technology, however, will make individual and whole-team mentoring in this domain accessible and affordable [37].

The availability of surgical training data through electronic portfolios, electronic medical records (both locally and more broadly, e.g. national ophthalmic databases), and the digitization of other surgical data will make for much more powerful analysis in the future [38]. The automation of objective structured rubrics (e.g. the ICO-OSCAR:phaco) and machine learning tracker algorithms will also allow real-time challenges to be pinpointed, with targeted virtual reality tutoring stemming from this [39–42]. Networked simulators will provide benchmarking, and all of this information aggregation will allow big data analytics with tailored virtual reality surgical instruction for the individual.

Building on previous machine vision work, artificial intelligence (AI) will also enter the fray. Deep learning algorithms will be deployed for real-time analysis in theaters, with a range of generated metrics allowing more individualized training by targeting the aspects of performance that require input [43–46]. Digital Surgery (Digital Surgery Inc., London, U.K.), a software and content creator for surgical training, is developing such a system with Moorfields Eye Hospital; this will allow cataract complication risk estimation and reduction by providing pre- and peri-operative, case- and surgeon-specific data and alerts, interactive team training, and cross-fertilization with simulation. These systems also offer new skills-assimilation options for intermediate and advanced surgeons.

Microsurgery is a highly technical skill, taking years to master in an ergonomically challenging environment (requiring all four limbs to function independently), all whilst using an indirect viewing platform [45]. The backbone of current surgical education is still the mentor–apprentice Halstedian model, and this will remain, albeit with significant enhancements [46]. Transformational changes to cataract surgical tutelage, and the virtual reality technology deployed to enhance it, will emerge in the near future [47,48]. Facilitated by accelerating developments in this sphere, these technologies will, in our opinion, offer a rapidly improving array of skills-acquisition options, delivered flexibly, remotely, and in a personalized fashion, reflecting a deeper understanding of modern learning theories.

Expert Opinion

High-fidelity virtual reality simulators, which afford surgical skills transfer and replicate operative ergonomics, have proliferated over the last decade. In the near future, technical simulation will be complemented by big data analytics of trainee and patient outcomes, artificial intelligence algorithms, interactive web- and app-based platforms, immersive displays and real-time data exchange systems, and cognitive training and virtual mentoring, all to enhance the learning of surgery.

Declaration of interest

George Saleh was supported by the National Institute for Health Research (NIHR) Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and the UCL Institute of Ophthalmology. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR, or the Department of Health. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.

Reviewer disclosures

Peer reviewers on this manuscript have no relevant financial or other relationships to disclose.

Additional information

Funding

This paper was not funded.

References
