Editorial

Are augmented reality headsets in surgery a dead end?

Pages 999-1001 | Received 27 Aug 2019, Accepted 13 Nov 2019, Published online: 22 Nov 2019

1. Introduction

When performing a surgical procedure, the surgeon must continuously and mentally merge a large amount of patient-specific information about the pathology and the surrounding healthy tissues. This process relies on the surgeon’s prior knowledge of the anatomy and on his/her interpretation of the clinical data and of the geometrical and mechanical relations gathered on the patient, before and during the intervention, through his/her own senses.

Today, virtual reality (VR)-based visualization of medical images allows physicians to obtain further, potentially life-saving information on the 3D arrangement of the anatomy and to optimally plan the procedure (e.g. cutting lines and trajectories) [Citation1].

Visual Augmented Reality (AR) technologies allow such patient-specific medical imaging data to be merged with complex surgical scenes in a consistent way, directly in the surgeon’s view, without any subjective interpretation of their actual placement in the anatomy. Thus, AR can provide a sort of virtual ‘X-ray view’ (i.e., by virtually projecting non-exposed tissues), or it can display the surgical planning information (e.g. cutting lines and trajectories) consistently aligned with the real patient. Since the early demonstrations of this concept in the ’90s [Citation2], interest in AR for surgery has grown steadily [Citation3–Citation6], driven by the potential added value that such technology may bring in terms of clinical outcome.

Nowadays, some AR devices for surgical navigation are commercially available, at least for endoscopy on rigid structures [Citation7], whereas for soft tissues and open surgery scientific and technological limitations still remain to be overcome. Image-guided surgery on soft tissues indeed requires accurate and robust deformable registration algorithms and/or intraoperative real-time 3D scanners, either with AR or with more traditional VR-based surgical navigators [Citation8].

Wearable AR systems based on head-mounted displays (HMDs) are deemed the most ergonomic and effective solution for guiding procedures that are performed manually under the surgeon’s direct vision, owing to their ability to preserve the user’s egocentric viewpoint. Such devices can be integrated into traditional surgical navigators, or they can act as the whole navigation system on their own, since they often offer both display and tracking functionalities.

The availability of powerful and user-friendly AR HMDs has spurred the development of many demonstrators for open surgery, but without any actual clinical results so far. In a recent paper, we showed that general-purpose AR HMDs are not yet ready to be routinely used in surgery [Citation9], and the media have raised doubts about whether AR HMDs for surgery are a dead end.

2. Expert commentary

In general, medical devices must satisfy stricter requirements than consumer products in terms of quality of materials, safety, and certification. In the case of AR HMDs, there are additional requirements also in terms of functionality.

To obtain precise and safe guidance information in AR, the virtual information and the real anatomy must be perceived as coherently merged in space and time. First of all, it is fundamental to coherently register the virtual information with the patient in a common reference system. Such a task, as already mentioned, is also required in traditional VR-based surgical navigators, and it is not trivial, especially in the case of soft tissues.
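
As an illustrative sketch only (not the method of any specific navigator), and under the assumption that corresponding fiducial points are available both on the virtual model and on the patient, the classic point-based rigid registration can be written as a least-squares fit via singular value decomposition; the function and variable names below are hypothetical.

    import numpy as np

    def rigid_registration(model_points, patient_points):
        """Least-squares rigid transform (R, t) mapping virtual-model fiducials
        onto the corresponding fiducials measured on the patient.
        Both inputs are (N, 3) arrays of corresponding 3D points."""
        cm = model_points.mean(axis=0)
        cp = patient_points.mean(axis=0)
        H = (model_points - cm).T @ (patient_points - cp)   # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T              # guard against reflections
        t = cp - R @ cm
        return R, t

For soft tissues, a rigid fit of this kind is only a starting point: the deformable registration mentioned above must additionally model tissue displacement.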

Provided that the virtual information is properly registered with the patient, the use of video see-through (VST) HMDs, consisting of cameras anchored to wearable displays, allows a pixel-wise digital blending between the computer-generated information and the camera-mediated view of the real patient. Such an approach, if well implemented, introduces only negligible chromatic, temporal, and perspective alterations with respect to the naked-eye view, which generally do not affect manual performance [Citation10], at least for short-term use; to the best of our knowledge, there is no reported experience of their use over long periods of time. Finally, the occlusion of the direct view of the real anatomy raises safety concerns in the case of a system fault.
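
As a minimal sketch of the pixel-wise digital blending mentioned above, assuming the camera frame and the rendered guidance are available as floating-point RGB images of the same size (array names are illustrative, not those of any specific system):

    import numpy as np

    def compose_vst_frame(camera_rgb, overlay_rgb, overlay_alpha):
        """Composite the rendered guidance onto the camera-mediated view.
        camera_rgb, overlay_rgb: (H, W, 3) arrays in [0, 1];
        overlay_alpha: (H, W, 1) per-pixel opacity of the virtual content."""
        return overlay_alpha * overlay_rgb + (1.0 - overlay_alpha) * camera_rgb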

By contrast, in optical see-through (OST) HMDs the direct view of the real world is optically merged, through an optical combiner, with the virtual content. In this way, the user sees the real world naturally through a semi-transparent display, which confers a clear advantage in terms of visual comfort and safety over VST solutions. In OST HMDs, the tracking-registration-rendering cycle must run fast enough that no significant spatio-temporal discrepancy between the real world and the virtual content is perceived. Nonetheless, in commercial OST HMDs the spatial coherence between the digital content and the real scene is still suboptimal, and the cause lies in the natural view of the real world itself, as detailed below.
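
To give an order of magnitude of the speed requirement mentioned above, the apparent drift of the virtual content during head motion can be estimated as the head angular velocity multiplied by the end-to-end latency; the velocity, latency, and working-distance values below are illustrative assumptions only.

    import math

    def latency_drift_mm(head_velocity_deg_s, latency_s, working_distance_mm):
        """Apparent lateral lag of the overlay at the working distance caused by
        end-to-end (motion-to-photon) latency during a head rotation."""
        lag_deg = head_velocity_deg_s * latency_s
        return working_distance_mm * math.tan(math.radians(lag_deg))

    # e.g. a 100 deg/s head rotation with 20 ms of latency, at a 450 mm
    # working distance, yields roughly a 16 mm apparent misalignment
    print(round(latency_drift_mm(100.0, 0.020, 450.0), 1))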

In the real world, light rays (emitted, transmitted, or reflected along every direction and characterized in terms of wavelength and polarization) can in general be modeled as a 5D light field (LF) function [Citation11], whereas light rays crossing a surface, such as a semi-transparent display, can be parameterized with a 4D function (i.e., 2D for the incident position on the surface and 2D for the angles).
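
The dimensional argument can be made explicit with two illustrative signatures (wavelength and polarization omitted for brevity; these functions are only a notational sketch):

    # Radiance along an arbitrary ray in free space: 3 positional plus
    # 2 angular parameters, i.e. a 5D function.
    def radiance_free_space(x, y, z, theta, phi): ...

    # Radiance of rays crossing a fixed surface such as the semi-transparent
    # display: 2 parameters for the crossing point plus 2 for the direction (4D).
    def radiance_on_surface(u, v, theta, phi): ...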

OST LF displays are potentially capable of providing a complete spatial overlay between the computer-generated and the real LF. However, prototype solutions, based on integral imaging technology or on stacked LCD panels [Citation12,Citation13], are still characterized by an insufficient depth of field, a low spatial resolution, and a low light throughput of the display. Moreover, for the two products expected to reach the worldwide market in the near future, Magic Leap and AVEGANT, the full specifications of the light-field displays have not yet been clearly disclosed.

In commercially available non-LF HMDs, each eye sees the real world through a semi-transparent display, whereas the virtual content is optically projected onto a 2D virtual surface focused at a pre-defined distance, which lies outside the surgeon’s working distance (HMD focus distances range from 2 m up to infinity, whereas the surgeon normally works at 40–50 cm). The dimensional discrepancy between the 4D real-world LF that crosses the semi-transparent display and the 2D virtual content is an intrinsic source of perceptual issues.
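
The size of this mismatch can be quantified in diopters (the reciprocal of the distance in meters); the 45 cm working distance and the 2 m virtual-image distance used below are illustrative values within the ranges mentioned above.

    def accommodation_demand_diopters(distance_m):
        """Accommodation demand, in diopters, for an object at the given distance."""
        return 1.0 / distance_m

    d_patient = accommodation_demand_diopters(0.45)   # ~2.2 D at the surgical field
    d_virtual = accommodation_demand_diopters(2.0)    # 0.5 D at the virtual image
    print(round(d_patient - d_virtual, 1))            # ~1.7 D of focus mismatch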

The human eye is not able to accommodate so as to properly focus both the real world and the virtual content, and this focus rivalry is incompatible with the use of such displays as an aid to manual activities [Citation9].

In binocular HMDs, there is also the well-known perceptual mismatch that generates visual discomfort: the physiological stimulus to converge the eyes at the fixation point, somewhere close to the surgical table, conflicts with the need to focus on the virtual content projected farther away [Citation14].
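
The conflict can be illustrated by comparing the vergence angle required at the fixation point with that required at the virtual image; the interpupillary distance and the two distances used below are illustrative assumptions.

    import math

    def vergence_angle_deg(ipd_m, distance_m):
        """Angle between the two lines of sight when fixating a point at the
        given distance (symmetric fixation, simple pinhole model of the eyes)."""
        return 2.0 * math.degrees(math.atan((ipd_m / 2.0) / distance_m))

    print(round(vergence_angle_deg(0.063, 0.45), 1))  # ~8.0 deg at the surgical table
    print(round(vergence_angle_deg(0.063, 2.0), 1))   # ~1.8 deg at the virtual image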

Finally, given virtual information geometrically registered to the real patient for a certain eye position with respect to the display, the relative displacement of the current center of projection of the eye with respect to that position should be accurately determined [Citation15]. Estimating this displacement allows the parallax to be compensated; otherwise it becomes another source of misalignment in the perception of real and virtual information. Unfortunately, a robust and accurate estimation of the eye center of projection is hard to achieve, and on available OST HMDs the total registration error at the surgical table is hardly lower than 5 mm [Citation16].
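
The effect of an error in the assumed eye position can be sketched with simple pinhole geometry: a lateral offset of the true eye center of projection with respect to the calibrated one shifts the overlay, at the depth of the real target, by an amount that grows with the gap between the real and the virtual-image distances. The offset and distances below are illustrative assumptions, not measured values.

    def parallax_misalignment_mm(eye_offset_mm, virtual_image_m, target_m):
        """Lateral misalignment at the real target caused by a lateral error in
        the assumed eye center of projection (simple pinhole model)."""
        return eye_offset_mm * abs(target_m - virtual_image_m) / virtual_image_m

    # e.g. a 3 mm eye-position error, virtual image at 2 m, target at 0.45 m
    print(round(parallax_misalignment_mm(3.0, 2.0, 0.45), 1))  # ~2.3 mm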

3. Five-year view

There are different technological routes that could lead to commercial AR HMDs capable of guiding surgical procedures within five years.

LF displays are nowadays considered a technological leap forward, not only for AR, and the growing interest and investment bode well for the future availability of LF OST displays for AR in surgery as well.

A technologically simpler alternative worth investigating is the use of traditional 2D OST HMDs with the focus distance moved closer to the surgical table (i.e., the working area). According to the early, not yet published results of our group, such an approach should significantly mitigate the perceptual issues due to the dimensional discrepancy between the 4D real-world LF and the 2D virtual content, and it should allow the realization of profitable AR HMDs for surgery.

Finally, the VST approach should not be discarded either, because it is intrinsically able to offer, as required in some procedures, a magnified view of the surgical field (by setting a proper ratio between the fields of view of the camera and the display; see the sketch below). Today, surgeons are already used to wearing magnification glasses; therefore, five years from now we believe they will wear hybrid OST-VST HMDs, such as the one we are developing in our VOSTARS project (www.vostars.eu), capable of providing comfortable AR guidance in OST modality and a potentially magnified view in VST modality. The possibility of wearing a hybrid device will allow the benefits of both the OST and VST approaches, explained in these pages, to be merged.
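
As a rough sketch of the magnification mechanism mentioned above, the angular magnification of a VST view can be approximated from the ratio of the tangents of the half fields of view of the display and of the camera; the field-of-view values below are illustrative only.

    import math

    def vst_angular_magnification(camera_fov_deg, display_fov_deg):
        """Approximate angular magnification obtained by spreading the camera
        image (camera_fov_deg) across the display field of view (display_fov_deg)."""
        return (math.tan(math.radians(display_fov_deg) / 2.0)
                / math.tan(math.radians(camera_fov_deg) / 2.0))

    print(round(vst_angular_magnification(20.0, 40.0), 2))  # ~2.06x magnification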

Key issue

Bringing AR in front of the surgeon’s eyes by means of an HMD is a complex matter that should be faced in a holistic way, taking into consideration the surgeon’s needs, the human visual perceptual mechanism, and the technological potentialities and pitfalls of the optoelectronic, optical, electronic, and software components.

Declaration of interest

The authors have no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties.

Reviewer disclosures

One peer reviewer is a medical advisor for XVISION, an augmented reality navigation system for percutaneous pedicle screw placement. Peer reviewers on this manuscript have no other relevant financial relationships or otherwise to disclose.

Additional information

Funding

This paper was not funded.

References

  • Ferrari V, Carbone M, Cappelli C, et al. Value of multidetector computed tomography image segmentation for preoperative planning in general surgery. Surg Endosc. 2012;26(3):616–626.
  • Grimson W, Lozano-Pérez T, Wells W, et al., editors. Automated registration for enhanced reality visualization in surgery. Proceedings of the First International Symposium on Medical Robotics and Computer Assisted Surgery, Pittsburgh, Pennsylvania; 1994.
  • Meola A, Cutolo F, Carbone M, et al. Augmented reality in neurosurgery: a systematic review. Neurosurg Rev. 2017;40(4):537–548.
  • Fida B, Cutolo F, Di Franco G, et al. Augmented reality in open surgery. Updates Surg. 2018;70(3):389–400.
  • Kim Y, Kim H, Kim YO. Virtual reality and augmented reality in plastic surgery: a review. Arch Plast Surg. 2017;44(3):179.
  • Ma L, Fan Z, Ning G, et al. 3D visualization and augmented reality for orthopedics. In: Intelligent Orthopaedics. Springer; 2018. p. 193–205.
  • Citardi MJ, Agbetoba A, Bigcas JL, et al. Augmented reality for endoscopic sinus surgery with surgical navigation: a cadaver study. Int Forum Allergy Rhinol. 2016 May;6(5):523–528.
  • Ferrari V, Moglia A, Ferrari M. Analytic description of the image to patient torso registration problem in image guided interventions. 2015.
  • Condino S, Carbone M, Piazza R, et al. Perceptual limits of optical see-through visors for augmented reality guidance of manual tasks. IEEE Trans Biomed Eng. 2019. doi: 10.1109/TBME.2019.2914517.
  • Cutolo F, Fontana U, Ferrari V. Perspective preserving solution for quasi-orthoscopic video see-through HMDs. Technologies. 2018;6(1):9.
  • Levoy M. Light Fields and Computational Imaging. Computer. 2006;39(8):46–55.
  • Maimone A, Fuchs H. Computational augmented reality eyeglasses. In: IEEE International Symposium on Mixed and Augmented Reality (ISMAR); 2013. p. 29–38.
  • Calabrò EM, Cutolo F, Carbone M, et al., editors. Wearable augmented reality optical see through displays based on integral imaging. International Conference on Wireless Mobile Communication and Healthcare, Milan, Italy; MobiHealth 2016 Nov 14–16. Springer.
  • Kramida G. Resolving the vergence-accommodation conflict in head-mounted displays. IEEE Trans Vis Comput Graph. 2016;22(7):1912–1931. Epub 2015 Sep 4. PubMed PMID: 26336129.
  • Grubert J, Itoh Y, Moser K, et al. A survey of calibration methods for optical see-through head-mounted displays. IEEE Trans Vis Comput Graph. 2018 Sep;24(9):2649–2662. Epub 2017 Sep 30. PubMed PMID: 28961115.
  • Condino S, Turini G, Parchi PD, et al. How to build a patient-specific hybrid simulator for orthopaedic open surgery: benefits and limits of mixed-reality using the microsoft HoloLens. J Healthc Eng. 2018;2018:1–12.
