Biomedical Paper

Virtual reality-enhanced ultrasound guidance: A novel technique for intracardiac interventions

Pages 82-94 | Received 30 Nov 2007, Accepted 19 Jan 2008, Published online: 06 Jan 2010

Abstract

Cardiopulmonary bypass surgery, although a highly invasive interventional approach leading to numerous complications, is still the most common therapy option for treating many forms of cardiac disease. We are currently engaged in a project designed to replace many bypass surgeries with less traumatic, minimally invasive intracardiac therapies. This project combines real-time intra-operative echocardiography with a virtual reality environment providing the surgeon with a broad range of valuable information. Pre-operative images, electrophysiological data, positions of magnetically tracked surgical instruments, and dynamic surgical target representations are among the data that can be presented to the surgeon to augment intra-operative ultrasound images. This augmented reality system is applicable to procedures such as mitral valve replacement and atrial septal defect repair, as well as ablation therapies for treatment of atrial fibrillation. Our goal is to develop a robust augmented reality system that will improve the efficacy of intracardiac treatments and broaden the range of cardiac surgeries that can be performed in a minimally invasive manner. This paper provides an overview of our interventional system and specific experiments that assess its pre-clinical performance.

Introduction

Almost all surgical procedures can be divided into two stages: the first stage is gaining access to the target tissue, and the second is the treatment or therapy to be delivered to the target. Most complications associated with surgical procedures do not arise from the therapy itself, but rather from the process of gaining direct access to the target. In the case of cardiac procedures, most of the surgical trauma is associated with the delivery system rather than the actual therapy. While some side-effects are temporary and present minimal risk (e.g., general anesthesia), others are associated with severe morbidity (e.g., permanent neurological damage, which is a potential side-effect of cardiopulmonary bypass [CPB] surgery) or even mortality. If most therapies could be achieved without the need for significant “surgery” to gain access to the site of disease, procedure time would be shortened, patient trauma and side-effects reduced, and healthcare costs correspondingly lowered.

The evolution of surgery has progressed rapidly towards minimally invasive procedures. However, minimizing invasiveness inevitably leads to more limited visual access to the target tissues. This is especially true for intracardiac treatments such as valve repair or replacement, atrial septal defect (ASD) closure, and ablation treatments for arrhythmias, where the goal is to minimize the physical invasiveness of a full sternotomy while eliminating the emerging risks associated with CPB. Since these procedures require reaching targets inside the heart, direct visual access is simply not possible while the heart is beating. Prior to the advent of CPB, procedures such as mitral commissurotomy and ASD repair were actually performed on the beating heart. Without direct line-of-sight information or image guidance technology, such procedures had very limited success and high morbidity rates Citation[1–3]. However, recent advances in modern imaging technologies have revived opportunities for off-pump intracardiac therapies. Access methods range from percutaneous Citation[4] to direct access, usually via the left or right atrial appendages Citation[5–8], and to apical access through the left ventricle Citation[9]. The imaging modalities employed for intra-procedure guidance typically involve X-ray fluoroscopy, magnetic resonance imaging (MRI), and two- and three-dimensional ultrasound (US).

In some cases, image guidance is limited to a simple display of imaging data via a monitor in the operating room (OR) Citation[7–10]. Such approaches typically require the surgeon to mentally reconstruct the three-dimensional (3D) geometry of both surgical tools and patient anatomy from two-dimensional (2D) image data. Three-dimensional ultrasound provides a more intuitive geometry, but is limited to a fairly small field of view, making it difficult to maintain adequate visualization of both surgical instruments and target anatomy. US is also subject to image quality issues regarding both interpretation of anatomical details and visualization of metallic instruments. Moreover, the surgeon is also required to interpret motion information (e.g., bringing a tool into contact with the target tissue) from a 2D monitor output and then transpose it to 3D motion patterns of real tools in the OR.

Several groups have addressed some of these limitations by integrating imaging data with various virtual elements to add electrophysiological information and/or spatial context to the intra-procedure images Citation[6], Citation[11–13]. Our own approach is to rely on echocardiography for intra-operative guidance, fuse it with pre-operative information, and make use of tracking technologies to provide a robust system for surgical guidance, thus eliminating the radiation exposure associated with fluoroscopy.

The global objective of our augmented reality (AR) surgical platform is to provide the surgeon with a simple, intuitive system for surgical navigation in the absence of direct vision. Specifically, in the context of cardiac surgery, our goal is to build a sufficiently robust and reliable system to benefit most minimally invasive intracardiac therapies, although our current experience has focused on direct-access mitral valve replacement and ASD repairs.

Our long-term research goals can be illustrated by following a hypothetical patient through the prospective clinical routine associated with our proposed technique for minimally invasive beating-heart interventions. During procedure planning, a first step is to acquire a set of clinical cine MR images of the subject. A static 3D subject-specific cardiac model is constructed by registering a high-resolution average heart model containing the segmented surgical target and the surrounding anatomy to the mid-diastole clinical-quality subject image Citation[14]. Intra-operatively, trans-esophageal echocardiography (TEE) is used to provide the surgeon with the necessary real-time visualization information. To facilitate intra-procedure navigation, we augment the real-time US images with the 3D anatomical models obtained during planning, using a feature-based registration technique. Ultimately, we complement the “surgical space” with virtual representations of the surgical instruments tracked in real time, generating a comprehensive virtual reality (VR) environment that allows the surgeon to visualize and navigate inside the beating heart in the absence of direct vision.

In this paper we provide a generic overview of our VR-enhanced US image guidance system, including specific details on its engineering components, various clinical applications, and different investigations designed to assess pre-clinically the accuracy and feasibility of the system. Figure 1 presents a schematic representation of our system and its configuration in the OR.

Figure 1. Schematic displaying the layout of our VR-enhanced US guidance surgical system, its components, and its typical configuration within an OR. Integrated ultrasound and virtual reality data can be delivered to the surgeon either via a monitor over the operating table, or via head-mounted display (HMD) units.


Surgical guidance platform architecture: 1) AtamaiViewer

The AtamaiViewer (http://www.atamai.com) is a software platform designed to integrate all components necessary for image-guided surgery applications. It is portable across Windows, Linux, and OS-X operating systems, using a Python-based user interface and the Visualization Toolkit (VTK) for rendering and visualization. Python is a high-level scripting language that allows fast development of prototype algorithms, which can be easily transposed to C++ classes in VTK for further speed optimization. Moreover, the modular design of the platform facilitates the addition of new components for various applications.
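To make the modularity described above concrete, the following is a minimal sketch of the kind of component-registry pattern such a platform might use; the class and method names are illustrative inventions, not the actual AtamaiViewer API.

```python
# Hypothetical sketch of a modular viewer architecture; names are
# illustrative, not the real AtamaiViewer classes.

class ViewerComponent:
    """Base class: each component declares a name and an update hook."""
    name = "base"

    def update(self, scene):
        raise NotImplementedError


class Viewer:
    def __init__(self):
        self._components = {}

    def register(self, component):
        # New functionality (US overlay, tool tracking, ...) plugs in
        # without modifying the core viewer.
        self._components[component.name] = component

    def refresh(self, scene):
        for component in self._components.values():
            component.update(scene)


class UltrasoundOverlay(ViewerComponent):
    name = "us_overlay"

    def update(self, scene):
        scene.append("US image overlaid")


viewer = Viewer()
viewer.register(UltrasoundOverlay())
scene = []
viewer.refresh(scene)
print(scene)  # -> ['US image overlaid']
```

In a real deployment the prototype components would, as the text notes, be migrated from Python to C++/VTK classes once their behavior stabilizes.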

Components

The AtamaiViewer integrates the visualization of pre-operative MR or CT images with intra-operative US, endoscopic data, tracked surgical tools, haptic devices, and virtual models. It provides the ability to selectively combine the different imaging components, set translucency levels for overlays, view the 3D data stereoscopically, and visualize volumetric data from orthogonal or oblique planes, as well as via cine sequences synchronized with the intra-operative ECG Citation[15]. It is also possible to combine optical and magnetic tracking systems within a common virtual workspace for a single application.

Applications

A wide variety of applications have been developed within the AtamaiViewer environment, several relating to intracardiac image-guided surgery and therapy. These projects include 4D electrophysiology mapping for atrial fibrillation therapy Citation[16], an augmented reality system for port placement Citation[17], and registration of intracardiac 2D US to pre-operative CT or MR data Citation[18]. The most challenging application implemented within the AtamaiViewer is the planning and guidance of mitral valve (MV) replacement and ASD repair.

Surgical guidance platform architecture: 2) Augmented reality environment

In this section we show how a series of AtamaiViewer components have been integrated to form an AR environment used to guide minimally invasive interventions. As much of our work has been motivated by the need for less invasive approaches to cardiac surgery, most applications have been designed and implemented in the context of the guidance and navigation of intracardiac procedures on the beating heart. Direct intracardiac access is achieved using the Universal Cardiac Introducer® (UCI) Citation[5], which is affixed to the left atrial appendage of the heart via a left-anterior minithoracotomy. The UCI acts as an “air lock” between the blood-filled cavity and the chest, allowing for the introduction and manipulation of surgical instruments inside the beating heart with minimal blood loss Citation[5].

Intra-operative guidance: echocardiography

To compensate for the lack of direct vision inside the beating heart, our system primarily employs a 4- to 7.5-MHz trans-esophageal echocardiography (TEE) probe for intra-operative imaging. To further assist in visualization, a 3D trans-thoracic echocardiography (TTE) probe is also available to acquire 3D images with a larger field of view. Suematsu et al. Citation[7] likewise argued that 3D echocardiography is superior to 2D US for guiding instruments within the beating heart. They reported their experience using only 3D US as the guidance platform in a laboratory environment, without the benefit of a virtual environment.

Based on our prior experience in intracardiac navigation, we concluded that 2D TEE guidance had significant disadvantages when used as the sole modality for image guidance, even when complemented with 3D TTE. Both anatomical targets and surgical tools were poorly perceived in US images, making it impossible to assess their position and orientation during manipulation, especially since the 2D cross-sectional images do not provide the necessary context within the 3D cardiac anatomy. Questions that frequently arose during the procedure included whether the prosthetic valve was within the mitral orifice, or whether the valve skirt was in contact with the valve ring, and answering these questions was difficult even for an experienced surgical team (Figure 2). Although 3D TEE may offer a future solution, given its high image quality, its limited field of view may impose further challenges in visualizing the surgical tools and target in the same volume. In spite of these limitations with respect to image guidance, however, the Doppler capabilities of US are ideal for assessing the interventions and abnormalities in flow patterns, such as regurgitant flow through or around the mitral valve, or incomplete ASD repair.

Figure 2. a) Two-dimensional TEE image of the valve tool and clip applier inside a beating porcine heart. b) Three-dimensional US image of a similar scene. Note the difficulty in interpreting the anatomical features and surgical tools and the limited field of view with 3D US.


To enhance intra-procedure guidance, we displayed the 2D US data within a more robust anatomical and surgical context. We integrated two main components within the AtamaiViewer: pre-operative cardiac models and virtual representations of surgical tools and the US probe, both tracked in real time, as described in the upcoming sections.

Pre-operative planning and guidance

Cardiac models

MRI is often considered the gold-standard modality for cardiac imaging, given its superior soft tissue contrast and 4D imaging capabilities. Due to their outstanding tissue characterization, pre-operative MR images can be used during the procedure planning stage to extract anatomical features of interest. These data can then be used to generate heart models that display the cardiac anatomy as 3D rendered surfaces. However, clinical MR images exhibit low spatial resolution, a low signal-to-noise ratio (SNR), and motion artifacts. Consequently, surgical targets extracted directly from these clinical images may not be sufficiently accurate for procedure planning and guidance.

A significant component of the work in our laboratory has been focused on addressing this concern. Moreover, driven by our ultimate objective to translate this research into the clinic, we developed and validated efficient techniques to construct cardiac models from pre-operative images using human subject data. We used an approach that employs a high-quality average heart model Citation[19] to characterize anatomical structures in low-quality subject images. This model was built from low-resolution MR images (6-mm slice thickness) of 10 subjects, in which various anatomical structures (i.e., surfaces of the cardiac chambers and valvular structures) were manually segmented from each image Citation[20], and the resulting data were then co-registered into a common high-resolution reference image (1.5-mm slice thickness) Citation[19]. The model consists of both an image and a geometry component: the image component represents a measure of the MR appearance of the heart, while the geometric component is represented by the shape variability of the segmented anatomical features (Figure 3).

Figure 3. a) Image of the prior high-resolution average heart model at mid-diastole (MD). b) Prior model at mid-diastole showing two segmented features of interest: the left ventricle surface and the mitral valve annulus. [Color version available online.]


Subject-specific cardiac models can be easily obtained by fitting the average heart model to a subject-specific image dataset. A similar registration-based segmentation approach was also undertaken by Lorenzo-Valdés et al. Citation[21], who constructed and segmented an average heart model based on population images, and registered it to target images to automate segmentation. Despite the low resolution of the subject data, our models specific to the left ventricular myocardium (LV), the left atrium (LA), and the right atrium and ventricle (RAV) were previously shown to be accurate within 5.0 ± 1.0 mm, 4.7 ± 0.9 mm, and 5.3 ± 1.3 mm, respectively Citation[20]. Having demonstrated their feasibility, we are currently adapting these modeling techniques to animal studies, in which pre-operative MR data can be acquired prior to the procedure Citation[22].

Surgical target localization

Using pre-operative modeling, we can also predict the location of dynamic surgical targets within the subject-specific models throughout the cardiac cycle. To illustrate this feature, we performed a study to characterize the location and geometry of the mitral valve annulus (MVA) using pre-operative information. Using the previously described subject-specific cardiac models animated with motion information extracted using non-rigid image registration Citation[14], we predicted the location and geometry of the MVA at four time-points in the cardiac cycle (mid-diastole, end-diastole, mid-systole, and end-systole) with an overall accuracy of 3.1 ± 0.25 mm Citation[23]. The predicted MVA locations were validated against gold-standard locations extracted from 3D US images of the same subject using a segmentation tool similar to the TomTec 4D MV Assessment software available for clinical applications (TomTec, Unterschleissheim, Germany). A summary of these results is presented in Table I.

Table I.  Root-mean-square (RMS) distance between the model-predicted MVA and the gold-standard (US-extracted) MVA, quantified at four phases throughout the cardiac cycle.

Pre- to intra-operative registration

To augment the intra-operative TEE data with the pre-operative cardiac models (Figure 4), we employed a feature-based registration technique. This method is suitable for cardiac interventions, as the selected valvular structures are easily identifiable in both the pre-operative and intra-operative images, and they also ensure a good alignment of the pre-operative and intra-operative surgical targets. The registration algorithm consisted of aligning the aortic and mitral valve annuli (AVA and MVA, respectively) defined in the pre-operative model with those identified intra-operatively in the US images. An initial alignment between the pre-operative and intra-operative annuli was obtained by minimizing the distance between their centroids, as well as the tips of their corresponding normal unit vectors. Following initial alignment, the downhill simplex optimizer Citation[24] was used to further minimize the distance between the two sets of annuli. Our results show an RMS distance error of 5.2 mm, 4.1 mm, and 7.3 mm in aligning the pre-operative and intra-operative features located within 10 mm of the valvular region in each of the LV, LA and RAV surfaces, respectively Citation[25].
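The initial alignment step can be sketched as follows. This is an illustrative toy example, not the authors' code: it translates one annulus point set so that its centroid coincides with the other's and reports the RMS point-to-point distance; the subsequent downhill-simplex refinement and the normal-vector term are omitted, and the coordinates are made up.

```python
import math

def centroid(pts):
    """Mean position of a list of 3D points."""
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def translate(pts, d):
    """Shift every point by displacement d."""
    return [tuple(p[i] + d[i] for i in range(3)) for p in pts]

def rms_distance(a, b):
    """RMS of corresponding point-to-point distances."""
    return math.sqrt(sum(
        sum((p[i] - q[i]) ** 2 for i in range(3)) for p, q in zip(a, b)
    ) / len(a))

# Hypothetical annulus samples in mm (pre-operative model vs. intra-operative US)
pre_mva = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 10.0, 0.0), (0.0, 10.0, 0.0)]
intra_mva = [(2.0, 1.0, 3.0), (12.0, 1.0, 3.0), (12.0, 11.0, 3.0), (2.0, 11.0, 3.0)]

c_pre, c_intra = centroid(pre_mva), centroid(intra_mva)
shift = tuple(c_intra[i] - c_pre[i] for i in range(3))
aligned = translate(pre_mva, shift)
print(rms_distance(aligned, intra_mva))  # -> 0.0 for this purely translated example
```

In practice the residual after centroid alignment is non-zero, and it is this residual that the downhill simplex optimizer then reduces.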

Figure 4. a) AtamaiViewer screen-shot showing real-time intra-operative TEE data augmented with anatomical context provided by the pre-operative heart model using the feature-based registration. [Color version available online.]


As shown, the modeling component of our surgical platform not only offers the feasibility of generating sufficiently accurate models of the subject's heart prior to the procedure, but also facilitates their integration within the intra-procedure environment by means of image registration, leading to an accurate virtual environment for procedure planning and guidance.

Surgical tool tracking

For all off-pump intracardiac procedures, it is crucial for the surgeon to know the position and orientation of the surgical tools with respect to the target at all times during the intervention. We achieve this by using a magnetic tracking system (MTS) — the NDI Aurora® (Northern Digital Inc., Waterloo, Ontario, Canada). This system consists of three components: a control unit, a magnetic field generator, and miniature 5- or 6-degree-of-freedom sensors fixed to the ultrasound transducer and surgical tools. The field generator can either be mounted above the patient or embedded in the OR mattress.

As an example, for a typical mitral valve implantation procedure, three virtual objects are required: one for the TEE probe, a second for the valve-guiding tool, and a third for the valve-fastening device. Prior to the surgery, we create virtual models of both surgical tools and the TEE transducer using VTK. Figure 5 illustrates a prosthetic valve attached to the valve-insertion tool, accompanied by its virtual representation. Similar virtual models were designed for the US probe and the valve-fastening tool. For the model of the US probe, the image plane automatically adjusts to changes in rotation angle and depth as they are manipulated by the sonographer in the OR (Figure 5c).
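Locating a tracked tool tip amounts to chaining homogeneous transforms: the tracker reports the sensor pose in world coordinates, and a fixed calibration transform maps the sensor frame to the tool tip. The following sketch uses hypothetical, purely translational poses for clarity; real poses also carry rotations.

```python
# Illustrative transform chaining for a magnetically tracked tool.
# All values are hypothetical; real 6-DOF poses include rotation.

def matmul4(a, b):
    """Multiply two 4x4 homogeneous transform matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """Homogeneous matrix for a pure translation."""
    return [[1, 0, 0, x],
            [0, 1, 0, y],
            [0, 0, 1, z],
            [0, 0, 0, 1]]

# Pose of the 6-DOF sensor in tracker (world) space, as reported by the MTS
world_T_sensor = translation(100.0, 50.0, 20.0)
# Calibration: the tool tip expressed in the sensor's coordinate frame
sensor_T_tip = translation(0.0, 0.0, 150.0)

# Chained transform gives the tip pose in world coordinates
world_T_tip = matmul4(world_T_sensor, sensor_T_tip)
tip = [row[3] for row in world_T_tip[:3]]
print(tip)  # -> [100.0, 50.0, 170.0]
```

The same chaining, with one more calibration matrix mapping image pixels to the probe sensor, places each US image plane into the shared virtual workspace.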

Figure 5. a) Physical representation of a mechanical mitral valve attached to the valve-insertion tool. b) Virtual representation of the valve and valve-insertion tool. c) Virtual surgical environment employed during a preliminary in vivo porcine study, including the pre-operative heart model, intra-operative TEE image, tracked TEE probe, and surgical tools. [Color version available online.]


The AtamaiViewer software platform provides utilities for calibrating both tracked surgical instruments and US transducers. The tracked US probe is calibrated using either a Z-string device Citation[26] or a phantom-less calibration method, as described in the next section. The valve-guiding tool is calibrated by first defining a transform for the tool tip position and orientation before the valve is attached. A similar procedure is used to calibrate the valve-fastening tool. In addition, a reference MTS sensor is attached to a stationary region of the subject to avoid the need to recalibrate the “world” coordinate system in case of accidental motion of the subject or field generator.

System evaluation and assessment

Prior to implementing these applications in the clinic, we performed a series of tests to evaluate the surgical guidance system. In the section on pre- to intra-operative registration above, we estimated the accuracy of the pre-operative models in predicting the surgical target, as well as the accuracy of the feature-based registration method. Next, we evaluated two different calibration methods for the tracked TEE transducer, then performed a navigation accuracy assessment, and finally conducted a pre-clinical in vitro evaluation of the interventional system in the context of a mitral valve implantation procedure.

Accuracy assessment

US calibration accuracy

The first set of experiments was designed to evaluate and compare two commonly available calibration methods for the tracked US transducer: the Z-string phantom-based calibration and the phantom-less calibration Citation[27]. In addition, we characterized the uncertainty of the system for three US transducers commonly employed in our laboratory (the TTE, adult TEE, and pediatric TEE probes), with each of the calibration methods Citation[28]. To achieve this goal, we attempted to localize a point source (a 1.6-mm Teflon sphere) in the US image and measure its position. We assessed accuracy by computing the error between the measured position of the point source and its known position as determined prior to the experiment. A typical image of the target on top of a layer of US-energy-absorbing material (Sorbothane®) is shown in Figure 6. The point-source localization error was estimated as the root-mean-square (RMS) of the distance between the measured and true positions of the Teflon sphere. Table II summarizes these results.
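The error metric itself is straightforward; as a hedged sketch, the RMS localization error over repeated measurements can be computed as below. The bead position and measurements here are invented for illustration, not the reported data.

```python
import math

def rms_error(measured, true):
    """RMS of 3D distances between each measured position and the true one."""
    sq_dists = [sum((m[i] - true[i]) ** 2 for i in range(3)) for m in measured]
    return math.sqrt(sum(sq_dists) / len(sq_dists))

# Hypothetical known bead position (mm) and repeated tracked-US measurements
true_pos = (25.0, 40.0, 60.0)
measured = [(25.5, 40.2, 59.6),
            (24.6, 39.7, 60.4),
            (25.2, 40.1, 60.3)]

print(round(rms_error(measured, true_pos), 3))  # -> 0.577
```

The same metric applies unchanged whichever calibration method (Z-string or phantom-less) produced the image-to-tracker transform under test.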

Figure 6. Ultrasound images of a point-source Teflon sphere (approximately 1.6 mm in diameter). The point source is located on top of a layer of Sorbothane and is imaged upside-down using each of the three US transducers (TTE, adult TEE, and pediatric TEE).


Table II.  Point-source localization accuracy using three different US probes and two different calibration methods. The root-mean-square (RMS) of the distance errors is provided.

Surgical navigation accuracy

In the second set of experiments, the accuracy of the ultrasound-enhanced virtual reality system was assessed from the surgeon's point of view. Three surgical guidance modalities were tested: (i) 2D US image guidance only (“US only”); (ii) virtual reality guidance with tracked surgical tools (“VR only”); and (iii) 2D US image guidance augmented by virtual reality (“VR + US”). The user was asked to guide a probe tip onto a small target within a cardiac phantom. The only information available to the user was the US image, the VR interface, and the VR-augmented US interface for the “US only”, “VR only” and “VR + US” guidance modalities, respectively. The results are summarized in Table III.

Table III.  Single-point localization accuracy assessment. Three users were asked to localize a point three times with each of the three modalities; N = 9 for each modality. An ANOVA showed that only the guidance modality had a significant effect on accuracy.

This experiment showed that our VR-enhanced US image guidance system improved upon the widely used technique of intra-procedure guidance that currently relies on 2D US imaging alone. While both the “VR only” and “VR + US” modalities performed better than the “US only” approach, the “VR + US” guidance method appeared to be slightly less accurate than the “VR only” method. The additional errors with “VR + US” were attributed to the quality of the US images acquired with the trans-esophageal probe. It is anticipated that “VR + US” will provide a superior solution when used in a dynamic environment such as an actual in vivo intracardiac procedure Citation[29].

In addition, “VR + US” offers a safety feature: the surgeon can use real-time imaging to precisely position devices even if the initial patient-to-image registration has been degraded by organ motion or deformation during the procedure. A further advantage of the VR-enhanced US guidance approach lies in its complementary navigation and positioning capabilities: the virtual reality component assists the user primarily with spatial orientation and navigation towards the surgical target, while the US imaging component provides the critical real-time information needed for detailed on-target manipulations.

Pre-clinical evaluation: mitral valve implantation

The following two sets of experiments were designed to assess the success with which an experienced surgeon was able to perform the procedure using our VR-enhanced interventional system, and to compare it to the outcome of the same procedure performed solely under US image guidance. These pre-clinical studies mimicked a mitral valve implantation procedure. The surgical task consisted of guiding a prosthetic valve mounted on the valve-insertion tool to the target (the mitral annulus), positioning it correctly, and securing it in place using a valve-fastening tool Citation[30].

The first set of experiments was performed on a cardiac intervention phantom (Figure 7) that was constructed in our laboratory and is similar in concept to that described by Rettmann et al. Citation[31]. This phantom facilitates the testing of new tools, surgical techniques, and skills in the laboratory under conditions that closely mimic real clinical settings, reducing the reliance on animal studies. Cardiac tissue is mimicked by polyvinyl alcohol-cryogel (PVA-C) membranes Citation[32] supported by plexiglass plates. In addition, a tube descends into the lower part of the phantom to simulate the esophagus, facilitating the use of TEE probes.

Figure 7. Plexiglass cardiac intervention phantom showing the “esophagus” - the black tube where the TEE probe is inserted to simulate the intra-operative application - and PVA-C membranes representing heart wall tissue.


To confirm the limitations of US guidance and emphasize the benefits of our virtual environment for navigation while mimicking an in vivo setting, the second set of experiments used excised porcine hearts in place of the PVA-C simulated cardiac tissue. The heart was mounted inside the cardiac phantom so as to simulate its in situ orientation during an actual cardiac intervention. Intracardiac access was achieved using the UCI, which accommodated the valve-insertion tool, valve-fastening tool, and endoscopic camera for intracardiac assessment in the absence of direct vision.

The valve implantation tasks were first attempted under US guidance alone, followed by guidance using the VR-enhanced system. The procedure was performed by a surgeon and an echocardiographer, both with extensive experience of mitral valve interventions. All experiments were blinded, with the results recorded by an endoscope directed at the target for retrospective analysis. Intra-operative real-time 2D US images were acquired using the TEE probe lowered into the cardiac phantom through the “esophageal tube”. The surgical target was represented by a 2-cm-diameter hole in the PVA-C membrane for the trials performed on the cardiac intervention phantom, and by the native mitral annulus for the studies performed on ex vivo porcine hearts.

US image guidance

Under US guidance alone, it was very difficult to identify both the target and the surgical instruments, and to determine their exact position and orientation with respect to one another. For valve placement in the phantom experiments, positioning that seemed correct typically proved to be several millimeters off-target in both translation and angulation (Figure 8a). The “US-only” procedure was lengthy and consistently unsuccessful; four clips were fired using the laparoscopic clip-applier, but none succeeded in fastening the valve skirt to the underlying membrane.

Figure 8. a) A valve poorly implanted under US guidance in the cardiac phantom. Arrows indicate the location of the fasteners. b) Endoscopic view showing valve placement under US guidance in an excised heart. c) Endoscopic view showing a fastener incorrectly inserted under US guidance in an excised heart.


Similarly, during the trials performed on the ex vivo porcine hearts, the 2D US images were misleading even to the experienced surgeons, causing them to rely on their previous experience in the clinic. After successive trial-and-error attempts, when it was determined that the valve was in place, the endoscopic camera was employed to assess the position of the valve with respect to the anatomical target (Figure 8b). A subsequent endoscopic assessment, followed by direct observation, revealed that during valve fastening only one pin was applied in the correct location, and this pin had an incorrect angulation, causing a radial puncture of the ventricle wall (Figure 8c).

VR-enhanced US guidance

In addition to the virtual representations of the tracked TEE transducer and surgical instruments (valve-guiding and valve-fastening devices), our virtual environment also integrated pre-operative “anatomy”, which consisted of a CT image of the cardiac phantom acquired prior to the experiments and registered to the physical phantom. A virtual target (a 2-cm diameter spline) was interactively reconstructed from the 2D US images by sweeping the tracked US fan across the “mitral annulus” and displayed within the volume. The “surgical environment” was displayed stereoscopically using head-mounted display units, providing the surgeons with a better spatial perception of the virtual space.
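Reconstructing the virtual target from the tracked US fan amounts to mapping points identified on the 2D image plane into the 3D coordinate frame of the virtual environment by chaining the probe calibration and tracking transforms. The sketch below illustrates this chain; the function and transform names are hypothetical (not taken from the authors' software), and the calibration matrix is assumed to already include the pixel-to-millimeter scaling.

```python
import numpy as np

def image_to_world(pts_px, T_tracker_probe, T_probe_image):
    """Map 2D points on the US image plane into 3D tracker ("world")
    coordinates. Both transforms are 4x4 homogeneous matrices:
    T_probe_image is the probe calibration (image -> probe, including
    pixel scaling), T_tracker_probe is the real-time tracking pose."""
    pts_px = np.asarray(pts_px, dtype=float)
    n = pts_px.shape[0]
    # Lift 2D image points to homogeneous 3D coordinates (z = 0 on plane).
    pts_h = np.column_stack([pts_px, np.zeros(n), np.ones(n)])
    world_h = (T_tracker_probe @ T_probe_image @ pts_h.T).T
    return world_h[:, :3]

# Example: identity calibration, probe translated 10 mm along x.
T_cal = np.eye(4)
T_trk = np.eye(4)
T_trk[0, 3] = 10.0
pts_world = image_to_world([[0.0, 0.0], [5.0, 2.0]], T_trk, T_cal)
print(pts_world)
```

Points collected this way across several sweep positions can then be joined into the closed annular spline used as the virtual target.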

After displaying the 2D echo images within the context of the 3D “pre-operative anatomy”, navigating the valve towards the target became almost trivial. The surgeon guided the valve to the target with very little difficulty, relying mainly on the virtual environment. Two-dimensional US guidance was employed only to refine the position of the valve and to confirm its final placement on target. The procedure was completed by securing the valve in place using a valve-fastening device. After its location with respect to the valve had been determined, the fastening tool was guided towards the target using the virtual models. Its positioning on target was refined using real-time 2D US, and the clips were then applied at multiple locations around the valve skirt (Figure 9a).

Figure 9. a) Correct valve implantation under VR-enhanced US guidance in the cardiac phantom. b) Endoscopic image showing the appropriate positioning (also confirmed by the clear view of the chordae tendineae) of the valve onto the native mitral annulus of an excised heart using US guidance augmented by the VR environment. c) Post-procedure image showing the correct location of the fasteners around the valve achieved under US-VR guidance in an excised heart. [Color version available online.]

Guiding the valve to the mitral annulus in the excised hearts using the hybrid US/VR system was also a relatively simple task, and again, once on target, the valve positioning was fine-tuned according to the real-time US images. The success of the procedure was confirmed by an endoscopic evaluation (Figure 9b). Furthermore, the surgeon found it much easier to navigate the tip of the valve-fastening tool to the final target and then refine its position based on the TEE images. Four pins were used to fasten the valve to the underlying tissue; according to the post-procedure assessment, three of them securely attached the valve (Figure 9c), while the fourth pin, although properly located, did not fully penetrate the mitral annulus tissue.

Discussion

This paper presents the global architecture of our surgical platform, together with its various integrated components, resulting in a complete virtual surgical environment that surgeons can use to plan and guide procedures in the absence of direct vision. Although this platform can be used to guide a wide variety of interventions, we have described it in the context of cardiac procedures. These applications emphasize its advantages in assisting with both navigation and surgical instrument manipulation inside the beating heart.

Two-dimensional TEE plays a significant role in our interventional system as it provides the operator with real-time intra-procedure information. Nevertheless, these 2D images are ineffective for identifying the position and orientation of surgical tools with respect to the target. Although 3D US may provide images that are easier to interpret, most of these transducers are too large to fit within the esophagus and they only provide a narrow field of view. To “zoom out” away from the surgical target region and see the “bigger picture”, we augmented 2D intra-procedure imaging with pre-operative models of the heart that provide anatomical context and better spatial orientation. These models can accurately predict the location of the surgical target and can be easily fused with the intraoperative images using a feature-based registration technique. Ultimately, the VR environment was complemented by integrating virtual representations of the surgical instruments tracked in real time during the intervention, generating a reliable system for intra-procedure guidance.
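One common way to realize the feature-based registration mentioned above is a least-squares rigid fit between corresponding landmark sets from the pre-operative model and the intra-operative images. The sketch below uses the standard SVD solution (Arun et al.) under the assumption of known point correspondences; it is an illustrative stand-in, not the authors' implementation.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid registration (SVD method of Arun et al.):
    find rotation R and translation t minimizing sum ||R src_i + t - dst_i||^2."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Usage: recover a known rotation/translation from noiseless landmarks.
rng = np.random.default_rng(0)
src = rng.normal(size=(6, 3))
theta = np.pi / 7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([4.0, -2.0, 1.5])
dst = src @ R_true.T + t_true
R, t = rigid_register(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With noisy intra-operative features, the same closed-form fit remains the optimal rigid estimate in the least-squares sense, which is why it is a common building block in image-guidance registration pipelines.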

To better mimic the environment specific to a real procedure, we conducted studies on the cardiac phantom in the OR in addition to the experiments performed in the laboratory. These investigations allowed us to identify some of the limitations imposed by the clinical setting which may be encountered during “live” interventions. Recently, we performed several intracardiac interventions on porcine subjects, and these studies confirmed some of the challenges that we expected to face during translation into the clinic.

A “busy” environment is not unusual in an OR, raising concern about using a magnetic system for surgical tool tracking as opposed to an optical system. Given that the accuracy of a magnetic tracking system decreases away from the magnetic field emitter, it is imperative that the field generator be placed within a range of 20–30 cm from the most probable tool location. As this setup might obstruct the regular “task flow” of the clinical staff, we adopted the approach of embedding the magnetic field generator within the mattress of the operating table, underneath the surgical field. Another significant aspect related to the use of a magnetic tracking system in the OR is the need to avoid placing any ferromagnetic objects in close proximity to the magnetic field emitter Citation[33]. This requirement implies that all surgical instruments must be manufactured from non-ferromagnetic materials (e.g., high-grade stainless steel or plastic) to minimize tracking error.

Additional constraints are imposed by the size of the surgical instruments used in the intervention. In our applications, access to the intracardiac cavities is through the left atrial appendage, using the UCI. A potential challenge related to instrument size lies in the surgeon's dexterity in maneuvering the valve-insertion and valve-fastening tools not only inside the UCI but also within the heart itself. As a concrete example, a slightly larger prosthetic mitral valve may be difficult to insert through the small orifice between the left atrial appendage and the left atrium. Moreover, the valve-fastening device should ideally be situated above the prosthetic valve at all times, as it is used to attach the valve skirt to the mitral annulus. Currently, we are working on optimizing tool design and on constructing devices suitable for delivering the required treatment to the target, while still being compatible with the anatomy, standard imaging modalities, and tracking systems. The valve-fastening tool currently employed will likely be the first surgical tool to be replaced by a more suitable fixation device, such as the one suggested by Downing et al. Citation[34]. In addition, candidates for valve prostheses include collapsible devices, as currently proposed for trans-catheter deployment, as well as stented valves, which significantly facilitate the valve-fastening process.

As our technique constitutes a novel approach to surgery, it is important that the information be presented to the surgeons in a familiar manner, while minimizing interference with the clinical staff and procedure workflow. The footprint of our system in the OR is limited to a computer workstation located several meters from the operating table, plus the magnetic tracking system (MTS), whose field emitter is embedded within the OR mattress. Cables are needed to connect the various components: the tracked tools and field generator to the MTS, the ECG and video capture (from the US machine) to the computer, and the computer to either an overhead monitor or to head-mounted display units. To date we have not experienced any interference with the clinical workflow of procedures in the OR, and future advances in wireless communication will enable a reduction in the amount of wiring required to connect the devices involved.

In addition, several approaches have been considered regarding the most appropriate means of delivering the multi-modality data to the physician: the image can be displayed on a simple computer monitor, on a flat screen overlaid onto the patient and located above the operating field, on a stereoscopic screen that enables 3D visualization, or via head-mounted display units which allow the surgeons to directly “navigate” within a virtual volume, as suggested by Vogt et al. Citation[35] or Birkfellner et al. Citation[36]. These different alternatives will be explored in our future work, but to date our collaborating surgeon has reported great comfort using both overhead monitors and head-mounted display units for VR visualization.

Conclusions

We believe that the VR-based surgical guidance system is a key element in improving the performance of beating-heart cardiac interventions. Augmented with US imaging for real-time guidance of on-target manipulations, and with pre-operative cardiac models that provide anatomical context and spatial orientation for navigation to the target, our system provides extensive support for target identification, intracardiac route planning, and guidance of direct therapeutic interventions. This initial work has demonstrated the tremendous potential of multi-modality imaging combined with surgical tool tracking for providing the capability to both visualize and assess the surgical intervention in a manner that will ultimately be superior to direct vision, given the latter's inherent limitations.

Acknowledgments

The authors thank Dr. Gérard Guiraudon, Dr. Doug Jones, Dr. Daniel Bainbridge and Dr. Stephen Little for their clinical collaboration; Dr. David Gobbi, Dr. Marcin Wierzbicki and Dr. Usaf Aladl for software development and valuable discussions; and Louis Estey for tool design and manufacturing. In addition, we acknowledge funding for this work from the Ontario Research and Development Challenge Fund, the Ontario Innovation Trust, the Canadian Foundation for Innovation, the Canadian Institutes of Health Research, and the Natural Sciences and Engineering Research Council.

References

  • Cutler EC, Levine SA. Cardiotomy and valvulotomy for mitral stenosis. Boston Med Surg J 1923; 188: 1022–1027
  • Watkins ER, Gross RE. Experiences with surgical repair of atrial septal defects. J Thorac Cardiovasc Surg 1955; 20: 469–491
  • Allen DS, Graham EA. Intracardiac surgery - a new method. JAMA 1922; 79: 1028–1030
  • Vahanian A, Acar C. Percutaneous valve procedures: what is the future?. Curr Opin Cardiol 2005; 20: 100–106
  • Guiraudon G, Jones D, Bainbridge D, Peters T. Mitral valve implantation using off-pump closed beating intracardiac surgery: a feasibility study. Interact Cardiovasc Thorac Surg 2007; 6: 603–607
  • Hastenteufel M, Yang S, Christoph C, Vetter M, Meinzer H, Wolf I. Image-based guidance for minimally invasive surgical atrial fibrillation ablation. Int J Med Robotics Comput Assist Surg 2006; 2: 60–69
  • Suematsu Y, Marx GR, Stoll JA, Dupont PE, Cleveland RO, Howe RD, Triedman JK, Mihaljevic T, Mora BN, Savord BJ, Salgo IS, del Nido PJ. Three-dimensional echo-guided beating-heart surgery without cardiopulmonary bypass: a feasibility study. J Thorac Cardiovasc Surg 2004; 128: 579–587
  • von Segesser L, Tozzi P, Augstburger M, Corno A. Working heart offpump cardiac repair (OPCARE) – the next step in robotic surgery?. Interact Cardiovasc Thorac Surg 2003; 2: 120–124
  • McVeigh ER, Guttman MA, Lederman RJ, Li M, Kocatruk O, Hunt T, Kozlov S, Horvath KA. Real-time interactive MRI-guided cardiac surgery: Aortic valve replacement using a direct apical approach. Magn Reson Med 2006; 56: 958–964
  • Downing SW, Edmunds LH. Release of vasoactive substances during cardiopulmonary bypass. Ann Thorac Surg 1992; 54: 1236–1243
  • Dong J, Dickfeld T, Dalal D, Cheema A, Vasamreddy C, Henrikson C, Marine J, Halperin H, Berger R, Lima J, Bluemke D, Calkins H. Initial experience in the use of integrated electroanatomic mapping with three-dimensional MR/CT images to guide catheter ablation of atrial fibrillation. J Cardiovasc Electrophysiol 2006; 17: 459–466
  • DeBuck S, Maes A, Ecto J, Bogaert J, Dymarkowski S, Heidbüchel H, Suetens P. An augmented reality system for patient-specific guidance of cardiac catheter ablation procedures. IEEE Trans Med Imaging 2005; 24: 1512–1524
  • Rhode K, Hill D, Edwards P, Hipwell J, Rueckert D, Sanchez-Ortiz G, Hegde S, Rahunathan V, Razavi R. Registration and tracking to integrate X-ray and MR images in an XMR facility. IEEE Trans Med Imaging 2003; 22: 1369–1378
  • Wierzbicki M, Drangova M, Guiraudon GM, Peters TM. Four-dimensional modeling of the heart for image guidance of minimally invasive cardiac surgeries. In: Galloway RL Jr, editor. SPIE Medical Imaging 2004: Visualization and Image-Guided Procedures. Proceedings of SPIE 2004; 5367: 302–311
  • Moore J, Guiraudon GM, Jones DL, Hill N, Wiles AD, Bainbridge D, Wedlake C, Peters TM. 2D ultrasound augmented by virtual tools for guidance of interventional procedures. Proceedings of Medicine Meets Virtual Reality 15. Studies in Health Technology and Informatics 125, JD Westwood, RS Haluck, HM Hoffman, GT Mogel, R Phillips, RA Robb, K Vosburgh. IOS Press, Amsterdam 2007; 322–327
  • Wilson K, Guiraudon G, Jones D, Peters T. 4D shape registration for dynamic electrophysiological cardiac mapping. In: Larsen R, Nielsen M, Sporring J, editors. Proceedings of the 9th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2006), Copenhagen, Denmark, October 2006. Lecture Notes in Computer Science 4191. Berlin: Springer; 2006. Part II, pp 520–526
  • Marmurek J, Wedlake C, Pardasani U, Eagleson R, Peters TM. Image-guided laser projection for port placement in minimally invasive surgery. Proceedings of Medicine Meets Virtual Reality 14. Studies in Health Technology and Informatics 119. IOS Press, Amsterdam 2006; 367–372
  • Huang X, Hill N, Ren J, Guiraudon G, Peters T. Intra-cardiac 2D US to 3D CT image registration. SPIE Medical Imaging 2007: Visualization and Image-Guided Procedures. Proceedings of SPIE 2007;6509:65092E-8, KR Cleary, MI Miga
  • Moore J, Drangova M, Wierzbicki M, Barron J, Peters TM. A high-resolution dynamic heart model based on averaged MRI data. In: Ellis RE, Peters TM, editors. Proceedings of the 6th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2003), Montreal, Canada, November 2003. Lecture Notes in Computer Science 2878. Berlin: Springer; 2003. Part I, pp 549–555
  • Wierzbicki M. Subject-specific models of the heart from 4D images. PhD dissertation. University of Western Ontario, Canada, 2006
  • Lorenzo-Valdès M, Sanchez-Ortiz GI, Mohiaddin D, Rueckert D. Atlas-based segmentation and tracking of 3D cardiac MR images using nonrigid registration. In: Dohi T, Kikinis R, editors. Proceedings of the 5th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2002), Tokyo, Japan, September 2002. Lecture Notes in Computer Science 2488. Berlin: Springer; 2002. Part I, pp 642–650
  • Linte CA, Wierzbicki M, Moore J, Wiles AD, Wedlake C, Guiraudon GM, Jones DL, Bainbridge D, Peters TM. From preoperative cardiac modeling to intra-operative virtual environments for surgical guidance: An in vivo study. SPIE Medical Imaging 2008: Visualization, Image-Guided Procedures and Modeling, Proceedings of SPIE 2008;6918 (in press)
  • Linte CA, Wierzbicki M, Moore J, Guiraudon GM, Little SH, Peters TM. Towards subject-specific models of the dynamic heart for mitral valve surgery. In: Ayache N, Ourselin S, Maeder A, editors. Proceedings of the 10th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2007), Brisbane, Australia, October 29-November 2, 2007. Lecture Notes in Computer Science 4792. Berlin: Springer; 2007. Part II, pp 94–101
  • Wierzbicki M, Drangova M, Guiraudon GM, Peters TM. Validation of dynamic heart models obtained using non-linear registration for virtual reality training, planning, and guidance of minimally invasive cardiac surgeries. Med Image Anal 2004; 8: 387–401
  • Linte CA, Wierzbicki M, Moore J, Guiraudon GM, Jones DL, Peters TM. On enhancing planning and navigation of beating-heart mitral valve surgery using pre-operative cardiac models. Proceedings of the 29th Annual Conference of the IEEE Engineering in Medicine and Biology Society. August, 2007, 475–478
  • Gobbi DG, Comeau RM, Peters TM. Ultrasound probe tracking for real-time ultrasound/MRI overlay and visualization of brain shift. Proceedings of the Second International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI ’99), Cambridge, UK, September 1999. Lecture Notes in Computer Science 1679, Berlin, 1999, C Taylor, A Colchester. Springer, 920–927
  • Khamene A, Sauer F (2005) A novel phantom-less spatial and temporal ultrasound calibration method. Proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Surgery (MICCAI 2005), Palm Springs, CA, October, 2005, JS Duncan, G Gerig. Springer, Berlin, 65–72, Part II. Lecture Notes in Computer Science 3750
  • Wiles AD, Linte CA, Moore J, Wedlake C, Peters TM. Object identification accuracy under ultrasound enhanced virtual reality for minimally invasive cardiac surgery. SPIE Medical Imaging 2008: Visualization, Image-Guided Procedures and Modeling, Proceedings of SPIE 2008;6918 (in press)
  • Wiles AD, Guiraudon GM, Moore J, Wedlake C, Linte CA, Jones DL, Bainbridge D, Peters TM. Navigation accuracy for an intracardiac procedure using virtual reality-enhanced ultrasound. SPIE Medical Imaging 2007: Visualization and Image-Guided Procedures, KR Cleary, MI Miga, Proceedings of SPIE 2007;6509:61410W-10
  • Linte CA, Wiles AD, Hill N, Moore J, Wedlake C, Guiraudon GM, Jones DL, Bainbridge D, Peters TM. An augmented reality environment for image-guidance of off-pump mitral valve implantation. SPIE Medical Imaging 2007: Visualization and Image-Guided Procedures, KR Cleary, MI Miga, Proceedings of SPIE 2007;6509:65090N–12
  • Rettmann ME, Holmes DR, Su Y, Cameron BM, Camp JJ, Packer DL, Robb RA. An integrated system for real-time image-guided cardiac catheter ablation. Proceedings of Medicine Meets Virtual Reality 14. Studies in Health Technology and Informatics 119. IOS Press, Amsterdam 2006; 455–460
  • Surry KJM, Austin HJB, Fenster A, Peters TM. Poly(vinyl alcohol) cryogel phantoms for use in ultrasound and MR imaging. Phys Med Biol 2004; 49: 5529–5546
  • Nafis C, Jensen V, Beauregard L, Anderson P. Method for estimating dynamic EM tracking accuracy of surgical navigation tools. In: SPIE Medical Imaging 2006: Visualization and Image-Guided Procedures, Proceedings of SPIE 2006;6141:61410K–16
  • Downing SW, Herzog WA, McLaughlin JS, Gilbert TP. Beating-heart mitral valve surgery: Preliminary model and methodology. J Thorac Cardiovasc Surg 2001; 123: 1141–1146
  • Vogt S, Khamene A, Niemann H, Sauer F. An AR system with intuitive user interface for manipulation and visualization of 3D medical data. Proceedings of Medicine Meets Virtual Reality 12. Studies in Health Technology and Informatics 98. IOS Press, Amsterdam 2004; 397–403
  • Birkfellner W, Figl M, Matula C, Hummel J, Hanel R, Imhof H, Wanschitz F, Wagner A, Watzinger F, Bergmann H. Computer-enhanced stereoscopic vision in a head-mounted operating binocular. Phys Med Biol 2003; 48: 49–57
