
Endoscope-based hybrid navigation system for minimally invasive ventral spine surgeries

Pages 351-356 | Received 28 Apr 2005, Accepted 30 May 2005, Published online: 06 Jan 2010

Abstract

The availability of high-resolution, magnified, and relatively noise-free endoscopic images in a small workspace, 4–10 cm from the endoscope tip, opens up the possibility of using the endoscope as a tracking tool. We are developing a hybrid navigation system in which image-analysis-based 2D–3D tracking is combined with optoelectronic tracking (Optotrak®) for computer-assisted navigation in laparoscopic ventral spine surgeries. Initial results are encouraging and confirm the ability of the endoscope to serve as a tracking tool in surgical navigation where sub-millimetric accuracy is mandatory.

Introduction

Surgical procedures on lumbar and thoracic vertebrae are quite common following trauma or degenerative herniation of a disk Citation[1]. To treat these disorders, it is argued that endoscope-based minimally invasive surgeries are superior to their invasive counterparts, owing to their decreased access morbidity and cost-effectiveness Citation[2–4]. However, the 2D visualization of the operative site, minimal accessibility, and reduced dexterity in minimally invasive surgeries actually increase the risk of injuring the vasculature or the spinal cord with its emanating nerves, and result in inaccurate positioning of prosthetic devices Citation[5–7]. Opting for image-guided navigation during these surgeries will counter these inadequacies and improve safety and accuracy to a significant degree. Computer-assisted navigation during posterior spine surgery has proven to increase safety and help position prosthetic devices with significantly improved accuracy Citation[8].

Computer-assisted navigation has been tried during minimally invasive surgeries on anterior portions of thoracic/lumbar vertebrae Citation[9]. The main difficulty lies in rigidly fixing the dynamic reference base (DRB) to these vertebrae. To do this, either a long stylus is used, allowing it to protrude from the opposite abdominal wall, or the DRB is fixed to the iliac crest and stringent immobilization of the patient is ensured. These methods are susceptible to instability and are hence inaccurate. Therefore, there is a requirement for a new kind of tracking system to provide effective navigation during such surgeries. The availability of high-resolution, magnified, and relatively noise-free endoscopic images in a small workspace opens up the possibility of using the endoscope as a tracking tool. Instead of an optoelectronically trackable DRB, an artificial fiducial marker is attached to the vertebra to be operated on, and the marker's position and orientation are ascertained by means of image analysis. The endoscope is fitted with an optoelectronically trackable DRB to establish a global coordinate system for instrument overlay.

Proposed hybrid navigation system

Figure 1 depicts the proposed system along with the various coordinate systems involved. The endoscope is fitted with an optoelectronically trackable marker shield, and its (camera) coordinate system is registered in the Optotrak coordinate system, i.e., the transformation matrix T_CE is obtained. The vertebra to be operated on is registered in the coordinate system of the marker that is firmly fixed to it. In every image frame, T_MC is obtained by means of image analysis, as described in the subsequent section. Solving for T_EO is trivial, since the Optotrak tracks the endoscope's marker shield directly. Equations (1)–(3) depict the transformation of a point P (represented as a 4 × 1 homogeneous coordinate vector) into the different coordinate systems using T, which combines the rotation matrix R (3 × 3) and the translation vector T (3 × 1) into a 4 × 4 homogeneous coordinate transformation matrix. Subscripts denote the coordinate system of the point being transformed.
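
The displayed equations were lost in extraction; a plausible reconstruction from the definitions above (the chain marker → camera → endoscope shield → Optotrak) is:

```latex
% Reconstruction of Equations (1)-(3) from the surrounding definitions:
% each T stacks a rotation R (3x3) and a translation T (3x1) into a 4x4
% homogeneous matrix acting on 4x1 homogeneous points P.
\begin{align}
  P_C &= T_{MC}\,P_M \tag{1}\\
  P_E &= T_{CE}\,P_C \tag{2}\\
  P_O &= T_{EO}\,P_E \tag{3}
\end{align}
\[
  T = \begin{pmatrix} R_{3\times3} & T_{3\times1} \\ \mathbf{0}^{\top} & 1 \end{pmatrix}
\]
```

Composing the chain gives the pose of the marker (and thus the vertebra) in the global frame: T_MO = T_EO · T_CE · T_MC.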

Figure 1. Different coordinate systems in the proposed navigation system. (M: marker; C: camera (endoscope); E: optically trackable marker shield attached to the endoscope; O: Optotrak.)


Previous work

Obtaining accurate 3D information from a monocular video image is a difficult but not impossible task. Extensive work has been done in the field of augmented reality, in which the 3D position and orientation of a known marker in a video image are recovered by means of image analysis for the purpose of overlaying the video stream with virtual objects Citation[10], Citation[11]. ARToolKit is a freely available, widely used, and well-evaluated toolkit for this purpose. Its robustness and accuracy have been evaluated by various groups in setups using webcams and other conventional cameras. Kato et al. reported continuous tracking of an 80-mm marker at distances from 100 mm to 600 mm Citation[10]; the positional error was less than 5 mm below 300 mm, but increased drastically beyond that. Abawi et al. Citation[12] tested the accuracy of the algorithm using a webcam (Philips PCVC750K) and a 55-mm marker. Their results may be summarized as follows:

  • The systematic error of the estimated distance from the marker to the camera is low in the range from 20 to 70 cm and has a small standard deviation.

  • The systematic error of the angle estimation is small in the range from 30° to 40° and has a small standard deviation in the range from 40° to 85°.

Using a webcam (Logitech Quickcam) and a Pulnix camera (Pulnix PEC 3010, Japan), De Siebenthal and Langlotz Citation[13] evaluated the algorithm for application in a surgical simulator. The evaluation used three marker sizes (80, 60, and 40 mm), and the results show that the mean positional error decreases with increasing marker size. Piekarski et al. Citation[14] used the algorithm for 3D hand-position tracking in a mobile outdoor environment with a 20-mm marker and a PGR Firefly camera. They concluded that the algorithm was quite robust and accurate for detecting the 3D position of the marker, even in a dynamic outdoor environment, but was highly inaccurate (20–30° jitter) in its orientation estimates.

The accuracy of the algorithm depends on the quality and size of the image, the size of the marker, and the precision of the calibration procedure. Endoscopic images are of high quality with little noise, and the operative distance is 4–10 cm from the endoscope tip, where structures are magnified nearly 15 times. This motivated us to use the above-mentioned algorithm, with adequate modification, to recover 3D information from 2D images in an endoscope setup.

Methods

Calibration of the endoscope

The calibration module supplied with ARToolKit lacks precision, especially for an endoscope, and its distortion model does not include tangential distortion. We therefore used a Matlab-based toolbox Citation[15], considered the gold standard for this purpose. A checkerboard pattern with 2.5-mm squares was used; twenty images of the pattern were taken at various distances (up to 10 cm) and orientations relative to the optical axis. The average re-projection error was 0.5–0.6 pixels. The focal length, scale factors, and radial and tangential distortion coefficients were obtained simultaneously.
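
For illustration, here is a minimal sketch of this calibration step using OpenCV, which implements the same pinhole model with radial and tangential distortion as the cited Matlab toolbox. The 2.5-mm square size and the roughly 20 views follow the text; the 9 × 6 inner-corner pattern and the image paths are assumptions.

```python
import glob

import cv2
import numpy as np

SQUARE_MM = 2.5   # checkerboard square size from the text
PATTERN = (9, 6)  # inner corners per row/column (assumed)

# 3D corner positions in the board's own coordinate system (Z = 0 plane)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.png"):  # ~20 views, as in the text
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (5, 5), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K holds the focal lengths and principal point; dist holds the radial and
# tangential distortion coefficients, estimated simultaneously.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(f"RMS re-projection error: {rms:.2f} px")  # text reports 0.5-0.6 px
```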

Tracking an artificial fiducial in the endoscope image

ARToolKit modules were modified to render them suitable for an endoscope setup. Frames of the video stream were first undistorted, using fourth-degree radial and tangential distortion coefficients, and then fed into the ARToolKit algorithm, which yields the transformation matrix T_MC from the marker coordinate system to the camera coordinate system. The toolkit's own undistortion module was removed. A 20-mm marker was used for evaluation purposes.
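
ARToolKit itself is a C library, so the following Python/OpenCV sketch only illustrates the equivalent two-stage pipeline: undistort each frame first, then recover T_MC from the marker's four corners by 2D–3D pose estimation. Here cv2.solvePnP stands in for ARToolKit's pose recovery, and detect_marker_corners() is a hypothetical placeholder for its segmentation and corner detection.

```python
import cv2
import numpy as np

MARKER_MM = 20.0  # marker size used in the evaluation
# Marker corners in the marker's own coordinate system (Z = 0 plane)
OBJ_CORNERS = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                       np.float32) * (MARKER_MM / 2)

def marker_pose(frame, K, dist):
    """Return the 4x4 homogeneous transform T_MC (marker -> camera)."""
    # Undo radial and tangential distortion before any 2D-3D reasoning,
    # mirroring the removal of ARToolKit's own (weaker) undistortion step.
    undistorted = cv2.undistort(frame, K, dist)
    corners = detect_marker_corners(undistorted)  # hypothetical detector
    # The pixels are already undistorted, so zero distortion is passed here.
    ok, rvec, tvec = cv2.solvePnP(OBJ_CORNERS, corners, K, None)
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    T_MC = np.eye(4)
    T_MC[:3, :3], T_MC[:3, 3] = R, tvec.ravel()
    return T_MC
```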

Accuracy evaluation setup

To evaluate the robustness and accuracy of the endoscope-based tracking, an optoelectronically trackable marker shield was attached to the endoscope. Another marker shield was attached to a color-coded marker, and its four corners and center were registered for tracking by the Optotrak (Figure 2). The position and orientation of the color-coded marker were ascertained both by image analysis and by the Optotrak at various distances and orientations relative to the optical axis. The error was estimated relative to the Optotrak measurement, which is considered the gold standard. The focal length of the endoscope was kept constant during calibration and throughout the study. A zero-degree lens system was used, fixed firmly to the camera head.
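
Assuming both pose estimates are expressed in a common (Optotrak) frame, e.g., via T_MO = T_EO · T_CE · T_MC for the image-based estimate, the error measures implied by this setup might be computed as follows (a sketch, not the authors' code):

```python
import numpy as np

def pose_errors(T_img, T_ref):
    """Positional (mm) and rotational (deg) error between two 4x4 poses."""
    pos_err = np.linalg.norm(T_img[:3, 3] - T_ref[:3, 3])
    R_rel = T_img[:3, :3] @ T_ref[:3, :3].T  # relative rotation
    # Angle of the relative rotation: cos(theta) = (trace(R) - 1) / 2
    cos_t = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return pos_err, np.degrees(np.arccos(cos_t))
```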

Figure 2. Endoscope fitted with optoelectronically trackable marker shield. The marker is also fitted with a marker shield and its four corners and the center are registered for optoelectronic tracking.


Evaluation was done at various distances (4–8 cm) and orientations (0–50°) relative to the endoscope; the orientation was changed mainly by rotation around the Y-axis. The output from the endoscope is a PAL video signal consisting of even and odd fields captured 1/50th of a second apart. Evaluation was done using both full-frame and half-field images. Images were acquired with the marker static, so motion artefacts were avoided.
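
A minimal sketch of how the two half-fields can be separated from an interlaced PAL frame (the field order is device-dependent and assumed here):

```python
import numpy as np

def split_fields(frame: np.ndarray):
    """Split an interlaced H x W x 3 frame into its two half-fields."""
    even = frame[0::2]  # even scan lines: one field, half vertical resolution
    odd = frame[1::2]   # odd scan lines, captured 1/50 s later
    return even, odd
```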

Results

Robustness

Continuous tracking of the marker was possible from 3 to 10 cm. Tracking in full-frame mode was more robust than in half-field mode, especially above 8 cm; below that distance there was little difference. At closer distances, the ovoid reflection formed by the high-intensity illumination caused defective segmentation and hence tracking failure.

Accuracy

Mean positional error (of the marker center) was 0.48 and 0.57 mm, and mean rotational error was 0.93° and 1.16°, for the full-frame and half-field modes, respectively. The effect of the marker's orientation relative to the endoscope optical axis is shown in Figures 3 and 4. Accuracy increases with image size and is therefore higher in full-frame mode; enlarging the image is equivalent to increasing the size of the marker. Owing to flicker and blur in the interlaced image, and to a small amount of noise, there was jitter in the position and orientation values. To reduce this jitter, readings were averaged over 30 frames for each position and orientation, as sketched below. The standard deviation of the error, for both position and orientation, is lower for orientations between 15° and 30° relative to the optical axis; this is more evident in full-frame mode.
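
A sketch of such averaging over a window of frames, assuming 4 × 4 homogeneous poses of the static marker; translations are averaged arithmetically, and rotations with a proper rotation mean (SciPy's quaternion-based Rotation.mean()):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def average_poses(T_list):
    """Average a sequence of 4x4 poses of a static marker (e.g., 30 frames)."""
    t_mean = np.mean([T[:3, 3] for T in T_list], axis=0)
    R_mean = Rotation.from_matrix([T[:3, :3] for T in T_list]).mean()
    T_avg = np.eye(4)
    T_avg[:3, :3], T_avg[:3, 3] = R_mean.as_matrix(), t_mean
    return T_avg
```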

Figure 3. Positional (a) and rotational (b) error (vertical axis) in full-frame mode for a 2-cm marker at various distances from the endoscope tip. The orientation of the marker's Z-axis relative to the optical axis is shown on the horizontal axis.


Figure 4. Positional (a) and rotational (b) error (vertical axis) in half-field mode for a 2-cm marker at various distances from the endoscope tip. The orientation of the marker's Z-axis relative to the optical axis is shown on the horizontal axis.


Discussion

The results confirm the ability of the endoscope to serve as a tracking device. The proposed hybrid system can be successfully integrated into the currently available navigation systems with minimal modification. The color-coded artificial marker can be easily sterilized, occupies minimal space, and requires no major modification of the surgical procedure.

Some issues must be addressed before integration into navigation systems. Most currently available endoscope systems have either NTSC or PAL video output. An image grabbed from such an output is an interlacing of odd and even fields acquired at different times, which results in a highly distorted image even upon slight movement of the endoscope. Interlacing and noise also cause the jitter mentioned above. The results show that a half-field image can counter the problem of motion artefact to some extent, but not jitter. The latest endoscope models have progressive-scan cameras with digital video output that may minimize motion blurring and jitter. Rapidly changing illumination intensity in the endoscope's field of view has a significant effect on intensity-threshold-based segmentation, so an adaptive-threshold-based algorithm is needed. Automatic illumination adjustment in the latest models may also help resolve this problem.
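
One plausible direction for such an adaptive scheme, sketched here with OpenCV's locally weighted thresholding in place of ARToolKit's single global cut-off (the block size and offset are illustrative values):

```python
import cv2

def segment_marker(gray):
    """Binarize a grayscale frame robustly under uneven illumination."""
    # Each pixel is compared against the Gaussian-weighted mean of its own
    # 31 x 31 neighbourhood minus an offset of 5, rather than a global value.
    return cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 31, 5)
```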

Calibration and the present studies were performed with the focal length kept constant, but a surgeon may change it during the procedure to focus on the operative structures, and the changing focal length is difficult to ascertain in older endoscope systems. The new breed of endoscopes, such as the EndoEYE™ from Olympus, carries the image-acquisition chip at the tip, avoiding a cumbersome optical lens system; the focal length in these systems remains fixed, so implementing the algorithm there would be practical. Oblique-tipped (30°) laparoscopes are used in most of these surgeries, so the scope must additionally be calibrated with respect to rotation about its optical axis, as suggested by Yamaguchi et al. Citation[16].

Partial occlusion of the marker by surrounding viscera is an expected scenario during surgical procedures, and the algorithm must be modified to address this problem. Further studies will focus on resolving these shortcomings once an appropriate endoscope system has been selected, and on exploring the use of this system in other endoscope-based minimally invasive surgeries.

References

  1. Magerl F, Aebi M, Gertzbein SD, Harms J, Nazarian S. A comprehensive classification of thoracic and lumbar injuries. Eur Spine J 1994;3(4):184–201.
  2. Mahvi DM, Zdeblick TA. A prospective study of laparoscopic spinal fusion: technique and operative complications. Ann Surg 1996;224(1):85–90.
  3. Schultheiss M, Kinzl L, Claes L, Wilke HJ, Hartwig E. Minimally invasive ventral spondylodesis for thoracolumbar fracture treatment: surgical technique and first clinical outcome. Eur Spine J 2003;12:618–624.
  4. Beisse R, Potulski M, Bühren V. Endoscopic techniques for the management of spinal trauma. Eur J Trauma 2001;27(6):275–291.
  5. Zdeblick TA, David SM. A prospective comparison of surgical approach for anterior L4–L5 fusion: laparoscopic versus mini anterior lumbar interbody fusion. Spine 2000;25:2682–2687.
  6. Mahvi DM, Zdeblick TA. A prospective study of laparoscopic spinal fusion: technique and operative complications. Ann Surg 1996;224:85–90.
  7. Farooq N, Grevitt MP. 'Does size matter?' A comparison of balloon-assisted less-invasive vs conventional retroperitoneal approach for anterior lumbar interbody fusion. Eur Spine J 2004;13:639–644.
  8. Schlenzka D, Laine T, Lund T. Computer-assisted spine surgery. Eur Spine J 2000;9(Suppl 1):57–64.
  9. Rose S, Maier B, Marzi I. Computer-assisted, image-guided and minimal-invasive ventral stabilization of thoraco-lumbar spine fractures. In: Proceedings of the 3rd Annual Meeting of the International Society for Computer Assisted Orthopaedic Surgery (CAOS-International), Marbella, Spain, June 2003. pp 308–309.
  10. Kato H, Billinghurst M. Marker tracking and HMD calibration for a video-based augmented reality conferencing system. In: Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR '99), San Francisco, CA, 1999. pp 85–94.
  11. Molineros J, Sharma R. Real-time tracking of multiple objects using fiducials for augmented reality. Real-Time Imaging 2001;7:495–506.
  12. Abawi DF, Bienwald J, Dörner R. Accuracy in optical tracking with fiducial markers: an accuracy function for ARToolKit. In: Proceedings of the 3rd IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2004), Arlington, VA, November 2004. pp 260–261.
  13. De Siebenthal J, Langlotz F. Use of a new tracking system based on ARToolKit for a surgical simulator: accuracy test and overall evaluation. In: The First IEEE International Augmented Reality Toolkit Workshop, Darmstadt, Germany, September 2002.
  14. Piekarski W, Thomas BH. Using ARToolKit for 3D hand position tracking in mobile outdoor environments. In: The First IEEE International Augmented Reality Toolkit Workshop, Darmstadt, Germany, September 2002.
  15. Bouguet J-Y. Camera Calibration Toolbox for Matlab. http://www.vision.caltech.edu/bouguetj/calib_doc/
  16. Yamaguchi T, Nakamoto M, Sato Y, Nakajima Y, Konishi K, Hashizume M, Nishii T, Sugano N, Yoshikawa H, Yonenobu K, Tamura S. Camera model and calibration procedure for oblique-viewing endoscope. In: Ellis RE, Peters TM, editors. Proceedings of the 6th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2003), Montreal, Canada, November 2003. Lecture Notes in Computer Science, Vol. 2879, Part II. Berlin: Springer; pp 373–381.
