Biomedical Paper

Surgical navigation display system using volume rendering of intraoperatively scanned CT images

Pages 240-246 | Received 21 Feb 2006, Accepted 25 Jun 2006, Published online: 06 Jan 2010

Abstract

As operative procedures become more complicated, simply increasing the number of devices in the operating room will not facilitate such operations. It is necessary to consider the ergonomics of the operating environment, especially with regard to the provision of navigation data, the prevention of technical difficulties, and the comfort of the operating room staff. We have designed and built a data-fusion interface that enables volumetric Maximum Intensity Projection (MIP) image navigation using intra-operative mobile 3D-CT data in the OR. The 3D volumetric data reflecting a patient's inner structure is displayed directly on the monitor, superimposed on video images of the surgical field, using a 3D optical tracking system, a ceiling-mounted articulating monitor, and a small video camera mounted at the back of the monitor. The system performance and accuracy were validated experimentally. This system provides a novel interface for the surgeon, with volume rendering of intra-operatively scanned CT images as opposed to preoperative images.

Introduction

Recent developments in clinical engineering have resulted in many types of equipment, such as vital signs monitors, anesthesia apparatus, artificial respirators and electrical cautery instruments, becoming available in the operating room (OR). In their respective roles of collecting data such as intra-operative electrocardiograms, respiration rates, body temperature and blood pressure, such devices have become essential to the success of surgical procedures. In laparoscopic surgery in particular, crucial additional equipment such as high-intensity light sources, CO2 insufflators, monitors and video recorders present a problem in that they require additional space in the OR Citation[1]. In addition, imaging devices such as those using plain X-rays or ultrasound are sometimes employed during operations. More recently, surgical navigation systems using magnetic sensors or precise optical position sensors such as OPTOTRAK (Northern Digital Inc., Waterloo, Ontario, Canada) have been introduced. The feasibility and effectiveness of such systems have been described in the literature Citation[2], Citation[3], and they have successfully assisted surgeons and reduced the fatigue associated with prolonged operations. However, to provide the necessary guidance information for surgeons, these technologies require position sensors and additional monitors that enable accurate and speedy intra-operative navigation, and these take up additional space. Thus, the usability of the whole system in the OR environment, especially with regard to its ergonomics and simplicity, requires prioritization, since in a cluttered OR and with increasingly complex operative procedures, simply increasing the number of devices present will not facilitate operations. The ergonomics of the environment, especially with regard to the provision of navigation data, the prevention of technical difficulties, and the comfort of the OR staff, must be considered.

In recent years, intra-operative navigation, in which the target position is provided to assist in the intuitive understanding of the surgical field, has been studied and applied in many clinical areas Citation[4], Citation[5], but mainly in orthopedics and neurosurgery. Position measurements of a surgical field have usually been performed with magnetic and marker-type optical position sensors, and preoperatively scanned CT or MRI 3D images have been commonly used to represent inner structures. The time interval between acquisition of inner structure data and the actual operative procedures should ideally be as short as possible so that the image data depict the operation site accurately. We have developed a novel interface for advanced operation planning based on guided images with volume rendering of intra-operatively scanned CT images, as opposed to preoperative ones. In this study, a mobile C-arm-type 3D-CT Citation[6] (Siemens-Asahi Medical Technologies, Ltd.) was used to acquire the inner structure data in the OR and provide the information for surgical navigation. Registration-free navigation with scanned images obtained by 3D-CT has been reported by Grützner et al. Citation[7], who provided the resulting navigation information in the CT image itself. In this paper, we report on our attempts to design and implement a data-fusion display interface for surgical navigation, which involves combining Maximum Intensity Projection (MIP)-based volumetric rendering of intra-operative 3D-CT data with real-time views of the surgical field, in which ergonomics as well as the provision of an easy-to-understand intra-operative display of navigation images were taken into consideration. A preliminary experiment was performed to validate the system for clinical use.

Methods

An image presentation device for use in intra-operative surgical navigation should have the following features:

  1. The image presentation device itself should never limit the space for other surgical equipment.

  2. The image presentation device and position sensors should not interfere with the operative space. The position of the display monitor should be easily changeable, and the monitor should be easily removable from the operative field when not in use.

  3. The monitor should be located in the vicinity of the surgical field and also allow the assistant on the other side to refer to the same navigation images.

  4. In order for a surgeon to better understand the spatial relationships and directions between the surgical field and navigation images, the line of sight for observing the surgical field and that for the navigation image should be correspondingly adjusted.

  5. Cabling for the surgical navigation device should preferably not be installed on the floor of the OR.

To fulfill these requirements, we decided that the monitors and position measurement device should hang from the ceiling of the OR. Figure 1 shows a 3D CAD image of the positional layout of the optical position sensor, five monitor arms, and four shadowless lamps. The data-fusion display and optical position sensor were built into the OR from the outset. This design minimized OR clutter, even with the surgical navigation devices installed. The monitors were mounted on five-degree-of-freedom multi-joint arms so that they could be observed from various positions and angles. Furthermore, the layout of the devices was carefully designed so as not to disturb the measurement area of the position sensor. The room was constructed as the High-Tech Navigation Operating Room Citation[8], Citation[9] in an operations building of the Dai-San Hospital, Jikei University, Tokyo, as shown in Figure 2.

Figure 1. 3D CAD design and configuration of the High-Tech Navigation Operating Room. [Color version available online.]

Figure 2. The High-Tech Navigation Operating Room. A mobile 3D-CT can be seen at left. [Color version available online.]

The data-fusion navigation interface was composed primarily of a 15-inch LCD monitor affixed to a ceiling-hung articulating arm and a small video camera installed at the back of the monitor. The system was equipped with a detachable sterilizable lever so that it could be manipulated by a surgeon and positioned where it was most convenient for the navigation data to be used, i.e., in immediate proximity to the surgical field. The left image in Figure 3 shows the data-fusion display and the right image shows the small video camera installed at the back of the monitor. To capture the scene of the surgical field, a small color video camera (Dragonfly, Point Grey Research Inc., Vancouver, British Columbia, Canada) was used. Time-sequentially captured images were sent to a PC (dual Xeon 2.8 GHz CPUs, 2 GB RAM, nVidia QuadroFX1000) through an IEEE 1394 interface. This camera, which could stream uncompressed VGA-quality images at 30 frames per second (fps), was incorporated into the ceiling-hung LCD monitor, thus providing "through-the-monitor" images for the operating surgeon. The position and orientation of the monitor were constantly measured and tracked by the ceiling-hung OPTOTRAK optical 3D position sensor. The system thereby enabled the surgeon to observe the patient's inner structure from the viewpoint of the monitor using video see-through VR augmentation during surgery.
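Conceptually, each video frame is augmented by projecting points defined in the tracker (OPTOTRAK) coordinate frame into the camera image, using the tracked monitor pose and the calibrated camera intrinsics. The following minimal sketch (in Python with NumPy, our illustrative choice rather than the system's actual implementation) shows this projection step; the pose matrix and intrinsic values are hypothetical placeholders.

```python
import numpy as np

def project_point(p_tracker, T_cam_from_tracker, K):
    """Project a 3D point given in tracker coordinates into the camera image.

    p_tracker          : (3,) point in the optical tracker frame [mm]
    T_cam_from_tracker : (4, 4) rigid transform from tracker to camera frame,
                         derived from the tracked marker flag on the monitor
                         and the camera-to-flag calibration
    K                  : (3, 3) camera intrinsic matrix from calibration
    Returns pixel coordinates (u, v).
    """
    p_cam = T_cam_from_tracker @ np.append(p_tracker, 1.0)   # to camera frame
    uvw = K @ p_cam[:3]                                      # perspective projection
    return uvw[:2] / uvw[2]

# Hypothetical numbers for illustration only
K = np.array([[500.0,   0.0, 180.0],
              [  0.0, 500.0, 120.0],
              [  0.0,   0.0,   1.0]])        # 360 x 240 image, as used in the paper
T = np.eye(4)
T[:3, 3] = [0.0, 0.0, 700.0]                 # object roughly 700 mm in front of the camera
print(project_point(np.array([10.0, -5.0, 0.0]), T, K))
```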

Figure 3. The setup of the data-fusion display. The operator is moving the monitor to the position from where he wishes to observe the inner structure. A small-size video camera is inset in the back of the monitor. [Color version available online.]

In the present study, a mobile C-arm-type 3D-CT was used to acquire the inner structure data that served as the information source for surgical navigation in the OR. To enable CT measurements during surgery, a non-metallic, motor-driven mobile operating table (MAQUET GmbH & Co. KG, Rastatt, Germany) was used. Three-dimensional volume data covering a cubic region of approximately 12 cm per side was acquired in a single scan lasting 2 minutes. The acquired internal structure data could be displayed immediately by volume rendering onto the video image of the surgical field.

As shown in Figure 4, the key coordinate systems are those of the OPTOTRAK (the global reference), the navigation display, and the volume data obtained with the C-arm CT. An optical marker flag is attached to the C-shaped frame of the CT, and its position during the measurement of the volume data is obtained as shown in Figure 5a. The positional relationship between the marker flag and the data coordinate system of the C-arm CT required calibration once inside the OR. To do this, a cube-shaped gypsum block was created using a rapid-prototyping system (Zprinter, Z Corporation, Burlington, MA) based on powder lamination technology, its shape being designed with a 3D CAD system and the physical model being produced directly from the digital data. The gypsum block was then measured with the C-arm CT, as shown in Figures 5b and 5c, and the calibration was performed using the corner positions in the measured data coordinates and the corresponding corner positions in physical space obtained by the OPTOTRAK. Finally, the transformation matrix between the marker flag of the C-arm CT and the coordinate system of the C-arm data itself could be determined. Figure 5d shows the result of the cube registration and the fused representation of the segmented cube surface model on the video images of the data-fusion display. After this calibration, the measured data position of the C-arm 3D-CT remained a known parameter until the marker flag attached to the C-arm was repositioned. Registration-free surgical navigation could thus be realized using the patient's intra-operatively scanned volume data.
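The corner-based calibration described above amounts to estimating a rigid transform from point correspondences: the cube corners in C-arm CT data coordinates and the same corners measured by the OPTOTRAK in physical space. A minimal sketch of such an estimation follows (Python/NumPy; the SVD-based least-squares solution and the 40 mm cube size are our own assumptions, not details taken from the paper).

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points.

    src, dst : (N, 3) arrays of corresponding points, e.g. cube corners in
               C-arm CT data coordinates (src) and the same corners measured
               by the optical tracker in physical space (dst).
    Uses the standard SVD-based (Kabsch/Horn-style) closed-form solution.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Illustration with a hypothetical 40 mm cube: corners in CT data coordinates
cube = np.array([[x, y, z] for x in (0, 40) for y in (0, 40) for z in (0, 40)], float)
# ...and the same corners as they might be reported by the tracker
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([100.0, -50.0, 30.0])
measured = cube @ R_true.T + t_true
R, t = rigid_transform(cube, measured)
print(np.allclose(R, R_true), np.allclose(t, t_true))   # True True
```

In the actual system, the resulting matrix is chained with the tracked pose of the marker flag, so that points in the CT volume can be expressed in the global OPTOTRAK frame without a separate patient registration step.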

Figure 4. Configuration of the navigation system and coordinate transformation. [Color version available online.]

Figure 5. (a) OPTOTRAK marker flags (circled) on the data-fusion display and C-arm. (b) A gypsum block under CT scan. (c) C-arm CT images of a gypsum block. (d) Results of the cube registration and fusion representation of the cube surface model on the video images. [Color version available online.]

Results and discussion

A checkered board was used to calibrate the camera of the data-fusion display. Its position was changed and the respective images were captured at pre-set time intervals. The internal camera parameters and the positional relationship between the camera coordinate system and the marker flag attached to the display could be obtained quickly by image processing and a computer vision algorithm. This made the calibration quick and easy to perform in the OR. For details of the calculations and algorithms applied, refer to the reports of Tsai Citation[10], Zhang Citation[11] and Hartley and Zisserman Citation[12].
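Such a checkerboard calibration is commonly reproduced today with Zhang's method as implemented in OpenCV. The sketch below (Python; the board geometry, square size, and file paths are hypothetical, and this is not the authors' code) illustrates the procedure.

```python
import glob
import cv2
import numpy as np

# Hypothetical board geometry: 7 x 5 inner corners, 10 mm squares
pattern = (7, 5)
square = 10.0
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("frames/*.png"):             # captured checkerboard images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (5, 5), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Zhang-style calibration: intrinsics K, distortion, and per-view extrinsics
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("overall RMS re-projection error [px]:", rms)
```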

Figure 6 shows the differential error between the detected checkerboard corners in the captured images and the corners re-projected onto the image plane using the computed projection matrix. The calibration was performed at a distance of approximately 700-800 mm from the display, corresponding to the typical placement of the object during an operation. The graph shows the results of the error calculations when the image resolution was set at 360 × 240 pixels. Thirty-five grid points were detected per frame, and the calibration was performed with 22 captured frames; the RMS error was 0.126 pixels. We believe that such calibration accuracy for the perspective projection matrix is sufficient for video see-through navigation.
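Continuing the calibration sketch above (again only as an illustrative assumption of how such a plot might be reproduced, not the authors' procedure), the per-corner re-projection error can be computed from the calibration outputs as follows.

```python
# Continuing from the calibration sketch above: obj_points, img_points,
# K, dist, rvecs, tvecs are the values returned by cv2.calibrateCamera.
errors = []
for op, ip, rvec, tvec in zip(obj_points, img_points, rvecs, tvecs):
    proj, _ = cv2.projectPoints(op, rvec, tvec, K, dist)   # re-project model corners
    errors.append(np.linalg.norm(proj.reshape(-1, 2) - ip.reshape(-1, 2), axis=1))
errors = np.concatenate(errors)
print("RMS re-projection error [px]: %.3f" % np.sqrt(np.mean(errors ** 2)))
```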

Figure 6. Re-projection error of camera calibration. [Color version available online.]

When intra-operatively scanned CT data was loaded into the data-fusion system, the volume data at 256 × 256 × 256 resolution was transferred over a gigabit network to the computer located in the OR. Display position tracking and transparent rendering of the volume data with a 3D texture technique were processed in parallel, which allowed an update rate of 12 fps for the surgical field video images with the superimposed volume data. The operator was able to intuitively confirm the intra-operative inner structure obtained by the C-arm CT simply by looking through the display, as shown in Figure 7. Head-mounted displays have commonly been used as image presentation devices for augmented reality; however, their prolonged use causes fatigue and they obstruct the operator's view. By using a mobile ceiling-hung arm instead, our system enabled the surgeon to intuitively observe the patient's inner structure from his own viewpoint, and the device could easily be removed from the proximity of the surgical field when navigation was not needed.
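The actual system renders the volume with hardware 3D texture slicing from the tracked display viewpoint. As a rough, software-only illustration of the underlying MIP idea, the sketch below (Python with NumPy/SciPy; the volume size, viewing angle, and blend factor are hypothetical) max-projects a CT volume along one viewing direction and blends the result onto a video frame.

```python
import numpy as np
from scipy import ndimage

def mip_overlay(volume, frame, yaw_deg=0.0, alpha=0.5):
    """Superimpose a maximum-intensity projection (MIP) of a CT volume
    on a video frame.

    volume  : (D, H, W) CT volume (e.g., 256^3 from the mobile 3D-CT)
    frame   : (h, w, 3) uint8 video frame from the display camera
    yaw_deg : viewing direction about the vertical axis; the real system
              derives this from the tracked pose of the display
    alpha   : blend factor for the overlay
    """
    # Rotate the volume so the viewing direction is aligned with axis 0,
    # then take the maximum along that axis (the MIP).
    rotated = ndimage.rotate(volume, yaw_deg, axes=(0, 2), reshape=False, order=1)
    mip = rotated.max(axis=0).astype(np.float32)
    mip = (mip - mip.min()) / (np.ptp(mip) + 1e-6)           # normalize to [0, 1]

    # Resize the MIP to the video frame and blend it on top.
    zoom = (frame.shape[0] / mip.shape[0], frame.shape[1] / mip.shape[1])
    mip = ndimage.zoom(mip, zoom, order=1)
    fused = frame.astype(np.float32) + alpha * 255.0 * mip[..., None]
    return np.clip(fused, 0, 255).astype(np.uint8)

# Illustration with synthetic data: a bright sphere inside a 64^3 volume
z, y, x = np.mgrid[:64, :64, :64]
vol = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 15 ** 2).astype(np.float32)
frame = np.zeros((240, 360, 3), np.uint8)                    # stand-in video frame
fused = mip_overlay(vol, frame, yaw_deg=30.0)
print(fused.shape, fused.max())
```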

Figure 7. A scene showing volumetric navigation for an elbow joint. [Color version available online.]

Accuracy evaluation was carried out to confirm feasibility for clinical use. A gypsum model was created from CT data of an elbow, and this physical phantom was measured by the mobile 3D-CT in the OR. We then compared the computer graphics display of the mobile CT data with the real-world view in the data-fusion display. The mobile CT data was segmented by CT value thresholding, and a virtual model of the scanned CT data was reconstructed. The pixel error between the edge lines of the virtual and real objects was measured by 2D image processing. Figure 8 shows a graph of the average errors between the real and virtual objects in the navigated image. The direction of the data-fusion display was changed horizontally with a central focus on the elbow phantom, and the errors were plotted as the angle of the display changed. Camera calibration of the data-fusion display was conducted at 0°. In this case, one pixel corresponds to approximately 1 mm in physical space, and the accuracy of the final 2D navigation image was approximately 1-2 pixels. The accuracy of this intra-operative navigation method depends on fine camera calibration and precise identification of the mobile CT data position. In actual clinical use, the test CT data is simply replaced with the scanned patient data, so the conditions affecting navigation accuracy are the same as in this phantom test. We confirmed that this method has sufficient precision for a surgical navigation interface.
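One plausible way to measure such an edge-line pixel error (a stand-in for the 2D image processing used in the evaluation, not the authors' actual procedure; thresholds and file names are assumptions) is to extract edges from both the real camera image and the rendered virtual model and average the distance from each virtual edge pixel to the nearest real edge.

```python
import cv2
import numpy as np

def edge_error(real_img, virtual_img, lo=50, hi=150):
    """Mean pixel distance from edges of the rendered virtual model to the
    nearest object edge in the real camera image (grayscale inputs)."""
    real_edges = cv2.Canny(real_img, lo, hi)
    virt_edges = cv2.Canny(virtual_img, lo, hi)

    # Distance (in pixels) from every pixel to the nearest real edge pixel
    dist = cv2.distanceTransform(255 - real_edges, cv2.DIST_L2, 3)

    ys, xs = np.nonzero(virt_edges)
    if len(xs) == 0:
        return float("nan")
    return float(dist[ys, xs].mean())

# Hypothetical usage with grayscale captures of the fused display at one angle:
# real = cv2.imread("real_view_0deg.png", cv2.IMREAD_GRAYSCALE)
# virt = cv2.imread("virtual_view_0deg.png", cv2.IMREAD_GRAYSCALE)
# print("mean edge error [px]:", edge_error(real, virt))
```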

Figure 8. Average errors between real and virtual objects in the navigated image. The phantom model was created from CT data of the elbow. The virtual object was directly reconstructed from a mobile 3D-CT scan. [Color version available online.]

The operator was thus able to intuitively confirm the intra-operative inner structure obtained by the mobile CT without any separate registration step. Figure 9 shows time-sequential navigation images of an elbow joint depicted on the data-fusion display while the viewing direction was changed. The volume data from the mobile 3D-CT was transparently superimposed on the live video images as the direction of the display varied. We assumed that the subject was static and that the volume data itself was not updated; data acquisition was therefore performed intermittently and kept to a minimum to limit radiation exposure. The time required for one scan with the mobile 3D-CT was 2 minutes, so the volume-rendered image did not track real-time deformation of the patient. Even with these limitations, however, this kind of visual representation and navigation interface proved effective and beneficial for intuitive confirmation of the inner structure immediately after the CT measurement. Importantly, the navigation image could be generated from the intra-operative CT scan rather than from preoperative data.

Figure 9. Time-sequentially generated images in the data-fusion display. The volume data of the mobile 3D-CT was superimposed onto the live video image according to the varying direction of the display. [Color version available online.]

Conclusion

In this paper we have reported on the design and construction of a data-fusion interface for clinical applications in the operating room. The system enabled volumetric MIP image navigation using intra-operative mobile 3D-CT data. The 3D volumetric images reflecting a patient's inner structure were displayed directly on the monitor, superimposed on video images of the surgical field. The system used a 3D optical tracking device, a ceiling-mounted articulating monitor, and a small video camera mounted at the back of the monitor. System performance and navigation accuracy were validated in an experiment carried out in the OR.

An important and challenging aspect of the system is that the surgical navigation data were obtained intra-operatively rather than being based on pre-operative images. We think, however, that pre-operative imaging will remain necessary, because the data resolution and measurement range of the intra-operative imaging system were limited. A combination of techniques using several medical imaging modalities will be crucial for future surgical navigation.

References

  • Hong D., Swanstrom L. L. Electronic and Computer-Integrated Operating Rooms: Primer of Robotic & Telerobotic Surgery. Lippincott Williams & Wilkins. 2004; 21–25
  • Delp S. L., Stulberg S. D., Davies B., Picard F., Leitner F. Computer assisted knee replacement. Clin Orthop Rel Res 1998, 354: 49–56
  • Picard F., DiGioia A. M., Moody J., Martinek V., Fu F. H., Rytel M., Nikou C., LaBarca R. S., Jaramaz B. Accuracy in tunnel placement for ACL reconstruction. Comparison of traditional arthroscopic and computer-assisted navigation techniques. Comput Aided Surg 2001; 6(5)279–289
  • Levison T. J., Moody J., Jaramaz B., Nikou C., DiGioia A. M. (2000) Surgical navigation for THR: A report on clinical trial utilizing HipNav. Proceedings of the Third International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2000), Pittsburgh, PA, October, 2000, S. L. Delp, A. M. DiGioia, B. Jaramaz. Springer, Berlin, 1185–1187, Lecture Notes in Computer Science 1935
  • Nabavi A., Hata N., Gering D., Chatzidakis E. M., Leventon M., Weisenfeld N., Pergolizzi R., Oge K., Black P. M., Jolesz F. A., Kikinis R. Image guided neurosurgery: visualization of brain shift. Navigated Brain Surgery, U. Spetzger, S. Stiehl, J. Gilsbach. Aachen, Cotta 1999; 17–26
  • Wiesent K., Barth K., Navab N., Durlak P., Brunner T., Schuetz O., Seissler W. Enhanced 3D-reconstruction algorithm for C-arm systems suitable for interventional procedures. IEEE Trans Med Imaging 2000; 19(5)391–403
  • Grützner P. A., Hebecker A., Waelti H., Vock B., Nolte L. P., Wentzensen A. Clinical study for registration-free 3D-navigation with the SIREMOBIL Iso-C3D mobile C-arm. Electromedica 2003; 71(1)7–16
  • Suzuki N., Hattori A., Suzuki S., Otake Y., Hayashibe M., Kobayashi S., Nezu T., Sakai H., Umezawa Y. Construction of a high-tech operating room for image-guided surgery using VR. J. D. Westwood, R. S. Haluck, H. M. Hoffman, G. T. Mogel, R. Phillips, R. A. Robb, K. G. Vosburgh. IOS Press, Amsterdam 2005; 538–542, Medicine Meets Virtual Reality 13. Studies in Health Technology and Informatics 111
  • Hayashibe M., Suzuki N., Hattori A., Otake Y., Suzuki S., Nakata N. (2005) Data-fusion display system with volume rendering of intraoperatively scanned CT images. Proceedings of the 8th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2005), Palm Springs, CA, October, 2005, J. S. Duncan, G. Gerig. Springer, Berlin, 559–566, Part II. Lecture Notes in Computer Science 3750
  • Tsai R. Y. A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J Robotics Automation 1987; RA-3(4): 323–344
  • Zhang Z. A flexible new technique for camera calibration. IEEE Trans Pattern Anal Machine Intell 2000; 22(11)1330–1334
  • Hartley R., Zisserman A. Multiple View Geometry in Computer Vision. Cambridge University Press. 2000
