A system for ultrasound-guided computer-assisted orthopaedic surgery

Pages 281-292 | Received 26 Apr 2005, Accepted 25 May 2005, Published online: 06 Jan 2010

Abstract

Current computer-assisted orthopedic surgery (CAOS) systems typically use preoperative computed tomography (CT) and intraoperative fluoroscopy as their imaging modalities. Because these imaging tools use X-rays, both patients and surgeons are exposed to ionizing radiation that may cause long-term health damage. To register the patient with the preoperative surgical plan, these techniques require tracking of the targeted anatomy by invasively mounting a tracking device on the patient, which results in extra pain and may prolong recovery time. The mounting procedure also makes it difficult to use these approaches to track small bones or mobile fractures; indeed, it is practically impossible to mount a heavy tracking device on a small bone, which restricts the applicability of CAOS techniques. This article presents a novel CAOS method that employs 2D ultrasound (US) as the imaging modality. Medical US is non-ionizing and real-time, and our proposed method does not require any invasive mounting procedures. Experiments have shown that the proposed registration technique has sub-millimetric accuracy in localizing the best match between the intraoperative and preoperative images, demonstrating great potential for orthopedic applications. This method has some significant advantages over previously reported US-guided CAOS techniques: it requires no segmentation and employs only a few US images to accurately and robustly localize the patient. Preliminary laboratory results on both a radius-bone phantom and human subjects are presented.

Introduction

Musculoskeletal diseases, fractures and other injuries affecting bones, joints, cartilage and related anatomy are common problems in patients of almost all ages. For instance, people usually react to accidental slips and falls with an outstretched arm and an extended hand. If such an incident results in a fractured wrist, 75% of the time it is the scaphoid bone that is injured. Statistics show that more than a quarter of a million people suffer new scaphoid fractures in the USA every year Citation[1]. Fractures and related diseases also pose a serious health threat to elderly people. Projections made by the Canadian Medical Association Citation[2] show that by the year 2041, among Canadians aged 65 or older (who will constitute an estimated 25% of the nation's population at that time, according to Statistics Canada), the annual number of occurrences of just one type of fracture, proximal femoral fractures (PFFs), will reach 88,124, with a possible range from 78,649 to 100,395. All these injuries require immediate orthopedic surgery and, with the climbing demand, there is an urgent need for new and emerging technology to improve surgical procedures by reducing the variance in outcome, enhancing accuracy (thereby shortening the patient's recovery and hospitalization) and lowering the cost of treatment. One such technique is computer-assisted orthopedic surgery (CAOS) Citation[3–5].

Figure 1. Problems of preoperative-CT- and intraoperative-fluoroscopy-based CAOS systems.


Current state-of-the-art CAOS systems make extensive use of preoperative computed tomography (CT) and intraoperative X-ray fluoroscopy as their main imaging modalities. These imaging techniques are capable of providing high-quality visualization of bones, but there are several major problems associated with them.

As CT and fluoroscopy require the use of X-rays, both patients and surgeons are exposed to ionizing radiation that may cause long-term health damage. This problem is of particular concern to surgical teams conducting operations on a frequent basis. In addition, intraoperative fluoroscopy produces overlapping images of other surrounding bones, which makes the targeted bone difficult to see. For example, fluoroscopic imaging of a scaphoid fracture is challenging because of the overlapping images of the other carpal bones (Figure 1).

Furthermore, in order to register the patient with preoperative images, CT-based techniques also require tracking of the patient's position in the operating room (OR). This position tracking is currently achieved by rigidly mounting a specially engineered tracking device (called a marker or an optical target) on the patient's anatomy by means of drilled holes, then having it tracked by a camera system in real time. Though it is a relatively minor invasive procedure, this mounting can cause extra pain and may prolong recovery time for patients. Figure 1 also illustrates the mounting of optical targets on the pelvis and femur.

Even more critically, the mounting of an optical target poses a major challenge for CT-based CAOS systems: it is difficult to use this method to track small bones or fractures (e.g., a scaphoid) that typically have greater mobility than large bones (e.g., a pelvis). The whole point of mounting an optical target on the patient is to achieve a rigid fixation between the two, such that the position of the patient can be inferred from the position of the tracked optical target. Typically, the optical target is mounted either directly on the bone, if the structure is large enough, or off the bone, if the structure is small but connected to adjacent large bones with acceptable rigidity. Neither is possible for a small bone structure with relatively large mobility. It is practically impossible to mount a heavy (metal) marker on a small bone, which in most cases already requires surgical attention, without causing further damage. Even if such a marker could be mounted, the fixation would not be rigid (due to the small size of the bone), and the marker would displace the bone and block the placement of surgical fixation devices. Conversely, if the optical target were mounted off the bone (e.g., on the distal radius to track the scaphoid), the fixation would not be rigid enough to achieve a reliable registration, as the small bone structure has a certain amount of mobility relative to the optical target.

To address these issues, we present a completely non-invasive CAOS technique that employs medical ultrasound (US), both preoperatively and intraoperatively, as the main imaging modality. Medical US is known for its non-ionizing and real-time nature, and our proposed method does not require the mounting of any invasive tracking device on the patient.

Background

The employment of US imaging in intraoperative guidance of CAOS has one significant advantage over the current intraoperative fluoroscopy-based techniques: It completely eliminates exposure to ionizing radiation for both the patient and the surgical team. In particular, US-guided CAOS (UCAOS) techniques would address the concerns of surgeons who have to perform frequent operations and are thus regularly exposed to harmful radiation in the OR.

Typically, for a CAOS system to provide intraoperative guidance, the preoperative model and surgical planning data must be registered to the patient by estimating a 3D transformation between the preoperative and intraoperative images in the OR. Registration Citation[6], Citation[7] is therefore a single point of failure in the entire system. For UCAOS applications, the major challenge lies in registering the intraoperative US images to the preoperative model accurately, reliably and in real time.

For registering preoperative CT to intraoperative US, rigid-body feature-based registration techniques using ICP-based algorithms Citation[8], Citation[9] dominate current CAOS applications. In CT-guided spine surgery, Herring et al. Citation[10] and Muratore et al. Citation[11] registered a preoperative CT to surface points of phantom vertebrae acquired from intraoperative US and reported a target registration error (TRE) Citation[6] of < 2 mm. For CT-guided pelvic surgery, Carrat et al. Citation[12], Citation[13] conducted a cadaver study to register a CT model to intraoperatively segmented 3D US surfaces to guide percutaneous placement of iliosacral screws. Summarizing this work on the basis of a clinical study of 34 patients, Tonetti et al. Citation[14] made a thorough comparison between the outcomes of computer-guided percutaneous pelvic surgery using intraoperative US and fluoroscopy in terms of invasiveness, time consumption, accuracy and postoperative recovery. They reported that, besides the obvious advantage of decreased intraoperative radiation exposure, the US-guided approach also demonstrated high accuracy of screw placement. As a more general study, Kryvanos Citation[15] developed a UCAOS system for fracture reduction, deformity correction, osteotomy planning and surgical guidance, focused primarily on the pelvis and long bones, and proposed an improved ICP-based algorithm for real-time tracking of the targeted bone by registering a bone surface extracted from US to a preoperative CT model. Tests with phantom and cadaveric bones yielded a mean distance error (MDE) of < 1.6 mm and a rotation error of < 2°. Amin Citation[16], Citation[17] proposed a probabilistic framework on top of an ICP-based algorithm for registering 2D intraoperative US images to a preoperative CT model. The probabilistic framework combines three independent sources of information to assign the probability that any given pixel in the US image represents a bone surface: a bone-surface reflection indicated by the image intensity, a bone-shadow region indicated by a directional edge detector, and a spatial prior based on a set of anatomic landmarks. The experiments were conducted on a phantom pelvis, with fiducial registration providing the ground truth. A translation error of < 3 mm and an average rotation error of < 1° were reported.

All of the UCAOS techniques reported above share one major disadvantage: they all require segmentation of surface data from the intraoperative US images before registration to the preoperative models, so segmentation errors propagate into the registration. Segmentation of US images is particularly challenging because of the abundant speckle formed in the image by the constructive and destructive interference of US waves.

To avoid the difficult task of segmenting a US image, intensity-based registration techniques Citation[6] may be used. When compared with the feature-based approach, intensity-based methods tend to be more robust and accurate because they prevent the registration from being affected by segmentation errors. In addition, by using all the information available in each image, these methods effectively average out any errors caused by noise or random fluctuations in the image intensities Citation[18]. Typically, the registration between preoperative CT and intraoperative US requires intensity-based intermodality registration techniques, among which an approach based on mutual information is widely employed Citation[19].

To the best of our knowledge, mutual information and other intensity-based registration techniques have not been reported in UCAOS applications. However, in other related CAS applications, e.g., prostate or liver therapy, this type of technology has been extensively explored. Firle et al. Citation[20], Citation[21], in their brachytherapy procedure to treat prostate cancer, proposed using mutual information to register a preoperative CT model of a prostate training phantom to an intraoperative 3D US volume. The 3D US volume was generated directly using a bi-planar US transducer and had a size of 162 × 148 × 172 voxels at a resolution of 0.42 mm/voxel. The results were found to be sufficiently promising to merit further clinical investigation. For real-time imaging control of radio-frequency ablation of solid liver tumors, Berg et al. Citation[22] employed a normalized mutual information algorithm to register a 2D intraoperative US image with a 3D preoperative CT model of an abdominal phantom. They reported a root mean square (RMS) error of 3.4 mm, with a manual registration as the ground truth. For clinical diagnosis of breast cancer, Meyer et al. Citation[23] used mutual information to register two types of 3D Doppler US volume acquired from the same patient. The US volumes were constructed from 2D US images with the position of the transducer tracked by an optical tracking system. For their specific type of application, only the rotation error was measured, and it was < 1° on average.

A major inspiration for our work came from Eadie et al. Citation[24] and their application of interactive guidance for liver surgery. They developed a mechanism to link live intraoperative US images with preoperative surgical planning images by means of a 3D image index block. The index block is created from a set of planning images, from which images in any plane can be interpolated. The live US image is then matched through the database using the predefined indices. The reported distance error from the best matching image to the live image is < 18 mm, and the rotation error is < 10°. Although this accuracy falls short of the requirements of a typical CAOS system (which usually requires accuracy of < 2 mm), the idea inspired us to design a similar but more accurate registration method to meet the demands of CAOS applications.

Methods

We propose a novel UCAOS technique that employs both preoperative and intraoperative US, in place of preoperative CT and intraoperative fluoroscopy, for surgical guidance. Figure 2 gives a high-level overview of the design and methodology of the system.

Figure 2. An overview of the methodology of the proposed UCAOS system.


Instead of an invasive tracking device mounted on the patient, the position of the targeted anatomy is provided by the intraoperative US images themselves. An optical target is mounted on the US probe and tracked by a camera system to provide real-time positional information for the US images. The process comprises both preoperative and intraoperative procedures.

Preoperatively, a set of 2D freehand US images (e.g., a total of 2000 images) is acquired from the targeted anatomy, along with the corresponding positional information of the US probe. These preoperative image data are used to construct a preoperative database (sketched as a simple record structure after the list below) that serves two main purposes:

  • to construct a preoperative 3D volumetric representation of the patient's anatomy that can be used for surgical planning (stage no. 1 in Figure 2),

  • to form a preoperative searchable image database for use by the registration process.
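
A minimal sketch of one such database record, in illustrative Python/NumPy with hypothetical names (the actual system's storage format is not reproduced here):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PreopImage:
    """One record of the preoperative database: a 2D US image together with
    the pose of its image frame (tracked probe pose composed with the probe
    calibration)."""
    pixels: np.ndarray  # H x W grayscale US image
    pose: np.ndarray    # 4 x 4 image-frame-to-DRB-frame transform

# The database is simply the collection of all acquired records (e.g., 2000);
# it feeds both the volume reconstruction and the search-for-match step.
database: list = []
```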

Intraoperatively, the preoperative US volume is registered to the patient using intraoperative 2D US images.
  • In the OR, the surgeon takes a few live US images of the targeted anatomy while the position of the US probe is tracked in real time by the camera system. These intraoperative US images are used to find the physical position of the patient during the surgery (see the lower left image in Figure 2).

  • A mutual-information-based registration algorithm is employed to find the closest match to the live image in the preoperative image database (stage no. 2 in Figure 2).

  • It should be borne in mind that the same images searched in the preoperative database are also the ones used to construct the preoperative US volume of the targeted anatomy. Assuming the closest matching image is effectively the live image, we can register the preoperative 3D US volume to the live US image (the lower right image in Figure 2) and thus to the patient for surgical guidance (stage no. 3 in Figure 2); a transform-composition sketch follows this list.
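
This registration step amounts to composing rigid transforms. A minimal sketch, in illustrative Python/NumPy with hypothetical matrix names (placeholder identity values stand in for real tracked poses):

```python
import numpy as np

def invert_rigid(T):
    """Invert a 4x4 rigid (rotation + translation) homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Pose of the live image in the OR (tracked probe pose composed with the
# probe calibration) -- known intraoperatively:
T_world_from_live = np.eye(4)        # placeholder values
# Pose of the best-matching preoperative image in the volume (DRB) frame --
# stored with the image when the database was built:
T_volume_from_match = np.eye(4)      # placeholder values

# Assuming the best match coincides with the live image, the preoperative
# volume (and the surgical plan defined in it) maps into the OR as:
T_world_from_volume = T_world_from_live @ invert_rigid(T_volume_from_match)
```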

In terms of implementation, the proposed UCAOS system contains five major components: US probe calibration, US image acquisition, position tracking, volume reconstruction and visualization, and image registration. A number of open-source and commodity frameworks were employed, including the Insight Segmentation and Registration Toolkit (ITK), the Visualization Toolkit (VTK), the Vision Numerics Library (VNL) and Microsoft DirectX/DirectShow. Figure 3 shows the overall system design in unified modeling language (UML) Citation[25].

Figure 3. Object-oriented component-based system design in UML.


Hardware configuration

US images are generated by a General Electric (GE) Voluson 730 Expert 3D/4D US machine (Figure 4), then fed to an ATI All-In-Wonder 7500 frame grabber at 30 frames per second (fps). A Traxtal VersaTrax Active Tracker (a 3- or 4-marker optical target) is mounted on a 2D US probe and tracked in real time by a Northern Digital (NDI) Polaris optical tracking system. The Polaris uses two stereo infrared cameras to track infrared signals emitted by the optical target and has a reported accuracy of 0.35 mm 3D RMS error at 60 Hz. The central processing unit is a Dell Optiplex GX270 desktop (with an Intel Pentium 4 2.60-GHz CPU and 2 GB of PC3200 DDR400 SDRAM) running Microsoft Windows XP Professional.

Figure 4. Hardware configurations of the proposed UCAOS system.


Experimental setup

Laboratory experiments were set up for both a radius-bone phantom (Figure 5a) and human subjects (Figure 5b). In both cases, an optical target is rigidly fixed on the subject and tracked by the camera system. This allows us to permanently attach a dynamic reference body (DRB) frame to the subject regardless of its movement during the experiment. By measuring the difference between the actual position of the subject in this DRB reference frame and the one suggested by the registration, the registration error can be calculated.

Figure 5. Experimental setup. (a) A stainless steel mounting jig was built to image a radius-bone phantom in a water bath. (b) A wooden mounting brace was wrapped around the arm of a human subject using velcro pads.


US probe calibration

With the optical target mounted on the US probe, the physical position of the probe is tracked by the camera system whenever a US image is acquired from the patient. However, knowing the position of the US probe alone is not adequate to determine the position of the US image. The relationship between these two coordinate frames can be calculated through the process known as US probe calibration, which estimates a homogeneous transformation that maps the position of individual pixels from the US image frame to the US probe frame.
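
To make the mapping concrete, here is a minimal sketch in illustrative Python/NumPy; the pixel spacings and transform names are our assumptions, not the system's identifiers:

```python
import numpy as np

def pixel_to_world(u, v, sx, sy, T_probe_from_image, T_world_from_probe):
    """Map pixel (u, v) of a US image to world coordinates.
    sx, sy: pixel spacing in mm/pixel; the US image is treated as the z = 0 plane.
    T_probe_from_image: 4x4 calibration transform (image frame -> probe frame).
    T_world_from_probe: 4x4 tracked probe pose (probe frame -> world frame)."""
    p_image = np.array([u * sx, v * sy, 0.0, 1.0])  # homogeneous point in mm
    return (T_world_from_probe @ T_probe_from_image @ p_image)[:3]
```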

Calibration is usually conducted by imaging an artificial object with known physical properties or geometry, referred to as a phantom. A recent and comprehensive review of calibration techniques for US imaging has been written by Mercier et al. Citation[26]. We use a custom-built N-wire phantom Citation[27], Citation[28] that requires only one US image to calculate the calibration parameters. Figure 6 shows the N-wire phantom and the US image captured for calibration.

Figure 6. (a) The N-wire US probe calibration phantom. (b) A US image showing the cross-section of the N-wires.


Direct evaluation of the calibration accuracy is typically difficult because we lack a reliable method for measuring the exact spatial relationship between the US image frame and the probe frame. We therefore employed an indirect approach with the help of a Traxtal stylus probe. Using the stylus, we could accurately measure the physical position of a needle tip in 3D space (Figure 7). We then captured a US image of the needle tip and mapped its position from the US image frame into 3D space using the calibration parameters (Figure 7). Since the accuracy of the stylus measurement is known (root mean square error (RMSE) of < 0.2 mm Citation[29]), comparing the measurements from the stylus probe and the US image yields the error (TRE) of the calibration results. Table 1 details the results. The TRE of 0.64 mm falls within the error tolerance of our CAOS applications (accuracy of < 2 mm is usually required) and matches the accuracy reported by similar work employing an N-wire phantom Citation[28].
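
The validation arithmetic itself is a simple RMS comparison; a minimal sketch, assuming paired 3D needle-tip measurements (variable names are ours):

```python
import numpy as np

def tre_rms(stylus_pts, us_pts):
    """RMS target registration error between stylus-measured needle-tip
    positions and the corresponding positions mapped from US images via the
    calibration parameters. Both inputs are N x 3 arrays of paired points."""
    d = np.linalg.norm(np.asarray(stylus_pts) - np.asarray(us_pts), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```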

Figure 7. Validation of the US probe calibration result.


Table 1.  Validation of the US probe calibration result.

3D US volume reconstruction and visualization

Three-dimensional US volume reconstruction is a preoperative process for generating a 3D volumetric representation of the targeted anatomy from multiple 2D US images. Figure 8 depicts this process graphically.

Figure 8. 3D US volume reconstruction process.


Typically, a large set of 2D US images (e.g., 2000 in our experiments) is acquired from the patient's anatomy, along with the corresponding positional information of the US probe as provided by the tracking system. The US images are captured freehand, meaning that, as long as the US probe is properly tracked, it can move freely over the targeted anatomy so that the images cover the desired region of interest. Then, with the application of the calibration parameters, the position of every pixel in the acquired images is projected into a voxel in the world coordinate frame to construct a 3D US volumetric model of the targeted bone geometry. Naturally, during the projection, some voxels receive more than one projected 2D pixel and some receive none. The intensities of the latter are simply assigned as zero (black). Each voxel that receives at least one projected pixel has a counter associated with it, and the overall intensity of the voxel is the average of the intensities of all pixels projected into it. For validation purposes, in our experiment, we construct the volume in an optical target frame (DRB reference frame) attached to the subjects.
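
A minimal sketch of this pixel-binning scheme, in illustrative Python/NumPy rather than the system's actual implementation (array shapes and parameter names are our assumptions):

```python
import numpy as np

def reconstruct_volume(images, poses, shape, spacing, voxel_mm):
    """Average calibrated 2D US pixels into a voxel grid.
    images:   list of H x W uint8 US images
    poses:    matching 4x4 image-to-volume transforms (tracked probe pose
              composed with the probe calibration, in the DRB frame)
    shape:    (nx, ny, nz) of the output volume
    spacing:  (sx, sy) pixel size in mm
    voxel_mm: voxel edge length in mm"""
    sx, sy = spacing
    acc = np.zeros(shape)                      # intensity accumulator
    cnt = np.zeros(shape, dtype=np.int64)      # hit counter per voxel
    for img, T in zip(images, poses):
        h, w = img.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u.ravel() * sx, v.ravel() * sy,
                        np.zeros(u.size), np.ones(u.size)])
        vox = np.round((T @ pix)[:3] / voxel_mm).astype(int)
        ok = np.all((vox >= 0) & (vox < np.array(shape)[:, None]), axis=0)
        np.add.at(acc, tuple(vox[:, ok]), img.ravel()[ok])
        np.add.at(cnt, tuple(vox[:, ok]), 1)
    vol = np.zeros(shape, dtype=np.uint8)      # voxels never hit stay zero (black)
    hit = cnt > 0
    vol[hit] = (acc[hit] / cnt[hit]).astype(np.uint8)
    return vol
```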

Once the 3D US volume is constructed, we save it in a standard VTK data format and use the VTK visualization pipeline to render and visualize the model. Figure 9 shows the 3D US volume rendering of the distal end of a radius-bone phantom compared with the CT model of the same structure.

Figure 9. 3D US volume rendering of the distal end of a radius-bone phantom.


Mutual information-based image registration

Search-for-match algorithm

Registration of the preoperative 3D US volume to intraoperative US images is achieved with a search-for-match algorithm. In the OR, the surgeon takes live US images of the patient with the position of the US probe tracked in real time; a registration algorithm is then used to locate, among the preoperative US images, the one that best matches the live image. Assuming that the best match is effectively the live image inside the preoperative database, we can register the preoperative volume to the live image and thus to the patient. Here, we employ a registration method based on mutual information Citation[19], Citation[30], Citation[31], a concept borrowed from information theory in the communications field. A thorough review of the use of mutual information in our UCAOS system can be found in the first author's thesis Citation[29]. In this work we have used the Mattes mutual-information-based registration algorithm implemented in the Insight Toolkit (ITK) Citation[32].
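
To convey the idea, the sketch below pairs a plain joint-histogram estimate of mutual information with a brute-force search over the database. This is illustrative Python/NumPy only; it is not ITK's Mattes implementation (which uses B-spline Parzen windowing), and the bin count is an arbitrary choice:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Joint-histogram estimate of the mutual information between two
    equally sized grayscale images a and b."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal distribution of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal distribution of b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_match(live, database):
    """Exhaustive search-for-match: index of the preoperative image that
    shares the most mutual information with the live image."""
    return int(np.argmax([mutual_information(live, img) for img in database]))
```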

Registration results and validation

Preliminary laboratory experiments were set up on both a radius-bone phantom (Figure 5a) and human subjects (Figure 5b). In both cases, a set of 2000 preoperative images was acquired together with the corresponding positional data to form the preoperative image database. The US probe was moved back and forth in parallel sweeps to produce a dense distribution of US images covering a region extending < 30 mm along the long axis of the distal radius.

  • For the experiments with the radius phantom, the model was attached to the stainless steel mounting jig with the reference DRB rigidly mounted on the jig, then scanned by US in a water bath.

  • For the experiments with human subjects, the wooden mounting brace was fixed to a volunteer's arm by wrapping it with heavy-duty velcro pads to achieve temporary rigidity between the brace and the arm; the reference DRB was then mounted on the brace. The US images were acquired over the distal radius and its surrounding soft tissues.

The registration error was measured as the rotation of the image frame, expressed in Euler ZYX angles, and the translation of the image frame origin. Two types of experiments, differing in how the live image was selected for matching against the preoperative images, were conducted to examine the accuracy and reliability of the registration algorithm from different perspectives.
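
For reference, a small sketch of how such a translation and Euler-ZYX rotation error between two image poses can be computed (our hypothetical Python/NumPy helper, assuming 4x4 homogeneous image poses; this is not the authors' validation code):

```python
import numpy as np

def euler_zyx(R):
    """ZYX (yaw-pitch-roll) Euler angles, in degrees, of a rotation matrix,
    assuming R = Rz(yaw) @ Ry(pitch) @ Rx(roll), away from gimbal lock."""
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    pitch = np.degrees(np.arcsin(-R[2, 0]))
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    return yaw, pitch, roll

def frame_error(T_live, T_match):
    """Translation of the image-frame origin plus the ZYX rotation of the
    relative transform between the live and best-matching image frames."""
    d = T_live[:3, 3] - T_match[:3, 3]
    R_rel = T_match[:3, :3].T @ T_live[:3, :3]
    return np.linalg.norm(d), euler_zyx(R_rel)
```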

Experiment type I—Live image from preoperative database

In these experiments, the live US image was randomly selected from within the preoperative database, then matched against the remaining preoperative images. Because the preoperative image database is constructed with an extremely dense population of 2D freehand US images covering the targeted anatomy from every feasible perspective, we can reasonably assume that any local region of the database contains the most similar images. Hence, the aim of this type of experiment is to examine whether, given that the preoperative database does contain the closest matching image to the live image, the mutual-information-based registration method is able to find it accurately, and how reliably it does so.

One hundred and ninety-five preoperative images were randomly selected as live images for the radius phantom, and 100 for the human subjects. The results show that, for the radius phantom, the difference between the closest matches and the live images averages 0.69 mm in translation with a standard deviation of 0.72 mm, whereas the rotation error averages 0.14° with a standard deviation of 0.13°. Figures 10a and 10b show the matching results from the radius phantom for one of the live images. For the human subjects, similar results were found: the translation error averages 0.62 mm with a standard deviation of 0.48 mm, and the rotation error averages 0.08° with a standard deviation of 0.05°. Figures 10c and 10d show the matching results from the human subjects for one of the live images.

Figure 10. Registration results of Experiment Type I with live images randomly selected from the preoperative images: best matching results for one of the live images from the radius phantom (a and b) and the human subject (c and d) (color version available online).


These results demonstrate that the registration method is capable of localizing the best matching image accurately and reliably, at sub-millimetric distance from the live image, as long as the preoperative database contains an image that is almost identical to the live image.

Experiment type II—Live image acquired at a different time

In these experiments, a set of 500 2D freehand US images was acquired from the same anatomic region of the human subject as the preoperative images, but at a different time. Our aim here is to simulate a real surgical situation in which the preoperative images are captured ahead of time and the surgeon takes live intraoperative US images of the patient in real time in the OR. The registration accuracy in these experiments should therefore approximate that expected in an actual surgical situation, which helps to determine the feasibility of applying the method clinically.

Here the average translation error for human subjects was ∼2.5 mm, with ∼1° of rotation error (Figures 11a and 11b show the matching results for one of the live images).

Figure 11. Registration results of Experiment Type II with live images acquired at a different time: best matching results for one of the live images from the human subject (a and b) (color version available online).


Compared with the accuracy of the registration method in Experiment Type I, the results from this type of experiment show the accuracy when the database does not contain a preoperative image that is almost identical to the live image.

Visualization of the registration result

Once the best matching image is found in the preoperative database, we are able to register the live image onto the 3D US volume reconstructed from the preoperative images. In turn, the 3D US volume can be mapped to the position of the live image in the OR and thus registered to the patient's targeted anatomy. Figure 12 displays both the live image and the best matching image inside the preoperative 3D US volume of the radius phantom. It also shows the CT model of the same bone manually overlaid on the 3D US volume to allow comparison with the registration result.

Figure 12. Visualization of the registration result (color version available online).


Discussion

Accuracy and robustness

Our results with Experiment Type I show that the mutual-information-based registration method is capable of finding the closest matching image to the live image accurately and reliably, as long as the preoperative database actually contains an image that is almost identical to the live image. The translation errors between the image frame origins of the live and best matching images were all sub-millimetric, and the rotation errors were minor. This essentially means that the positions of the live image and the best matching image, in both translation and rotation, are very close to each other. This is understandable: the live image comes from within the preoperative database and, given the high density of preoperative images forming the database, the most similar image is the one located at almost the same position as the live image, thereby yielding the highest mutual information. The results from Experiment Type I therefore effectively set an upper bound on the accuracy that the registration method could reach in an actual surgical situation.

In contrast, the results from Experiment Type II indicate the error to be expected when the best matching preoperative image shows the targeted anatomy at a position that is close, but not almost identical, to that of the live image. These results are very promising because, if we construct a more complete preoperative US image database covering the patient's anatomy from more angles and locations, it may be possible to bring the error down to a significantly lower level, comparable to that of Experiment Type I.

Optimization of performance

Our proposed registration technique has demonstrated robust and accurate results. However, this comes at the price of increased computational complexity, for two reasons. First, in order to find the closest matching image, the search-and-match algorithm has to compare the live image with each of the preoperative images in the database; the cost of this exhaustive search grows linearly with the size of the preoperative database. Second, mutual-information-based registration is an intensity-based technique that uses all the information from the image intensities. In particular, it requires estimating the joint probability distribution between the two images and the marginal probability distribution of each image, a process that is computationally expensive. On a Pentium 4 computer with a 2.5-GHz processor and 2.0 GB of memory, the registration process takes an average of 30 min to complete the search and locate the best match in a 2000-image preoperative database. To be useful in actual clinical applications, the method needs optimization.

In general, we have observed that the closer a preoperative image is to the live image, the larger the mutual information. A typical example of this relationship is given in Figure 13. However, this observation only serves as a general guideline, not a rule: in the same figure, the image with the shortest distance to the live image does not have the highest mutual information, because it has a greater change in rotation. Assuming that a higher mutual information generally indicates a closer distance to the live image, we have designed a fast searching algorithm that operates on a preoperative image database sorted by the physical positions of the images. We first estimate the likely location of the best matching image by sampling the mutual information at different positions in the database; then, to localize the best match, we conduct an exhaustive search in the neighborhood of the sample point that yields the largest mutual information.
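
A minimal sketch of this coarse-to-fine strategy (illustrative Python/NumPy; `score` stands for a similarity measure such as the mutual information sketched earlier, and the sample count and window size are arbitrary values):

```python
import numpy as np

def fast_search(live, database, score, n_samples=20, window=50):
    """Two-stage search: sample the similarity score at evenly spaced positions
    in the position-sorted database, then search exhaustively in a window
    around the best sample. Assumes a higher score generally means a closer
    image (the heuristic above, not a rule)."""
    idx = np.linspace(0, len(database) - 1, n_samples).astype(int)
    coarse = [score(live, database[i]) for i in idx]
    center = int(idx[int(np.argmax(coarse))])
    lo, hi = max(0, center - window), min(len(database), center + window + 1)
    fine = [score(live, database[i]) for i in range(lo, hi)]
    return lo + int(np.argmax(fine))
```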

Figure 13. Relationship between the mutual information and the distance of the matching images to the live image (color version available online).


Another promising approach to optimization is parallel processing. Because the registration of one preoperative image against the live image is independent of that of another, the search-and-match algorithm could be parallelized to reduce processing time, limited only by the hardware. Further experiments are planned to explore the potential and validate the reliability of these optimization approaches.
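
Since each preoperative image is scored independently of the others, the search parallelizes naturally. A sketch using Python's standard multiprocessing module (worker count and helper names are ours; `mutual_information` refers to the earlier sketch):

```python
from multiprocessing import Pool

def _score_pair(args):
    """Top-level helper so that it can be pickled by multiprocessing."""
    live, img = args
    return mutual_information(live, img)  # the MI helper sketched earlier

def best_match_parallel(live, database, workers=4):
    """Score every preoperative image against the live image across worker
    processes and return the index of the best match."""
    with Pool(workers) as pool:
        scores = pool.map(_score_pair, [(live, img) for img in database])
    return max(range(len(scores)), key=scores.__getitem__)
```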

Conclusion

We have presented an ultrasound-guided computer-assisted surgery system that provides interactive guidance for orthopedic surgery. We have proposed a novel technique to register live intraoperative US images with a preoperative 3D US volume of the patient's anatomy. Our preliminary laboratory experiments on both a radius-bone phantom and human subjects have shown that it is possible to use US imaging as an alternative to the CT and fluoroscopy primarily used in current state-of-the-art CAOS applications.

By employing US as the main imaging modality both preoperatively and intraoperatively, we completely avoid exposure to ionizing radiation from X-rays. The method also does not require the invasive mounting of any tracking device on the patient. Furthermore, the 3D US volume construction and the registration algorithm enable us to image small and mobile bones or fractures and to accurately map the preoperative images and surgical plan to the patient for real-time surgical guidance, a procedure that has only been possible for large bone structures (e.g., pelvis or femur) using current techniques. Because the mutual-information-based registration algorithm does not require segmentation of US images, it has a significant advantage over previously reported UCAOS techniques, which rely heavily on the accuracy and robustness of the segmentation algorithm in the OR. In addition, because the measure is intensity-based and uses statistical information gathered from all the pixels in the images, a mutual-information-based registration algorithm is generally more robust and more accurate than a feature-based technique. Finally, the underlying principle of mutual information, the use of the joint probability distribution between two images as a similarity measure, makes it easy to extend the registration technique from intramodality applications (i.e., among US images) to intermodality applications (e.g., from live intraoperative US to preoperative CT images).

A natural extension of this work would be to introduce the proposed technique into actual clinical applications. There is certainly plenty of room for improvement to meet several challenges posed by critical and demanding surgical situations. Such improvements could address registration accuracy and robustness, performance of the algorithm for real-time application and calibration of the US probe.

Acknowledgments

The authors wish to thank the IRIS/Precarn Network of Centres of Excellence, Communications and Information Technology Ontario (CITO) and the Natural Sciences and Engineering Research Council (NSERC) for funding this project.

References

  • Herndon JH. Scaphoid fractures and complications. American Academy of Orthopaedic Surgeons, 1994
  • Papadimitropoulos EA, Coyte PC, Josse RG, Greenwood CE. Current and projected rates of hip fracture in Canada. Can Med Assoc J 1997; 157: 1357–1363
  • DiGioia T. What is computer assisted orthopaedic surgery. Proceedings of the 4th Computer Assisted Orthopaedic Surgery Conference (CAOS/USA), Pittsburgh, PA, June 2000, 5–8
  • Schep NWL, Broeders IA, van der Werken C. Computer assisted orthopaedic and trauma surgery: state of the art and future perspectives. Injury 2003; 34: 299–306
  • Sugano N. Computer-assisted orthopedic surgery. J Orthop Sci 2003; 8: 442–448
  • Hajnal J, Hill D, Hawkes DJ, editors. Medical Image Registration. The Biomedical Engineering Series. CRC Press, Boca Raton, FL, 2001
  • Hill D, Batchelor P, Holden M, Hawkes D. Medical image registration. Phys Med Biol 2001; 46(3): R1–R45
  • Ma B, Ellis RE. Robust registration for computer-integrated orthopedic surgery: laboratory validation and clinical experience. Med Image Anal 2003; 7: 237–250
  • Rusinkiewicz S, Levoy M. Efficient variants of the ICP algorithm. Proceedings of the IEEE 3rd International Conference on 3-D Digital Imaging and Modeling, Quebec City, Canada, May 28–June 1, 2001, 145–153
  • Herring JL, Dawant BM, Maurer CR, Muratore DM, Galloway RL, Fitzpatrick JM. Surface-based registration of CT images to physical space for image-guided surgery of the spine: a sensitivity study. IEEE Trans Med Imaging 1998; 17: 743–752
  • Muratore DM, Russ JH, Dawant BM, Galloway RL. Three-dimensional image registration of phantom vertebrae for image-guided surgery: a preliminary study. Comput Aided Surg 2002; 7: 342–352
  • Carrat L, Tonetti J, Lavallée S, Merloz P, Pittet L, Chirossel JP. Percutaneous computer assisted iliosacral screwing. In: Wells WM, Colchester A, Delp S, editors. Proceedings of the First International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '98), Cambridge, MA, October 1998. Lecture Notes in Computer Science 1496. Springer, Berlin, 84–91
  • Carrat L, Tonetti J, Merloz P, Troccaz J. Percutaneous computer assisted iliosacral screwing: clinical validation. In: Delp S, DiGioia AM, Jaramaz B, editors. Proceedings of the 3rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2000), Pittsburgh, PA, October 2000. Lecture Notes in Computer Science 1935. Springer, Berlin, 1229–1237
  • Tonetti J, Carrat L, Blendea S, Merloz P, Troccaz J, Lavallée S, Chirossel JP. Clinical results of percutaneous pelvic surgery: computer assisted surgery using ultrasound compared to standard fluoroscopy. Comput Aided Surg 2002; 6: 204–211
  • Kryvanos A. Computer assisted surgery for fracture reduction and deformity correction of the pelvis and long bones. Ph.D. thesis, University of Mannheim, Germany, 2002
  • Amin DV, Kanade T, DiGioia AM, Jaramaz B. Ultrasound registration of the bone surface for surgical navigation. Comput Aided Surg 2003; 8(1): 1–16
  • Amin DV. Ultrasound registration for surgical navigation. Ph.D. thesis, Carnegie Mellon University, Pittsburgh, PA, 2001
  • Penney GP, Weese J, Little JA, Desmedt P, Hill DL, Hawkes DJ. A comparison of similarity measures for use in 2-D–3-D medical image registration. IEEE Trans Med Imaging 1998; 17: 586–595
  • Pluim J, Maintz JB, Viergever MA. Mutual information based registration of medical images: a survey. IEEE Trans Med Imaging 2003; 22(8): 986–1004
  • Firle E, Wesarg S, Karangelis G, Dold C. Validation of 3D ultrasound-CT registration of prostate images. Proceedings of the SPIE International Conference on Medical Imaging, San Diego, CA, February 2003, 354–362
  • Firle EA, Wesarg S, Dold C. Mutual-information-based registration for ultrasound and CT datasets. Proceedings of the SPIE International Conference on Medical Imaging, San Diego, CA, February 2004, 1130–1138
  • Berg J, Kruecker J, Schulz H, Meetz K, Sabczynski J. A hybrid method for registration of interventional CT and ultrasound images. In: Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K, Reiber JHC, editors. Computer Assisted Radiology and Surgery: Proceedings of the 18th International Congress and Exhibition (CARS 2004), June 2004. Elsevier, Amsterdam, 492–497
  • Meyer CR, Boes JL, Kim B. Semiautomatic registration of volumetric ultrasound scans. Ultrasound Med Biol 1999; 25: 339–347
  • Eadie LH, de Cunha D, Davidson BR, Seifalian AM. Real-time pointer to a preoperative surgical planning index block of ultrasound images for image guided surgery. Proceedings of the SPIE International Conference on Electronic Imaging, San Jose, CA, 2004, 14–23
  • Booch G, Jacobson I, Rumbaugh J. Unified Modeling Language User Guide. The Addison-Wesley Object Technology Series. Addison-Wesley Professional, Redwood City, CA, 1998
  • Mercier L, Lango T, Lindseth F, Collins LD. A review of calibration techniques for freehand 3-D ultrasound systems. Ultrasound Med Biol 2005; 31(2): 143–165
  • Pagoulatos N, Haynor DR, Kim Y. A fast calibration method for 3D tracking of ultrasound images using a spatial localizer. Ultrasound Med Biol 2000; 27: 1219–1229
  • Zhang Y. Direct surface extraction from 3D freehand ultrasound images. Master's thesis, The University of British Columbia (UBC), Vancouver, Canada, September 2002
  • Chen T. A system for ultrasound guided computer-assisted orthopaedic surgery. Master's thesis, Queen's University, Kingston, Canada, January 2005
  • Collignon A, Maes F, Delaere D, Vandermeulen D, Suetens P, Marchal G. Automated multi-modality image registration based on information theory. Proceedings of Information Processing in Medical Imaging 1995 (IPMI '95), Ile de Berder, France, June 1995, 263–274
  • Wells W, Viola P, Atsumi H, Nakajima S, Kikinis R. Multi-modal volume registration by maximization of mutual information. Med Image Anal 1996; 1: 35–51
  • Thevenaz P, Unser M. Optimization of mutual information for multiresolution image registration. IEEE Trans Image Process 2000; 9(12): 2083–2099
