Research Article

Three-dimensional multimodal image non-rigid registration and fusion in a High Intensity Focused Ultrasound system

Pages 1-12 | Received 25 Apr 2011, Accepted 04 Oct 2011, Published online: 08 Dec 2011

Abstract

High Intensity Focused Ultrasound (HIFU) has been successfully applied in tumor therapy. For a successful HIFU therapy, it is crucial to localize the tumor region accurately. In this paper, we present a semi-automatic non-rigid registration method for implementing image guided surgery navigation and localization by matching pre-operative CT/MR images and intra-operative ultrasound images. The global motion of the target is modeled by an affine transformation, while the local deformation of the target is described by Free-Form Deformation (FFD) based on B-splines. The results of our experiments on simulated and real data show that the non-rigid registration method based on HPV interpolation (partial volume based on the Hanning windowed sinc function) is effective at restraining local extrema and improves the accuracy of registration results. A preliminary clinical validation of the use of the non-rigid registration method in image guided localization of a HIFU system is also reported.

Introduction

In recent years, the High Intensity Focused Ultrasound (HIFU) therapy technique has become a research hotspot in the field of tumor therapy Citation[1]. This approach exploits the capability for penetration, orientation and focusing that is peculiar to ultrasound. HIFU is able to build up a high-intensity focus in deep tissue, which causes acute cell death by focusing the ultrasound beams for a short time. At present, HIFU equipment is being marketed for clinical application as an efficient method for minimally invasive therapy of tumors. The key factor in this technique is the accurate intra-operative localization system for tracking the tumor region. Currently, this intra-operative navigation and localization relies mainly on B-mode ultrasound, primarily because it is inexpensive. During the first stage of HIFU therapy, the operators often manually perform the segmentation of ultrasound images in order to mark the position of the tumor focus, which is both time-consuming and dependent on the operators’ experience Citation[2]. The goal of our non-rigid registration and fusion method is to lessen the workload of the operators and improve the accuracy of localization of the tumor region by matching ultrasound images to CT/MR images acquired prior to HIFU therapy.

For the registration of pre-operative CT/MR images and intra-operative ultrasound images, Penney et al. Citation[3] proposed a registration algorithm for liver images which converted the intensity values of the MR and ultrasound images into vessel probability values. Normalized Cross-Correlation (NCC) was employed as the similarity metric, but only the linear relation between the two images was assessed. von Berg et al. Citation[4] also registered 3D CT images to ultrasound images with an optical position measurement system, using Mutual Information (MI) as the similarity measure for rigid registration. Wein et al. Citation[5] simulated medical ultrasound from CT data to carry out the registration; a modified correlation ratio was calculated to compare the two images, which can be directly related to the NCC similarity metric under the assumption of a linear mapping. Betrouni et al. Citation[6] presented a method for automatic repositioning of patients undergoing prostate cancer radiotherapy, based on the registration of ultrasound and CT/MR images; the contours extracted from the two images were aligned by the Iterative Closest Point (ICP) registration algorithm. However, all these approaches are limited to rigid or affine transformations. During a HIFU operation, the motion and deformation of soft tissues and organs such as the liver, kidney and prostate are complicated by changes in the patient's body position, breathing and heartbeat. To describe the motion of these tissues, it is therefore necessary to adopt a non-rigid transformation instead of a rigid or affine one.

In general, multimodal non-rigid image registration includes feature-based methods, voxel intensity-based methods and hybrid methods. The feature-based methods often use features extracted from images such as points, lines and surfaces to perform the registration. Urschler et al. developed a point feature-based non-linear registration algorithm combining global shape context feature points and local SIFT (Scale Invariant Feature Transform) feature points Citation[7]. For the other points, a TPS interpolation was used. However, the accuracy of this algorithm relies mainly on the point extraction method. In voxel intensity-based approaches, mutual information has been shown to align images from different imaging modalities robustly. Shekhar et al. Citation[8] investigated rigid and affine registration of ultrasound volumes using mutual information. The experimental results show that successful registration is still possible despite the characteristic poor image quality, which indicates that mutual information might be extended to non-rigid registration. Wang et al. Citation[9] proposed a hybrid registration method incorporating statistical shape information into a fluid model. However, due to the very large computational cost, especially for 3D data volumes, this method was not suitable for our application. Escalante-Ramirez Citation[10] introduced the Hermite transform as an image representation model to tackle the problem of fusion for multimodal medical images. The model contains some important properties of human visual perception, such as local orientation analysis and a Gaussian derivative model of early vision. Zhu and Cochoff Citation[11] developed an object-oriented framework for image registration, fusion and visualization, which employed many design patterns to facilitate legacy code re-use, manage software complexity, and enhance the maintainability and portability of the framework.

In this paper, we describe the development of a semi-automatic algorithm for 3D non-rigid registration of pre-operative CT/MR and intra-operative ultrasound images with the purpose of mapping therapy planning into the real-time coordinate system of HIFU. First, the global motion is modeled by an affine transformation, and registration is implemented on the control points selected manually by experts. To facilitate the selection of control points, we create a three-view interactive frame that can display axial, sagittal and coronal slices simultaneously. Following this, a local non-rigid transformation is described by the free-form deformation (FFD) model based on B-splines Citation[12]. In particular, we use a new interpolation method called HPV, which was described in a previous paper Citation[13], to create the smoother mutual information function. The experimental results obtained with liver and kidney images indicate that our approach is able to give much better fusion of CT/MR and ultrasound images for navigation and localization in HIFU operations.

The paper is organized as follows. The next section introduces the image navigation and localization system for HIFU. This is followed by a section describing the registration algorithm. The fourth section presents the experiment results, and is followed by the conclusion.

Image navigation and localization system in HIFU

HIFU is a physical method of tumor therapy. It can produce an intensity of 300–2000 W/cm² and raise the tissue temperature to 70–80°C within 0.1–10 seconds Citation[14]. Compared with other methods of tumor treatment, the HIFU technique has advantages such as lessening the patient's pain, improving quality of life, and reducing the expense of treatment. More importantly, HIFU may be a better choice for, and prolong the lives of, patients for whom the opportunity to resect the tumor has been lost. Figure 1 shows the HIFU system used.

Figure 1. High Intensity Focused Ultrasound (HIFU) therapy system.


For a successful HIFU therapy, it is very important to accurately localize the tumor target. At present, researchers are paying more attention to the evaluation of the HIFU curative effect, and the localization of the tumor target still depends on observation of intra-operative ultrasound images. In our system, the localization of the tumor region depends on the registration and fusion of CT/MR and ultrasound images instead of B-mode ultrasound images alone. The image navigation and localization system mainly comprises pre-operative therapy planning, intra-operative 3D ultrasound imaging, and registration of images from the two different modalities. The flowchart of the localization system is shown in Figure 2. The pre-operative therapy planning concentrates on the segmentation and extraction of the tumor region from CT/MR images and the track planning of the therapy foci.

Figure 2. Flowchart of the image-guided localization system in our HIFU.


Intra-operative ultrasound imaging

In general, the therapy probe and the imaging probe are integrated as a single unit in the HIFU system. We fix the B-mode ultrasound imaging probe at the center of the transducer array of the therapy probe. Figure 3 shows the configuration of the therapy probe and imaging probe in our HIFU system. The intra-operative 3D volume data can be obtained from the B-mode ultrasound imaging probe using a scanning pattern, e.g., a linear scan, rotating scan, swing scan or freehand scan. The linear scan used in our system is a very simple method: the imaging probe is moved linearly under electronic control over a predefined scanning distance. The resulting 2D slice images, like CT/MR slices, can be used in 3D reconstruction. However, the linear scan technique also has some disadvantages: it requires a large water container during the operation, and its use is limited to regions that are not shadowed by bone.
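As a sketch of how the linear scan yields a volume, the equally spaced B-mode frames can simply be stacked; this is a minimal illustration (the function name, frame size and probe step are hypothetical, not taken from the system described here):

```python
import numpy as np

def frames_to_volume(frames, probe_step_mm, pixel_mm):
    """Stack 2D B-mode frames from a linear scan into a 3D volume.

    frames: list of equally sized 2D arrays acquired at a fixed probe step.
    Returns the volume and its (z, y, x) voxel spacing in mm.
    """
    volume = np.stack(frames, axis=0)            # slice index becomes the z axis
    spacing = (probe_step_mm, pixel_mm, pixel_mm)
    return volume, spacing

# Illustrative numbers: 40 frames of 256 x 256 pixels, probe advanced 0.5 mm per frame.
frames = [np.zeros((256, 256), dtype=np.float32) for _ in range(40)]
vol, spacing = frames_to_volume(frames, probe_step_mm=0.5, pixel_mm=0.3)
```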

Figure 3. (a) Therapy probe in HIFU system. (b) B-mode ultrasound imaging probe in HIFU system.


Image guided surgery navigation in HIFU

In existing HIFU therapy systems, the operators must manually segment the ultrasound images to delineate the focus region in the patient, and different operators may produce different segmentation results. After registration of the pre-operative CT/MR images and intra-operative ultrasound images, the two datasets are fused. We expect these fused images to serve as a reference for the segmentation of the ultrasound images, with which different operators can obtain nearly identical segmentation results.

To reduce the segmentation workload of operators and improve the accuracy of localization, an image guided surgery navigation technique can also be introduced into the HIFU therapy process. In effect, the pre-operative therapy planning on CT/MR images is mapped into the intra-operative ultrasound images by registration. First, the whole therapy region is extracted from the CT/MR images prior to surgery. The therapy region is then divided into therapy targeting fractions. The center points of these fractions should correspond to the foci of the phased array transducer in the HIFU system. Figure 4a shows the kidney region delineated on an MR image, while Figure 4b shows a 3D visualization of some therapy targeting fractions, with different colored regions representing different therapy fractions. Finally, the positions of the foci in the pre-operative CT/MR images are transformed into the intra-operative ultrasound images via the registration results.

Figure 4. (a) The kidney region delineated on an MR image. (b) 3D visualization of therapy targeting fractions. The red, green and blue regions represent three different targeting fractions.


Image registration algorithm

The key purpose of the image navigation and localization system is the registration and fusion of pre-operative CT/MR images and intra-operative ultrasound images. The goal of image registration is to map selected points in the CT/MR image onto the corresponding points in the ultrasound image Citation[15], which also involves a search for the optimal spatial transformation T: CT/MR (x, y, z) → US (x′, y′, z′). As a result of the poor quality of the ultrasound image, we present a semi-automatic registration algorithm that consists of a global affine transformation and a local non-rigid transformation.

Global affine model

A general affine transformation model is chosen to represent the motion of the whole focus region. In three dimensions, it can be written as follows:

X′ = AX + T,   (1)

where X = (x, y, z)ᵀ and X′ = (x′, y′, z′)ᵀ are the point coordinates before and after transformation, A is a 3 × 3 matrix and T is a 3 × 1 translation vector. There are 12 degrees of freedom in the transformation. Because of the low signal-to-noise ratio of ultrasound images, we manually select control point pairs to compute the affine model parameters in order to ensure the accuracy of the registration. As described in reference Citation[16], the parameter matrices A and T of the affine model can be calculated by linear least-squares estimation in four steps, as shown below:

  1. Selection by an expert of m control point pairs from the matched images, arranged as 3 × m matrices X = (X₁, …, Xₘ) and X′ = (X′₁, …, X′ₘ), together with the centering matrix C = I − (1/m) l lᵀ, where I is the unit matrix and l is an m × 1 matrix in which all elements are 1.

  2. Computing the matrices M = X C Xᵀ and M′ = X′ C Xᵀ.

  3. Obtaining matrix A: A = M′ M⁻¹.

  4. Obtaining matrix T: T = (1/m)(X′ − A X) l.
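The least-squares estimation of the affine parameters can be sketched as follows. This is a generic formulation that solves for A and T in a single augmented system; it is not necessarily the exact four-step computation of reference [16], and all names are illustrative:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine fit dst ≈ A @ src_i + T.

    src, dst: (m, 3) arrays of matched control points (m >= 4, not coplanar).
    Returns A (3x3) and T (3,).
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    # Augment each source point with a 1 so that [A | T] is solved in one step.
    X = np.hstack([src, np.ones((len(src), 1))])   # (m, 4)
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (4, 3) solution of X @ M ≈ dst
    A, T = M[:3].T, M[3]
    return A, T

# Round-trip check against a known transform.
rng = np.random.default_rng(0)
pts = rng.random((10, 3))
A_true = np.array([[1.1, 0.0, 0.1], [0.0, 0.9, 0.0], [0.2, 0.0, 1.0]])
T_true = np.array([5.0, -2.0, 0.5])
A_est, T_est = fit_affine(pts, pts @ A_true.T + T_true)
```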

During HIFU therapy, the operators often acquire axial slices of the tumor region by a linear scan. However, it is difficult for the operators to select the control point pairs precisely from the axial slices alone; some anatomic points cannot be aligned without also observing the sagittal and coronal planes. We therefore developed a user interface including axial, sagittal and coronal planes, as seen in Figure 5. An anatomic point can be shown in all three planes simultaneously in this frame, with the sagittal and coronal slices obtained from the axial slices by re-slicing. This makes it convenient for the operators to select the control point pairs.

Figure 5. The three-view user interface including ultrasound and CT volume data.


Local non-rigid transformation

After the affine transformation has captured the global motion of an object, an additional nonlinear transformation modeling the local deformation of the object is required. The Free-Form Deformation (FFD) model Citation[17] based on B-splines is a powerful tool for modeling 3D deformable objects, and has been successfully applied to non-rigid registration of breast MR images Citation[18]. To define a spline-based FFD, we assume that the 3D image volume is denoted as V = {(x, y, z) | 0 ≤ x < X, 0 ≤ y < Y, 0 ≤ z < Z}. The deformation at point (x, y, z) can then be computed from the positions of the surrounding 4 × 4 × 4 neighborhood of control points ϕ_{i,j,k} as follows:

T_local(x, y, z) = Σ_{l=0}^{3} Σ_{m=0}^{3} Σ_{n=0}^{3} B_l(u) B_m(v) B_n(w) ϕ_{i+l, j+m, k+n},   (2)

where u, v and w are the relative positions of point (x, y, z) in three dimensions and the functions B are the basis functions of the cubic B-spline. Voxel intensity-based registration computes similarity directly from image intensities, without requiring the definition of landmarks or surfaces; its accuracy is therefore not limited by segmentation errors. Mutual information (MI) is one of the most frequently used similarity measures, and has been shown to align images from different modalities accurately and robustly Citation[19]. To find the optimal deformation parameters, the cost function E = −MI + ωC_S is minimized, where ω is a weighting coefficient and C_S is a smoothness constraint term which regularizes the deformation as follows:

C_S = (1/V) ∫∫∫ [(∂²T/∂x²)² + (∂²T/∂y²)² + (∂²T/∂z²)² + 2(∂²T/∂x∂y)² + 2(∂²T/∂x∂z)² + 2(∂²T/∂y∂z)²] dx dy dz.
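A minimal sketch of evaluating a B-spline FFD at a point is shown below. The indexing convention is an assumption (the control grid is taken to be padded so that the 4 × 4 × 4 neighborhood stays in bounds); it is an illustration, not the system's actual implementation:

```python
import numpy as np

def bspline_basis(u):
    """Cubic B-spline basis functions B0..B3 at relative position u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    ])

def ffd_displacement(point, phi, spacing):
    """Displacement at `point` from a control grid `phi` of shape (nx, ny, nz, 3).

    `spacing` is the control-point spacing.  Sums over the 4x4x4 neighborhood
    of control points, as in the B-spline FFD model.
    """
    idx = np.floor(np.asarray(point) / spacing).astype(int)   # cell index i, j, k
    u, v, w = np.asarray(point) / spacing - idx               # relative position
    Bu, Bv, Bw = bspline_basis(u), bspline_basis(v), bspline_basis(w)
    disp = np.zeros(3)
    for l in range(4):
        for m in range(4):
            for n in range(4):
                weight = Bu[l] * Bv[m] * Bw[n]
                disp += weight * phi[idx[0] + l, idx[1] + m, idx[2] + n]
    return disp

# With all control points at zero displacement, the FFD is the identity.
phi = np.zeros((8, 8, 8, 3))
d = ffd_displacement((5.2, 3.7, 4.1), phi, spacing=2.0)
```

The cubic basis functions form a partition of unity, so a uniform displacement of all control points translates every point by exactly that amount.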

A prerequisite for successful registration is that the mutual information function be quasi-convex with as few local extrema as possible. However, the speckle noise in ultrasound images and interpolation artifacts often make this difficult. The traditional linear interpolation and partial volume (PV) interpolation Citation[20] methods are both likely to introduce local extrema into the mutual information function. Applying linear interpolation to an image can remove noise and some small structures; since noise disperses the joint histogram, this noise reduction reduces the dispersion, so that linear interpolation causes local minima of the mutual information value at grid alignment. In contrast, with partial volume interpolation, several histogram entries are updated for a non-grid point pair while only one histogram entry is increased for a grid point pair; at non-grid alignment the joint histogram is therefore additionally dispersed compared with grid alignment, so partial volume interpolation yields local maxima of the mutual information value at grid alignment. To remove the undesired local extrema and create a smooth mutual information function, we adopted a two-step solution. First, the ultrasound images were filtered by a filter based on total variation minimization and oscillatory functions Citation[21]. We then used the HPV interpolation algorithm presented in reference Citation[13] to estimate the joint histogram; this algorithm employs an approximation of the Hanning windowed sinc function Citation[22] as the kernel function of partial volume (PV) interpolation.
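The Hanning-windowed sinc weighting at the heart of HPV can be illustrated in one dimension as follows. The kernel radius and the normalization used here are assumptions for illustration; the exact approximation function used in reference [13] may differ:

```python
import numpy as np

def hanning_sinc(x, radius=2):
    """Hanning-windowed sinc kernel, zero outside [-radius, radius]."""
    x = np.asarray(x, float)
    window = 0.5 * (1 + np.cos(np.pi * x / radius))   # Hanning window
    k = np.sinc(x) * window                           # np.sinc(x) = sin(pi x)/(pi x)
    return np.where(np.abs(x) < radius, k, 0.0)

def hpv_weights(t, radius=2):
    """Weights distributed to the integer neighbors of a non-grid position t (1D).

    Normalized so the total histogram mass contributed per sample is 1,
    as in partial volume interpolation.
    """
    base = int(np.floor(t))
    neighbors = np.arange(base - radius + 1, base + radius + 1)
    w = hanning_sinc(t - neighbors, radius)
    return neighbors, w / w.sum()

# At a grid-aligned position the mass collapses onto a single neighbor,
# which is what keeps the joint histogram consistent at grid alignment.
neighbors, w = hpv_weights(3.0)
```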

We employ a simple iterative gradient descent technique which steps in the direction of the gradient vector with a certain step size μ. The algorithm stops when the gradient magnitude ‖∂E/∂Φ‖ falls below some small positive value ϵ. This optimization method requires the derivative of the cost function with respect to the model parameters Φ. For the first term in E, a new method has been developed to estimate the derivative of mutual information with respect to the FFD model parameters Φ based on the HPV interpolation algorithm. Denoting H = {h_fr} as the joint image intensity histogram of the overlapping volume of the floating image F with image intensities {f} and the reference image R with image intensities {r} gives us the following:

MI = Σ_{f,r} (h_fr / N) log[(h_fr N) / (h_f h_r)],

with h_f = Σ_r h_fr, h_r = Σ_f h_fr and N = Σ_{f,r} h_fr. As described in reference Citation[23], the derivative of mutual information with respect to the FFD model parameters Φ_i can then be written as follows:

∂MI/∂Φ_i = (1/N) Σ_{f,r} (∂h_fr/∂Φ_i) log[(h_fr N) / (h_f h_r)].
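For reference, computing mutual information from a joint histogram, per the definition above, can be sketched as:

```python
import numpy as np

def mutual_information(h):
    """Mutual information (in nats) from a joint histogram h (bins_f x bins_r)."""
    h = np.asarray(h, float)
    n = h.sum()
    p = h / n                              # joint probabilities p_fr
    pf = p.sum(axis=1, keepdims=True)      # marginal over reference intensities
    pr = p.sum(axis=0, keepdims=True)      # marginal over floating intensities
    nz = p > 0                             # skip empty bins (0 log 0 = 0)
    return float((p[nz] * np.log(p[nz] / (pf @ pr)[nz])).sum())

# Perfectly dependent intensities give MI equal to the marginal entropy;
# a histogram that is a product of its marginals gives MI = 0.
h_dep = np.diag([10.0, 20.0, 30.0])
h_ind = np.outer([1.0, 2.0], [3.0, 4.0])
```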

According to the HPV interpolation algorithm, h_fr can be expressed as

h_fr = Σ_k Σ_m ω_{k,m} δ(f, F(p_k)) δ(r, R(q_{k,m})),

where δ(x, y) is the discrete unit pulse, p_k are the grid points of the floating image, q_k = T(p_k) is the transformed point, and q_{k,m} are the reference grid points neighboring q_k, weighted by the Hanning-windowed sinc kernel. Because all ω_{k,m} vary smoothly with the point q_k, which itself varies smoothly with the parameters Φ_i in Equation 2, the histogram h_fr is a continuous function of Φ_i and the derivatives ∂h_fr/∂Φ_i can be computed exactly using analytic expressions. Thus,

∂h_fr/∂Φ_i = Σ_k Σ_m (∂ω_{k,m}/∂q_k)(∂q_k/∂Φ_i) δ(f, F(p_k)) δ(r, R(q_{k,m})).

The derivative of the second term C_S in the cost function E has been given in reference Citation[24], and ∂q_k/∂Φ_i follows directly from the B-spline model in Equation 2, so the full gradient ∂E/∂Φ_i can be assembled analytically.

Experimental results

Two experiments were undertaken to evaluate our registration algorithm. Because the exact deformations are unknown for real images, simulated data experiments help to evaluate the accuracy of the registration. Thus, the first experiment reproduced the procedure for recovering a known deformation for both affine registration and local non-rigid registration, and also compared HPV with traditional PV interpolation. However, registration of naturally deformed images remains the best test of the algorithm's performance, because issues such as motion ambiguities and speckle decorrelation are not present in simulated images. Hence, the second experiment was performed on clinical datasets with unknown deformation.

Simulated data experiment

We used 3D CT and MR axial scans of a volunteer's abdomen obtained at the Sixth People's Hospital of Shanghai as the reference images. The volunteer was asked to hold his breath at maximum exhale while the images were acquired. The CT scans were collected on a Siemens CT system with an image size of 512 × 512 × 24, and we resampled them to 64 × 64 × 24. The MR scans were acquired on a 1.5-Tesla GE Vision MR system with an image size of 256 × 256 × 34 and voxel dimensions of 1.25 × 1.25 × 4.5 mm; we resampled these images to 128 × 128 × 64 with voxel dimensions of 2.5 × 2.5 × 1.1 mm. First, we assess the performance of affine registration on the CT data; then, the superiority of HPV over PV interpolation in the local non-rigid transformation is demonstrated on the MR data.

For affine registration, we artificially deformed the CT image data with a known deformation. The affine solution was then used to register the original CT image and the deformed image, and the results are presented in Table I. The deformation estimated by the affine solution was very similar to the true deformation.

Table I.  Affine registration results and parameters of true deformation.

For local non-rigid registration, an artificially deformed volume was created as the floating image by applying a known warping to the 3D MR slices, where x, y and z are the point coordinate values in the original image, x′, y′ and z′ are the point coordinate values in the deformed image, and k is a constant coefficient which represents the degree of deformation. We manually defined a cubic region of interest (ROI) covering part of the kidney, with 6 × 6 × 6 control points.
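The original warping equation is not reproduced here; as an illustration only, a hypothetical sinusoidal warp of the same flavor (a coordinate shift controlled by a coefficient k) might look like this, with nearest-neighbor resampling for simplicity:

```python
import numpy as np

def sinusoidal_warp(shape, k):
    """Coordinate map of a hypothetical sinusoidal deformation.

    Returns (xs, ys, zs): for each output voxel, the source coordinate to
    sample from; k controls the deformation magnitude (k = 0 is identity).
    """
    z, y, x = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    xs = x + k * np.sin(2 * np.pi * y / shape[1])
    ys = y + k * np.sin(2 * np.pi * z / shape[0])
    zs = z + k * np.sin(2 * np.pi * x / shape[2])
    return xs, ys, zs

def apply_warp_nn(vol, coords):
    """Resample `vol` at the warped coordinates with nearest-neighbor lookup."""
    xs, ys, zs = coords
    zi = np.clip(np.rint(zs).astype(int), 0, vol.shape[0] - 1)
    yi = np.clip(np.rint(ys).astype(int), 0, vol.shape[1] - 1)
    xi = np.clip(np.rint(xs).astype(int), 0, vol.shape[2] - 1)
    return vol[zi, yi, xi]

# Sanity check: k = 0 leaves the volume unchanged.
rng = np.random.default_rng(1)
vol = rng.random((8, 9, 10))
identity = apply_warp_nn(vol, sinusoidal_warp(vol.shape, 0.0))
```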

One registration result is shown in Figure 6. As a qualitative measure, we use the difference image between the result image and the reference image. It can be clearly seen that the HPV interpolation algorithm achieved a better result than the PV method, and that mis-registration was reduced significantly. For a quantitative evaluation, we calculated the correlation coefficient (CC) of the overlapping region and the root mean square (RMS) error between the recovered deformation Tq and the ground truth Tr. Table II summarizes the registration performance in terms of CC and RMS for five warps. The results show that HPV improves the CC and decreases the RMS, indicating that HPV interpolation can reduce local extrema in non-rigid registration.
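The two quantitative measures can be sketched directly from their definitions; the array shapes assumed here are illustrative:

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation coefficient between two images over their overlap."""
    a = np.asarray(a, float).ravel()
    b = np.asarray(b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

def rms_error(t_recovered, t_true):
    """RMS error between recovered and ground-truth displacement fields.

    Both arguments are (..., 3) arrays of displacement vectors (e.g. in mm).
    """
    diff = np.asarray(t_recovered, float) - np.asarray(t_true, float)
    return float(np.sqrt((diff ** 2).sum(axis=-1).mean()))

# A perfectly linear relation gives CC = 1; a constant 3-4-0 mm offset gives RMS = 5 mm.
a = np.arange(10.0)
cc = correlation_coefficient(a, 2 * a + 1)
rms = rms_error(np.zeros((5, 3)), np.tile([3.0, 4.0, 0.0], (5, 1)))
```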

Figure 6. Registration in the simulated data experiment. (a) Reference image. (b) Floating image. (c) Registration result for the ROI using PV. (d) Difference image between (c) and the ROI of (a). (e) Registration result for the ROI using HPV. (f) Difference image between (e) and the ROI of (a).


Table II.  Comparison of registration results with known warping.

We also tested the robustness of the algorithm by adding Gaussian noise with a variance of 0.01 to one of the floating images described above, with the 3D MR volume again used as the reference image. The CC and RMS values after PV were 0.4877 and 3.5332 mm, while the corresponding values for HPV were 0.7978 and 1.3714 mm, respectively. To observe directly the ability of HPV to reduce local extrema in non-rigid registration, we plotted the metric value against the iteration number for the two interpolation algorithms. Figure 7a shows the metric values using PV interpolation; the function curve oscillates, which indicates that the algorithm is trapped in a local extremum. Figure 7b shows the metric values using HPV interpolation; here the curve is smooth and well behaved.

Figure 7. The distribution of the metric values with iterations after PV and HPV interpolation algorithms. (a) Using PV interpolation. (b) Using HPV interpolation.


Real data experiment

In this experiment, we evaluated the performance of the semi-automatic registration algorithm on clinical CT/MR and ultrasound volume data of the liver and kidney. First, we selected control point pairs in our three-view interface to calculate the parameter matrices of the affine transformation. Non-rigid registration was then performed starting from the affine registration results, with the ROI covering part of the liver or kidney and 8 × 8 × 8 control points. To limit computer memory requirements and increase speed, the image intensities were rebinned to 64 bins for the datasets in this experiment.
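A minimal sketch of the rebinning step, assuming a simple linear mapping of the intensity range onto 64 bins (the exact binning scheme is not specified in the text):

```python
import numpy as np

def rebin_intensities(img, n_bins=64):
    """Map image intensities linearly onto n_bins histogram bins.

    Reducing the number of bins shrinks the joint histogram to
    n_bins x n_bins, limiting memory use and speeding up the
    mutual-information evaluation.
    """
    img = np.asarray(img, float)
    lo, hi = img.min(), img.max()
    bins = np.floor((img - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
    return np.clip(bins, 0, n_bins - 1)

# A full-range intensity ramp should populate all 64 bins.
img = np.linspace(0.0, 255.0, 1000)
b = rebin_intensities(img, 64)
```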

The CT images of the volunteer's liver were acquired on a Siemens CT system at the Sixth People's Hospital of Shanghai for use as the reference image, with an image size of 512 × 512 × 400. The 3D ultrasound axial slices were acquired on our HIFU system using a linear scan and served as the floating image. Because of obstruction by the ribs, the ultrasound volume data contained only part of the liver, comprising 480 × 456 × 340 voxels. An example of this registration is shown in Figure 8. For each imaging plane, corresponding to a row, the reference image, the floating image, the fused image after affine registration, and the deformed floating image after local non-rigid registration are all presented. A visual approach to evaluating the success of registration is to look for a consistent object in the fused images; qualitatively, a good match of the anatomical structures was apparent in all the imaging planes. For a quantitative evaluation, the CC value increased from 0.0625 to 0.1152 after local non-rigid registration using HPV interpolation.

Figure 8. Registration between CT and ultrasound images of the liver. (a) Reference image in axial (top), sagittal (middle) and coronal (bottom) planes. (b) Floating image in axial, sagittal and coronal planes. (c) Fused image after affine registration in axial, sagittal and coronal planes. (d) Deformed floating image after local non-rigid registration in axial, sagittal and coronal planes.


The 3D MR volume data of the volunteer's kidney referred to in the simulated data experiment was resampled to 256 × 256 × 272 voxels for use as the reference image. The 3D ultrasound axial slices containing the kidney were acquired on our HIFU system using a linear scan and served as the floating image, comprising 736 × 454 × 440 voxels. An example of this registration is shown in Figure 9. Again, a good match of the anatomical structures is apparent in the registration results, and the CC value increased from 0.2716 to 0.5065 after local non-rigid registration using HPV interpolation. We find that the semi-automatic non-rigid registration algorithm provides good results on real data. Two experts at the Sixth People's Hospital of Shanghai assessed the registration results on the clinical data and considered the new semi-automatic registration algorithm appropriate for clinical application. We therefore believe that, using our approach, the image guided surgery technique has potential for use in conjunction with the HIFU therapy system.

Figure 9. Registration between MR and ultrasound images of the kidney. (a) Reference image in axial (top), sagittal (middle) and coronal (bottom) planes. (b) Floating image in axial, sagittal and coronal planes. (c) Fused image after affine registration in axial, sagittal and coronal planes. (d) Deformed floating image after local non-rigid registration in axial, sagittal and coronal planes.


Conclusions

In this paper, we have proposed a semi-automatic non-rigid registration algorithm for image guided surgery navigation and localization in a HIFU therapy system, with the aim of decreasing the workload of operators and improving the accuracy of localization. The proposed combination of an affine transformation and an FFD model based on B-splines provides greater flexibility for modeling the motion of an object. We have validated our algorithm on both simulated and real data. The registration results show that non-rigid registration using HPV interpolation can effectively suppress the emergence of local extrema and can satisfy the requirements of a localization system in HIFU. However, given the poor quality of ultrasound images, future work will include the introduction of more spatial information and more specific intensity models into the similarity criterion in order to further improve the accuracy of registration.

Acknowledgments

The authors would like to thank The Sixth People's Hospital of Shanghai for providing the images.

Declaration of interest: This study is supported by funding from the National Natural Science Foundation of China (NSFC) (No. 61002046, No. 60972110 and No. 60972158), and the National Basic Research Program of China (973 Program) (No. 2010CB732506).

References

  • Gorny KR, Hangiandreou NJ, Hesley GK, Gostout BS, McGee KP, Felmlee JP. MR guided focused ultrasound: Technical acceptance measures for a clinical system. Phys Med Biol 2006; 51(12)3153–3173
  • McGough RJ, Kessler ML, Ebbini ES, Cain CA. Treatment planning for hyperthermia with ultrasound phased arrays. IEEE Trans Ultrason Ferroelectr Freq Control 1996; 43(6)1074–1084
  • Penney GP, Blackall JM, Hamady MS, Sabharwal T, Adam A, Hawkes DJ. Registration of freehand 3D ultrasound and magnetic resonance liver images. Med Image Anal 2004; 8(1)81–91
  • von Berg J, Kruecker J, Schulz H, Meetz K, Sabczynski J. A hybrid method for registration of interventional CT and ultrasound images. In: Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K, Reiber JHC, editors. Computer Assisted Radiology and Surgery. Proceedings of the 18th International Congress and Exhibition (CARS 2004), Chicago, IL, June 2004. Amsterdam: Elsevier; 2004. pp 492–497.
  • Wein W, Khamene A, Clevert DA, Kutter O, Navab N, Simulation and fully automatic multimodal registration of medical ultrasound. In: Ayache N, Ourselin S, Maeder AJ, editors. Proceedings of the 10th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2007), Brisbane, Australia, 29 October-2 November 2007. Part I. Lecture Notes in Computer Science 4791. Berlin: Springer; 2007. pp 136–143
  • Betrouni N, Vermandel M, Pasquier D, Rousseau J. Ultrasound image guided patient setup for prostate cancer conformal radiotherapy. Pattern Recognition Letters 2007; 28(13)1808–1817
  • Urschler M, Bauer J, Ditt H, Bischof H, SIFT and shape context for feature-based nonlinear registration of thoracic CT images. In: Proceedings of the 2nd International Workshop on Computer Vision Approaches to Medical Image Analysis (CVAMIA 2006), Graz, Austria, May 2006. pp 73–84
  • Shekhar R, Zagrodsky V. Mutual information-based rigid and non-rigid registration of ultrasound volumes. IEEE Trans Med Imag 2002; 21(1)9–22
  • Wang Y, Staib LH. Physical model-based non-rigid registration incorporating statistical shape information. Med Image Anal 2000; 4(1)7–20
  • Escalante-Ramirez B. The Hermite transform as an efficient model for local image analysis: An application to medical image fusion. Computers and Electrical Engineering 2008; 34(2)99–110
  • Zhu YM, Cochoff SM. An object-oriented framework for medical image registration, fusion, and visualization. Computer Methods and Programs in Biomedicine 2006; 82(3)258–267
  • Lee S, Wolberg G, Shin SY. Scattered data interpolation with multilevel B-splines. IEEE Trans Visualization Comput Graph 1997; 3(3)228–244
  • Lu XS, Zhang S, Su H, Chen YZ. Mutual information-based multimodal image registration using a novel joint histogram estimation. Comput Med Imaging Graph 2008; 32(3)202–209
  • Sanghvi NT, Hawes R, Kopecky K, Gress F, Ikenberry S, Cummings O, Zaidi S, Hennige C, High intensity focused ultrasound for the treatment of rectal tumors: A feasibility study. In: Proceedings of the 1994 IEEE Ultrasonics Symposium, Cannes, France, November 1994. pp 1895–1898
  • Zitová B, Flusser J. Image registration methods: A survey. Image and Vision Computing 2003; 21(11)977–1000
  • Zhang QB, Luo B, Wei S. Registration for feature point sets based on affine transformation. Journal of Image and Graphics 2003; 8(10)1121–1125
  • Sederberg TW, Parry SR. Free-form deformation and solid geometric models. Comput Graph (ACM) 1986; 20(4)151–160
  • Rueckert D, Sonoda LI, Hayes C, Hill DLG, Leach MO, Hawkes DJ. Non-rigid registration using Free-Form Deformations: Application to breast MR images. IEEE Trans Med Imag 1999; 18(8)712–721
  • Collignon A, Maes F, Delaere D, Vandermeulen D, Suetens P, Marchal G, Automated multimodality image registration using information theory. In: Bizais Y, Barillot C, editors. Proceedings of the 14th International Conference on Information Processing in Medical Imaging (IPMI ‘95), Ile de Berder, France, June 1995. Kluwer Academic Publishers; 1995. pp 263–274
  • Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P. Multimodality image registration by maximization of mutual information. IEEE Trans Med Imag 1997; 16(2)187–198
  • Vese LA, Osher SJ. Image denoising and decomposition with total variation minimization and oscillatory functions. J Mathematical Imaging & Vision 2004; 20(2)7–18
  • Lehmann TM, Gönner C, Spitzer K. Survey: Interpolation methods in medical image processing. IEEE Trans Med Imag 1999; 18(11)1049–1075
  • Maes F, Vandermeulen D, Suetens P. Comparative evaluation of multiresolution optimization strategies for multimodality image registration by maximization of mutual information. Med Image Anal 1999; 3(4)373–386
  • Rohlfing T, Maurer CR, Jr, Bluemke DA, Jacobs MA. Volume-preserving nonrigid registration of MR breast images using Free-Form Deformation with an incompressibility constraint. IEEE Trans Med Imag 2003; 22(6)730–741
