Abstracts from ISRACAS 2005

Eighth Israeli Symposium on Computer-Aided Surgery, Medical Robotics, and Medical Imaging Petach Tikva, Israel, May 19, 2005

Pages 43-49 | Published online: 06 Jan 2010

Invited lectures

Validation of medical image processing for computer aided surgery: Methodology and terminology

Pierre Jannin IDM, Medical School, University of Rennes, France

Image processing is used extensively in computer-aided surgery (CAS). The importance of validating image-processing methods in that context is now well established. Validation is required to highlight the intrinsic characteristics of a method, as well as to evaluate its performance and limitations. Moreover, validation clarifies the potential clinical contexts or applications that a method may serve. Validation may also demonstrate a method's clinical added value and estimate its social or economic impact.

The validation process in the context of CAS is diverse and complex. Different evaluation levels can be studied, from technical feasibility to societal impact. CAS systems involve many image-processing components, e.g., segmentation, registration, visualization, and calibration. Each component is a potential source of error. Therefore, validation should involve the study of the performance and validity of the overall system, the performance and validity of the individual components, and error propagation along the overall workflow. Clinical validation of CAS systems (in terms of large-scale multi-site randomized clinical trials) is difficult, since CAS is a recent technology and the required randomization raises ethical problems.

Validation is usually performed by comparing the results of a method or system with a reference that is assumed to be very close or equal to the exact solution. The main stages of reference-based validation are as follows. The first step is to clearly identify the clinical context and specify the validation objective. Then, the validation criteria to be studied, corresponding to the validation objective, should be chosen, along with the associated validation metrics that quantify those criteria. Validation data sets are chosen to provide access to the reference. The method of computing the reference should be specified, as well as the format of the inputs and output of the comparison between the reference and the results of the method applied to the validation data sets. The validation metric used for comparison is chosen according to its suitability for assessing the clinical validation objective. Quality indices are computed on the comparison output to characterize the properties of the error distribution. Finally, statistical test(s) are used to assess the validation objective. Throughout this process, attention must be paid to the accuracy of the reference, to the clinical realism of the validation data sets, and to the coherence between the validation data sets and the validation objective.
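As an illustration of these stages, the following sketch (Python, with purely hypothetical numbers) compares a method's output against a reference using one possible metric, derives quality indices from the error distribution, and applies a statistical test against an assumed 2 mm tolerance; the particular metric, indices, test, and tolerance are illustrative choices, not ones prescribed by this abstract.

```python
# Illustrative reference-based validation sketch (hypothetical data and choices).
import numpy as np
from scipy import stats

# Hypothetical per-case results: target positions computed by the method under
# test and the corresponding reference ("ground truth") positions, in mm.
method_out = np.array([[10.2, 5.1, 3.0], [9.8, 4.9, 2.7], [10.5, 5.3, 3.2]])
reference  = np.array([[10.0, 5.0, 3.0], [10.0, 5.0, 3.0], [10.0, 5.0, 3.0]])

# Validation metric: Euclidean target error per case.
errors = np.linalg.norm(method_out - reference, axis=1)

# Quality indices characterizing the error distribution.
indices = {"mean": errors.mean(), "std": errors.std(ddof=1),
           "max": errors.max(), "p95": np.percentile(errors, 95)}
print(indices)

# Statistical test against a clinically motivated tolerance (2 mm here is an
# assumed value): is the mean error significantly below the tolerance?
tolerance_mm = 2.0
t_stat, p_two_sided = stats.ttest_1samp(errors, tolerance_mm)
p_below = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2
print(f"t = {t_stat:.2f}, one-sided p (mean < {tolerance_mm} mm) = {p_below:.3f}")
```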

Issues concerning validation are numerous. Comparison of method performances requires the use of standardized, or at least rigorous, terminology and a common methodology for the validation process. Validation data sets with an available reference are needed. Mathematical and statistical tools are required for quantitative evaluation. Validation differs from general performance evaluation in that it evaluates the performance of a method in a precise clinical context and for a precise objective. Consequently, an understanding of the clinical issues is also important.

Improving validation methodology for medical image processing components for CAS could help improve understanding and interpretation of CAS system performance, increase the clinical acceptance of CAS systems, and facilitate technology transfer from the lab to the bedside.

The increasing role of computational anatomy and physiology in medical image analysis

Nicholas Ayache Epidaure/Asclépios Laboratory, INRIA, Sophia-Antipolis, France

Medical image analysis is bringing a revolution to the medicine of the 21st century, introducing a collection of powerful new tools designed to better assist clinical diagnosis and to model, simulate, and guide the patient's therapy more efficiently. A new discipline has emerged in computer science, closely related to others such as computer vision, computer graphics, artificial intelligence and robotics.

In this talk, I describe the increasing role of statistical and functional modeling to guide the interpretation of complex series of medical images, and illustrate my presentation with three applications: the modeling and analysis of 1) brain variability from a large database of cerebral images, 2) tumor growth in the brain, and 3) heart function from a combined exploitation of cardiac images and electrophysiology.

I conclude with a presentation of some promising trends, including the analysis of in vivo microscopic images.

Session 1: Surgical navigation and medical robotics

Computer-aided surgery in otolaryngology/head & neck surgery – The Hadassah experience

Ron Eliashar, M.D. Department of Otolaryngology/Head & Neck Surgery, Hebrew University School of Medicine, Hadassah Medical Center, Jerusalem, Israel

Endoscopic endonasal surgery (EES) has become the standard practice in sinonasal and anterior skull-base surgery. A total of 265 endoscopic endonasal procedures have been performed since the navigation platform arrived in April 2001. Computer-aided surgery (CAS) using the LandmarX System (LXS) was used in 63 patients (23.7%) in whom it was assumed that the ability to identify surgical sites accurately could be compromised by previous surgery, massive recurrent polyposis, or abnormal anatomy, or when biopsies had to be taken from specific anatomic locations (e.g., clivus, wall of the sphenoid sinus, orbital apex). In addition, two patients diagnosed with low-grade malignant tumors of the lower jaw were operated on using both the Image-Guided Implantology System (IGIS) and the LXS.

In 62 of 63 EES patients the surgical procedure was uneventful. One patient with an atelectatic maxillary sinus developed a minor complication of pre-septal orbital hematoma. In 94% of cases the image-guided navigation system provided localization with less than 2 mm of localization error (range: 1.1–2.0 mm; mean: 1.6 mm). In all cases the surgical team felt that the system increased the intraoperative safety factor for the patient. The overall operating room time at the end of the study was 10 minutes longer than with regular EES.

In the two mandibular patients the accuracy level of the navigation provided by the IGIS was less than 0.5 mm, while that of the LXS was in the range of 3–4 mm. Tumor resection was done following the IGIS navigation. Pathologic examination demonstrated resection with tumor-free margins.

Conclusion: CAS enables a new level of efficiency and safety in EES. Nevertheless, it is not advised for surgeons who are not familiar with regular EES. For the experienced endoscopist, however, CAS is a valuable new tool in complex procedures. In mandibular resections, unsynchronized mobility of the lower jaw compromises the accuracy of navigation based on preoperatively acquired computed tomography. Specialized dental computerized navigation systems employ teeth-supported tracking appliances, which update the position of the mandible throughout the surgery. This also enables more accurate registration, since the fiducial points are supported by the hard tissue of the teeth rather than by the elastic soft tissue of the skin. Specialized dental navigation systems (such as the IGIS) are therefore better suited to provide accurate navigation for surgery of the lower jaw.

Table-mounted versus bone-mounted reference frame attachment in navigation-assisted orthopedic surgery

I. Ilsar1, Y. Weil1, R. Mosheiff1, L. Joskowicz2, A. Peyser1, and M. Liebergall1 1Department of Orthopedic Surgery, Hadassah Hebrew University Medical Center, Jerusalem; and 2School of Engineering and Computer Science, The Hebrew University, Jerusalem, Israel.

Introduction: Fluoroscopy-based navigation systems enable surgeons to place implants while simultaneously verifying and correcting their position in multiple two-dimensional views. This facilitates implant placement in all planes with less radiation exposure and maximal accuracy. To enable a navigated procedure, a rigid tracker, termed the reference frame, is rigidly fixed to a stable bony structure. This may create technical obstacles such as interference with surgical instruments and the fluoroscope, and creates an additional – albeit small – operative site at which local wound complications may occur. As an alternative, we propose attaching the reference frame to the fracture table instead of the iliac crest, under the assumption that no relative motion occurs between the table-mounted reference frame and the target organ. We validated this assumption by comparing the navigation accuracy obtained with the reference frame fixed to the patient's bony anatomy and to the operating table.

Methods: The study population consisted of 10 patients with femoral neck fracture (AO/OTA 31B1, 31B2.1) who underwent fixation of the fracture with three cannulated 6.5-mm cancellous screws using fluoroscopy-based navigation. To measure accuracy during the navigated procedure, the following steps were performed: Step 1 – The patient was positioned on a fracture table and the reference frame was attached to the iliac crest with two 3-mm Schanz screws. Three guide wires used for cannulated screw fixation were inserted under fluoroscopy-based navigation. Step 2 – New fluoroscopic images were acquired with the guide wires in place. Step 3 – The navigated drill guide was placed over each guide wire to record the final navigated drill guide position. The resulting images include the actual guide wire positions (in lieu of the real implant) and the virtual trajectories of the navigated drill guide as computed by the navigation system. Ideally, when no relative motion occurs, these two positions should match completely; in practice, a small error appears. Validation of the navigation accuracy was performed by measuring the translational and angular deviations of the virtual trajectory from the real image of the implant on the same fluoroscopic image in the anteroposterior and lateral views. Step 4 – The reference frame was removed from the iliac crest and attached to the fracture table with the bars and clamps of an external fixator. Step 3 was then repeated. Finally, the recorded images were downloaded and analyzed, with all measurements reported in-plane. The two-tailed t-test was used for statistical analysis.
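The following sketch (Python, hypothetical numbers) illustrates the kind of comparison described in the Methods; the abstract does not state whether a paired or independent-samples test was used, so a paired two-tailed t-test on the same screws under both attachments is shown as one plausible reading.

```python
# Sketch of the statistical comparison described above (hypothetical numbers;
# a paired two-tailed t-test is shown as one plausible choice).
import numpy as np
from scipy import stats

# Translational deviations (mm) of the virtual trajectory from the guide wire,
# measured for the same screws under the two reference-frame attachments.
dev_iliac_crest = np.array([1.1, 0.6, 2.3, 1.0, 1.7, 0.4, 1.9, 1.2])
dev_table       = np.array([1.3, 0.8, 2.1, 1.5, 1.6, 0.7, 2.0, 1.1])

t_stat, p_value = stats.ttest_rel(dev_iliac_crest, dev_table)  # two-tailed
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value above the chosen significance level (e.g., 0.05) would support the
# conclusion that table mounting does not degrade navigation accuracy.
```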

Results: Data for 29 of the 30 screws are presented. For the anteroposterior view, when the reference frame was attached to the iliac crest, the average translational deviation of the trajectory from the inserted guide wire was 1.18 ± 0.92 mm at the entry site and 1.25 ± 1.53 mm at the trajectory tip. When the reference frame was attached to the fracture table, the average deviations were 1.24 ± 0.90 mm and 1.85 ± 1.37 mm, respectively. The differences were not statistically significant. The angular differences were 0.88 ± 0.82° in the iliac-crest-mounted reference frame group and 1.07 ± 0.82° in the table-mounted reference frame group, which was also not statistically significant. For the lateral view, when the reference frame was attached to the iliac crest, the average translational deviation of the trajectory from the inserted guide wire was 1.42 ± 0.88 mm at the entry site and 1.63 ± 1.25 mm at the trajectory tip. When the reference frame was attached to the fracture table, the average deviations were 1.26 ± 0.71 mm and 1.57 ± 0.85 mm, respectively. The differences were not statistically significant. Angular differences were 1.05 ± 0.84° in the iliac-crest-mounted reference frame group and 1.20 ± 0.80° in the table-mounted reference frame group, again not statistically significant.

Conclusion: In navigation-assisted cannulated screw fixation for femoral neck fractures, attaching the reference frame to the fracture table instead of to the iliac crest allows for similar accuracy of the navigation process with the possible benefit of reducing patient morbidity. This may have further application for table-mounted devices and navigated surgical instruments.

Miniature robot-based precise targeting system for keyhole neurosurgery: Concept and preliminary results

L. Joskowicz1, M. Shoham2,3, R. Shamir1, M. Freiman1, E. Zehavi3, and Y. Shoshan4 1School of Engineering and Computer Science, The Hebrew University of Jerusalem, Israel; 2Department of Mechanical Engineering, Technion, Haifa, Israel; 3Mazor Surgical Technologies, Caesarea, Israel; and 4Department of Neurosurgery, School of Medicine, Hadassah University Hospital, Jerusalem, Israel.

This paper describes a novel system for precise automatic targeting in minimally invasive neurosurgery. The system consists of a miniature robot fitted with a rigid mechanical guide for needle, catheter, or probe insertion. Intraoperatively, the robot is directly affixed to the patient's skull or to the head clamp. It automatically positions itself with respect to predefined targets in a preoperative CT/MRI image following a three-way anatomical registration with an intraoperative 3D laser scan of the patient's anatomical features. We describe the system architecture, surgical protocol, software modules, and implementation. Registration results on 19 pairs of real MRI and 3D laser scan data show an RMS error of 1.0 mm (std = 0.95 mm), computed in 2 seconds.
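For context, a minimal sketch of rigid point-based registration and of the RMS error figure quoted above is given below (Python, simulated points); the actual three-way registration pipeline between the MRI, the 3D laser scan, and the robot is considerably more involved.

```python
# Minimal sketch of rigid point-based registration and an RMS error report,
# assuming corresponding point sets are available (simulated data only).
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch method, no scaling): dst ~ R @ src + t."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

rng = np.random.default_rng(0)
mri_pts = rng.uniform(-50, 50, size=(200, 3))   # hypothetical MRI surface points (mm)
theta = np.deg2rad(10)                          # simulated pose of the laser scan
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
laser_pts = mri_pts @ R_true.T + np.array([2.0, -1.0, 3.0]) + rng.normal(0, 0.5, (200, 3))

R, t = rigid_register(mri_pts, laser_pts)
residuals = laser_pts - (mri_pts @ R.T + t)
rms = np.sqrt((np.linalg.norm(residuals, axis=1) ** 2).mean())
print(f"RMS registration error: {rms:.2f} mm")
```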

Sessions 2 & 3: Medical image processing

Evaluation of proximal femur bone mineral density using digitalized plain X-ray radiography of the hip

I. Ilsar1, A. Hareven1, I. Leichter2, L. Brocke1, O. Safran1, A.J. Foldes3, Y. Mattan1, and M. Liebergall1 1Department of Orthopedic Surgery, Hadassah Hebrew University Medical Center, Jerusalem; 2Jerusalem College of Technology, Department of Electro-Optics, Jerusalem; and 3Osteoporosis Center, Hadassah Hebrew University Medical Center, Jerusalem, Israel

Introduction: Many of the women affected by osteoporosis are not diagnosed until fractures occur. This is largely due to the lack of a convenient, reliable and inexpensive screening technique for the diagnosis of osteoporosis. The most widely accepted method for measuring bone mineral density (BMD) is dual-energy X-ray absorptiometry (DXA). However, the need for relatively expensive equipment and trained personnel reduces the accessibility of DXA as a routine screening tool for osteoporosis in the general population. Plain pelvic X-ray radiography is a simple and inexpensive examination. In principle, the gray level of the bone in the X-ray radiograph is related to the BMD. However, several factors render plain X-ray radiographs of the hip unsuitable for BMD measurements, mainly the variability in X-ray exposure levels and in the soft tissue surrounding the bone. In this study, we aimed to develop new modifications of plain X-ray radiography of the proximal femur, designed to compensate for some of these interfering factors.

Methods: The study population consisted of 99 women, divided into three groups: Group 1 (28 patients, mean age 77.8 ± 9.9 years) – elderly patients who were hospitalized due to a low-energy fracture of the neck of the femur; Group 2 (38 women, mean age 67.5 ± 9.1 years) – the first control group, elderly women without a fracture; and Group 3 (33 women, mean age 40.4 ± 7.34 years) – the second control group, young women. Each patient's left hip (the contralateral, non-fractured hip in Group 1) was radiographed with a brass step wedge positioned near the hip as a standard reference, using a computerized radiography system. A DXA examination of the same hip followed the plain radiograph. On each radiograph, regions of interest (ROIs) of the proximal femur were determined in concordance with the ROIs of the DXA examination. The mean gray level was measured for each ROI. Several geometric parameters of the proximal femur were measured: the neck-shaft angle, femoral neck width and length, and femoral head diameter. In addition, further regions were determined: three soft tissue regions surrounding the proximal femur (on the medial and lateral aspects of the femoral neck) and the various steps of the step wedge. The mean gray level was measured for these regions as well. Statistical comparisons among the three groups were performed using one-way analysis of variance with Sidak correction for multiple comparisons. Multiple linear regression was applied to predict the DXA values.

Results: The differences in the gray level of the various ROIs within the proximal femur were not statistically significant between any of the groups. However, correction of the bone gray level for the exposure level, performed by dividing the gray level of the ROI by that of the step wedge, resulted in statistically significant differences between Group 1 and either Group 2 or Group 3, but not between the two control groups. Similar results were obtained by correcting the gray level of the ROIs for that of the soft tissue. The DXA results were significantly lower in the fracture group than in the non-fractured elderly control group, whose results were in turn lower than those of the younger group.

A multiple R² of 0.62 was obtained when predicting the DXA value from the gray level of each ROI (corrected for the gray level of the step wedge), the soft tissue gray levels (also corrected), and the geometric measurements.
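The following sketch (Python, entirely synthetic data) illustrates the form of this multiple linear regression: corrected gray levels and geometric measurements as predictors, DXA-measured BMD as the response, and the multiple R² as the reported figure of merit.

```python
# Illustrative multiple linear regression sketch (synthetic, hypothetical data;
# the study's 0.62 was obtained on real measurements).
import numpy as np

rng = np.random.default_rng(1)
n = 99
roi_over_wedge = rng.normal(1.0, 0.1, n)    # ROI gray level / step-wedge gray level
roi_over_soft  = rng.normal(1.2, 0.1, n)    # ROI gray level / soft-tissue gray level
neck_shaft_deg = rng.normal(130, 5, n)      # neck-shaft angle (degrees)
neck_width_mm  = rng.normal(35, 3, n)       # femoral neck width (mm)

# Synthetic "DXA BMD" response with noise, just to exercise the fit.
bmd = (0.5 * roi_over_wedge + 0.2 * roi_over_soft
       + 0.001 * neck_width_mm + rng.normal(0, 0.05, n))

X = np.column_stack([np.ones(n), roi_over_wedge, roi_over_soft,
                     neck_shaft_deg, neck_width_mm])
coef, *_ = np.linalg.lstsq(X, bmd, rcond=None)
pred = X @ coef
r2 = 1 - ((bmd - pred) ** 2).sum() / ((bmd - bmd.mean()) ** 2).sum()
print(f"Multiple R^2 = {r2:.2f}")
```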

Discussion: This study shows that after correction for the exposure level and the soft tissue surrounding the bone, a plain digital radiograph of the pelvis can provide valuable information concerning the bone mineral content of the proximal femur. These preliminary results warrant further research aimed at exploring the potential value of this fast, accessible and relatively inexpensive technique for diagnosing osteoporosis and predicting future fractures.

The accuracy of digital (filmless) templating in total hip replacement

William J. Murzic, M.D., Zeev Glozman, B.S., Paula Lowe, R.N., and Priya Hirway, M.S. Ortho-Crat, Ltd., Israel

Preoperative templating has been useful for determining the correct size of prosthesis in cementless total hip replacement. Typically, this has been accomplished with good success using acetate overlays on plain radiographs. With the advent of digital X-ray imaging and PACS, software has been developed that incorporates the templates of many different vendors into a program enabling the surgeon to determine the size of the intended femoral and acetabular components without hard-copy radiographs. We compared the two techniques to assess the relative accuracy of digital templating.

A total of 40 cases were analyzed, comparing the preoperatively templated sizes with the actual sizes of the prostheses implanted at surgery. Both acetabular and femoral component sizes were reviewed. Magnification markers were used in all cases, and all templating and surgery was performed by one surgeon. Twenty hips that were templated using acetate overlays on radiographs were compared to 20 hips that were templated using digital templating software on a PACS workstation (TraumaCad, Novapacs, Salt Lake City, UT). A Synergy femoral component and a Reflection cup (Smith and Nephew, Memphis, TN) were used in all cases. Preoperative templating data were compared with the prosthesis sizes recorded in the operative notes.

Using standard templating, 30% of implanted stems were the same size as templated, 65% were within one size, and 5% were within two sizes. With digital templating, 60% were the same size, 35% were within one size, and 5% were within two sizes. For acetabular components templated using acetate overlays, 50% of implanted cups were the same size as templated, 45% were within 2 mm, and 5% were within 4 mm. With digital templating, 45% were of identical size, 35% were within 2 mm, and 20% were within 4 mm. All postoperative films showed good fit of the components, and there were no intraoperative or postoperative fractures.

This preliminary study, using recently developed digital templating software, showed no significant differences when compared to the standard technique using magnified radiographic overlays. Use of this templating software was safe and effective.

In total hip replacement, preoperative templating provides valuable information about anatomy and appropriate implant size. Having this information prior to surgery improves surgical accuracy, reduces the incidence of fractures, and decreases operative time. With the increasing demand for digital imaging and PACS, digital templating will become more prevalent.

DT-MRI partial volume effects reduction using the multiple tensor variational framework

Ofer Pasternak1, Nir Sochen2 and Yaniv Assaf 3,4 1School of Computer Science; 2Department of Applied Mathematics, Tel-Aviv University; 3Department of Neurobiochemistry, Faculty of Life Sciences, Tel-Aviv University; and 4Edersheim-Levi-Gitter Institute for Functional Human Brain Imaging, Tel-Aviv Sourasky Medical Center and Tel Aviv University, Tel Aviv, Israel.

Background: Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) has become a popular tool for analysis of Diffusion Weighted Magnetic Resonance Images (DW-MRIs). It provides quantitative measures for diffusion anisotropy of water molecules and the ability to delineate and visualize major cerebral neuronal pathways. It is often used as a pre-operative tool in brain surgery. The mathematical model of DT-MRI was found to be inappropriate in cases of partial volume, where more than one type of diffusion compartment resides in the same voxel. Among the artifacts related to partial volume are fiber orientation ambiguity and cerebro-spinal fluid (CSF) contamination, both of which damage the credibility and effectiveness of in-vivo neuronal fiber delineation. MRI voxel dimensions are much larger than those of neuronal cells, resulting in partial volume effects. In addition, the borders between different tissues are not aligned with the grid determined by the voxels. Therefore, partial volume effects reduction requires diffusion models that permit higher tissue complexity. However, when a model allows more geometric freedom it demands more free parameters, and the fitting process becomes ill-posed.

Methods: In this work we delineate neuronal fibers using the Multiple Tensor Variational (MTV) framework. The framework encapsulates a multiple tensor model in a functional and adds biologically driven constraints in the form of regularization terms. The multiple tensor model assumes that each voxel contains a number of separate diffusion compartments. The regularization terms added by the MTV framework constrain the shape of the compartments, thus stabilizing the fitting process. The minimization of the MTV functional results in a set of diffusion tensors and their relative weights that best describe the MR signal attenuation. The Euler-Lagrange equations are solved via gradient descent, which produces a set of diffusion-reaction partial differential equations (PDEs). The PDE flow leads to the minimization of the functional while preserving the tensors' attributes. The regularized fitting results in separated compartments and also reduces noise by smoothing neighborhood variations.
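For orientation, the multi-tensor signal model on which such a fit is based can be written in its standard form as follows; this is a generic formulation given for context, and the exact MTV functional and its regularization terms are not reproduced here.

```latex
\frac{S(\mathbf{g}_k, b)}{S_0}
  = \sum_{i=1}^{n} f_i \,
    \exp\!\left(-\,b\,\mathbf{g}_k^{\mathsf{T}} D_i \,\mathbf{g}_k\right),
\qquad \sum_{i=1}^{n} f_i = 1, \quad f_i \ge 0,
```

where S(g_k, b) is the diffusion-weighted signal measured along gradient direction g_k with b-value b, S_0 is the non-diffusion-weighted signal, and each compartment i contributes a diffusion tensor D_i with relative weight f_i. The MTV regularization terms then constrain the shapes of the D_i and smooth them spatially, as described next.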

By adjusting the constraints enforced by the MTV framework, we were able to use the same framework for reducing either fiber orientation ambiguity or CSF contamination. When resolving fiber ambiguity, the compartments were constrained to be anisotropic, cylindrically shaped ellipsoids, each resembling a single fiber orientation. To resolve CSF contamination, one of the compartments was constrained to be an isotropic, spherically shaped ellipsoid with radii similar to those found for free water diffusion. The remaining compartment was assumed to be free of CSF contamination, thus reflecting the diffusion of tissue water rather than CSF, and this uncontaminated compartment was used for the fiber delineation.

Results: The MTV framework was tested on its ability to resolve fiber ambiguity using synthetic data resembling crossing neuronal fibers and a phantom with known fiber orientations. The phantom was built to have areas of homogeneous neuronal fiber matter and areas with partial volume due to multiple fiber orientations. We show how MTV resolves the fiber ambiguity in the synthetic data and yields more accurate fiber orientations for the phantom than common DT-MRI analysis. Using the resulting fiber orientations, we were able to successfully trace fibers through fiber crossings.

The ability of MTV to reduce CSF contamination is demonstrated in patients with hydrocephalus, where DT-MRI encounters high CSF contamination. We show that MTV found larger anisotropic areas, especially in proximity to the CSF-filled ventricles, while maintaining the contrast between fibrous anisotropic areas and isotropic tissue. A large portion of the neuronal fiber pathways is obscured by CSF contamination in hydrocephalus patients; therefore, the additional anisotropic voxels identified by MTV proved helpful for appropriate analysis in these patients.

A new warping grid-based method for surface reconstruction of medical models

Sergei Azernikov and Anath Fischer Technion - Israel Institute of Technology, Haifa, Israel

Volumetric implicit models of 3D objects have recently been introduced into the process of reconstruction from scanned data. Grid-based methods are considered the major technique for reconstructing surfaces from volumetric models, mainly due to their efficiency and simplicity. However, these methods suffer from a number of inherent drawbacks, resulting from the fact that the imposed Cartesian grid is, in general, not well adapted to the surface. Therefore, a novel grid-based iso-surface extraction method is proposed, in which the imposed volumetric grid is deformed adaptively according to the object's shape. This adaptation improves the quality of surfaces reconstructed from the volumetric models.

The typical reconstruction process applied to scanned data consists of four main phases: a) scanning the 3D object; b) registration of the data into a single point cloud; c) meshing the point cloud; and d) high-level CAD model creation. This approach is suitable for objects with simple topology. However, meshing a cloud scanned from a complex object is difficult. Moreover, the point clouds are often incomplete and noisy, making the meshing process very unstable. To deal with these obstacles, volumetric approaches were introduced, in which the implicit volumetric model is first reconstructed from the scanned points. In contrast to a piecewise-linear triangular mesh, this model is a piecewise-smooth and mesh-independent representation of the unknown surface. For downstream applications such as visualization and analysis, however, an explicit mesh representation may be required. Therefore, the implicit representation is converted into an explicit one by iso-surface extraction, or contouring.

We have improved the performance of grid-based implicit surface contouring methods by adapting the imposed volumetric grid to the anisotropic metric field induced by the surface shape. With the proposed approach, the process begins by reconstructing an implicit model from the scanned point cloud. In the current work, the multi-level partition of unity method is used: a set of overlapping quadric patches is fitted to the cloud, and these patches are then blended to produce a piecewise-smooth implicit representation of the surface. In the next phase, a background octree is constructed in order to represent the geometric tensor field; the field is then evaluated and propagated on this octree. As a result, a metric tensor is defined for each point in the problem domain. Afterwards, the uniform grid is adapted geometrically by relaxing the vertex positions, with edge lengths calculated in the resulting anisotropic metric. Finally, the iso-surface mesh is extracted using one of the common grid-based contouring techniques (a minimal contouring sketch is given after the list below). This approach has the following important advantages:

The geometric adaptation of the grid is shape-driven and not axis-aligned as with octrees. Therefore, complex geometric features can be recovered with a lower number of voxels.

The geometric adaptation only perturbs the grid's geometry while keeping the structured topology. As a result, this adaptation can be used as a preprocessor for any existing grid-based contouring method.

The meshes extracted from the adapted grids exhibit much higher quality than those extracted from the original grids with the same number of voxels. Moreover, dual contouring of the adapted grids produces all-quad, anisotropic meshes.
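The following sketch (Python, using scikit-image) illustrates only the final grid-based contouring step on a uniform grid, with a signed-distance sphere standing in for the reconstructed implicit model; the shape-driven grid adaptation that constitutes the contribution described above is not reproduced here.

```python
# Minimal sketch of grid-based iso-surface extraction: sample an implicit
# function on a volumetric grid and extract the zero level set with marching
# cubes. A uniform (non-adapted) grid is used for illustration only.
import numpy as np
from skimage import measure

# Implicit model: signed distance to a sphere of radius 0.6, standing in for
# the multi-level partition-of-unity reconstruction of a scanned surface.
n = 64
xs = np.linspace(-1, 1, n)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
phi = np.sqrt(X**2 + Y**2 + Z**2) - 0.6

# Extract the zero iso-surface as a triangle mesh.
verts, faces, normals, values = measure.marching_cubes(
    phi, level=0.0, spacing=(xs[1] - xs[0],) * 3)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```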

Industrial session

Capsule endoscopy

Rafi Rabinovitz, Ph.D. Given Imaging Ltd.

Traditional methods aimed at direct visualization of the digestive tract do not provide information on the small bowel, which is approximately 6 m in length. Furthermore, visualization of other parts of the digestive tract, i.e., the esophagus, stomach and colon, requires inconvenient procedures with long endoscopes (flexible tubes) that are advanced into the digestive tract through the throat or rectum.

Given Imaging Ltd. is developing patient-friendly products for visualizing gastrointestinal disorders. The company's technology platform is the Given Diagnostic System, featuring a single-use miniature video camera placed in a capsule (PillCam®), 11 × 26 mm, which the patient swallows. The PillCam acquires several color video images per second as natural peristalsis propels it through the digestive tract, and transmits them to a data recorder worn on the patient's waist. A typical capsule endoscopy of the small bowel provides 50,000 images over a period of up to 7 hours, which are stored on the data recorder. After the capsule is ingested, the patient is not restricted to the medical environment. At the end of the procedure, the stored data are loaded into a dedicated PC workstation equipped with special application software (Rapid®) for processing, presenting and storing the images and for generating medical reports.
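These figures imply an average acquisition rate of roughly two images per second,

```latex
\frac{50{,}000 \ \text{images}}{7 \ \text{h} \times 3600 \ \text{s/h}} \approx 2 \ \text{images per second},
```

in line with the acquisition rate described above.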

Currently, Given Imaging sells video capsules for the small bowel (PillCam®-SB) and the esophagus (PillCam®-ESO). PillCam capsules provide a naturally ingested means of direct visualization of the entire small bowel and esophagus. PillCams are currently marketed in more than 60 countries and have benefited more than 150,000 patients.

CardiOp-B system for 3D coronary artery reconstruction from a few 2D X-ray angiographic projections

Michael Zarkh and Moshe Klaiman Paieon Medical Ltd., Israel

As opposed to the computed tomography approach, which provides 3D volume images, the standard coronary angiography procedure nowadays uses 2D X-ray angiographic projections. CardiOp-B is a system for 3D reconstruction of coronary vessel segments based on single-plane angiography. The system is intended for use in real time within the standard angiography procedure, and overcomes the limitations inherent in 2D analysis of 3D vessel anatomy.

The system reconstructs a 3D coronary vessel segment, unfolding its true morphology and providing accurate dimensions, in particular an accurate quantitative lesion analysis. The 3D vessel segment is represented as a tree of generalized tubular organs given by a tree of 3D centerlines and local radii at every centerline point. To obtain the 3D reconstruction, the following algorithm steps are performed.

A 2D analysis of two or three ECG-gated images from different perspectives is carried out. At this stage, the 2D centerlines and edges of the artery are extracted and the corresponding radii are calculated. In addition, a densitometry measurement attaches a cross-sectional area value to every 2D centerline point. The densitometry is based on gray-value analysis around the vessel of interest within each single image. The 2D analysis stage requires minimal user interaction to define the vessel of interest, using three marking points per image.

The next algorithmic step is a point-by-point matching of the 2D centerlines. Once this matching is done, the 3D centerline point is computed simply as the intersection (in a generalized sense) of the projection lines of the matched 2D points. The matching of 2D centerlines is a nontrivial task for the following reason: the coronary artery moves with the heartbeat phase and with breathing. Because the video sequence rate is limited and breathing is usually not under control, local distortions arise that are not covered by the geometry model, even if the imaging system is well calibrated and the imaging geometry model is known exactly. We propose an approach based on a simple geometry model (orthographic projection), together with a technique that overcomes local distortions and provides a precise match.
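To make the "generalized intersection" concrete, the sketch below (Python, hypothetical geometry) computes the least-squares point closest to the projection lines of matched 2D points; in the actual system the line origins and directions come from the calibrated single-plane acquisition geometry.

```python
# Sketch of the "generalized intersection" used to place a 3D centerline point:
# the least-squares point closest to the projection lines of matched 2D points.
# Line origins and directions here are hypothetical placeholders.
import numpy as np

def closest_point_to_lines(origins, directions):
    """Point minimizing the sum of squared distances to a set of 3D lines."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Two projection lines from two X-ray perspectives (hypothetical geometry).
origins = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0]])
directions = np.array([[0.3, 0.2, 1.0], [-0.4, 0.1, 1.0]])
print(closest_point_to_lines(origins, directions))
```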

After the 3D centerline has been reconstructed, the 3D radius for every 3D centerline point is calculated. The 3D radius value aggregates densitometry and radius values coming from every 2D source that participated in 3D reconstruction. The quantitative information optimally combines 2D measurements, taking into consideration the local 3D skeleton orientation and viewing geometry.

The CardiOp-B system underwent comprehensive validation and has received FDA and CE approval. The clinical studies have demonstrated that our system is precise, robust and easy to operate.

PoleStar N20 – MR image guidance system

R. Ben-Kish Odin Medical Technologies, Yoqneam, Israel

PoleStar N20 is an image guidance system featuring both intraoperative MRI and optical navigation capabilities. Its main application today is neurosurgical, but since it is easily integrated into any general operating room, it also has considerable potential for other applications.

The MRI scanner is based on a mobile, open 0.15T permanent magnet. It supports a wide range of MRI sequences in a field of view that covers the whole head. Images are easily acquired at any stage of the surgery, providing the surgeon with a powerful means to plan the surgical procedure and perform accurate resection as the surgery progresses. MR images taken in the OR help the surgeon distinguish between benign and malignant tissue, even in cases where they are visually indistinguishable.

The navigation employs an IR camera that tracks pointing devices and other surgical instruments. It also constantly follows the movement of the magnet and the patient on the operating table, providing accurate and reliable navigation throughout the surgery, despite the changing anatomic environment. Imaging performed during the progress of the surgery allows the surgeon to navigate on the basis of up-to-date information, thereby overcoming the brain-shift problem.

The PoleStar N20 is the second generation of the PoleStar family. The first generation, the PoleStar N-10, was installed at 22 sites between 2000 and 2004. The new generation offers a larger field of view and improved image quality and navigation options. More than 2000 surgeries have already been performed at more than 30 PoleStar N-10 and PoleStar N20 sites throughout the US, Europe and Israel. The PoleStar N20 system supports many neurosurgical procedures, including resection of low- and high-grade gliomas, transsphenoidal pituitary surgery, posterior fossa surgery, biopsies, shunt placement and more.

VRI: New imaging modality for the lungs

Igal Kushnir, Meir Botbol and Alon Kushnir Deep Breeze, 2 Hailan St., Industrial Park, Or-Akiva, Israel

We present a new imaging technology for the human body that is radiation-free and organ-oriented. It is called Vibration Response Imaging (VRI). Unlike magnetic resonance imaging (MRI), X-ray or ultrasound, VRI uses passive vibration energy that is naturally created in organs to produce a dynamic image of the organ.

The development of the first VRI device, for the lungs, was based on the finding that lung vibration energy correlates directly with lung airflow. Because VRI constructs an image from the amplitude, frequency, intensity and timing of lung vibrations caused by airflow, any change in these parameters should be reflected in the image. Moreover, structural and functional alterations, such as bronchial obstruction or space-occupying lesions such as lung cancers, are reflected by a corresponding modification of the vibration response. VRI records these lung vibrations and displays an image of the vibration characteristics of the lungs. Accumulating the vibration energy of the lungs requires full coverage, achieved by attaching 40 specially designed piezo-electric pressure sensors over the back; the sensors are attached by a low-vacuum, computer-controlled method.

The VRI device uses several stages of filtering to select the frequency band that typically represents the lung. The filtering also reduces distortion (e.g., background noise and undesirable signals originating from the heart and muscle) and enhances the signal by suppressing all other frequency components. The filtering process includes band-pass filtering (100–250 Hz), which passes only the desired range of frequencies; median filtering, which suppresses impulse noise; and truncation of samples above a given threshold. Finally, singular value decomposition (SVD) is used to identify only the meaningful underlying variables. The output of the time-domain filtering reflects the vibration energy, obtained by integrating the filtered signals over a certain time interval. These samples are then processed to produce a spatial, two-dimensional image, which is assumed to be governed by the solution of the diffusion equation. For data acquisition, we have developed a special 64-channel analog-to-digital converter that enables filtering, amplification and conversion of the analog signals into digital data, with 16-bit acquisition and a variable sampling rate (4–20 kHz).

The VRI graphic representation generates a gray-level-coded spatial representation of the lung vibrations. High data values, where the lung vibration energy is greatest, are depicted in dark colors (black), and low data values are shown in light colors (light gray); the minimum is defined as white. This representation enables the viewer to follow the dynamic development of the lung in the image. An additional advantage of VRI is that it also detects and records lung sounds: each channel can be examined by the VRI algorithm for crackles (discontinuous abnormal lung sounds) and wheezes (continuous abnormal lung sounds), and for automatic breathing-cycle selection. Crackles and wheezes identified by the algorithm are presented in the display as colored dots on the lung image.
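A minimal sketch of the per-channel filtering chain described above is given below (Python with SciPy, simulated data); the parameter values other than the 100–250 Hz band are assumptions, and the SVD step and the diffusion-equation-based image formation are omitted.

```python
# Sketch of the per-channel processing chain: band-pass 100-250 Hz, median
# filtering for impulse noise, amplitude truncation, and energy integration
# over short time windows. All signal values and window lengths are hypothetical.
import numpy as np
from scipy import signal

fs = 8000                                    # sampling rate (Hz), within the stated 4-20 kHz range
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
x = rng.normal(0, 1, t.size)                 # hypothetical raw sensor signal

sos = signal.butter(4, [100, 250], btype="bandpass", fs=fs, output="sos")
x_bp = signal.sosfiltfilt(sos, x)            # keep the lung-related frequency band
x_med = signal.medfilt(x_bp, kernel_size=5)  # suppress impulse noise
x_trunc = np.clip(x_med, -3 * x_med.std(), 3 * x_med.std())  # truncate outliers

# Vibration energy per 25 ms window (integration of the squared filtered signal).
win = int(0.025 * fs)
energy = np.array([np.sum(x_trunc[i:i + win] ** 2)
                   for i in range(0, x_trunc.size - win, win)])
print(energy[:5])
```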

With ethical committee approval, VRI was studied in more than 200 human subjects, both healthy and with various lung pathologies. Dynamic lung images from the VRI were compared with existing gold-standard imaging technologies, and it has been possible to discern specific VRI signs for the different lung abnormalities. A test of 20 healthy subjects found a significant, direct correlation between the lung vibration energy and the actual airflow, providing evidence that VRI can also produce quantitative measurements of the lung. VRI is also being tested for imaging other organs of the human body, such as the heart, by using their own intrinsic vibrations.

Pedicle screw insertion with the SpineAssist miniature robotic system

Ori Hadomi, Avi Posen and Moshe Shoham Mazor Surgical Technologies, Caesarea, Israel

This presentation describes a new approach to medical robotics using a miniature robot that is mounted directly on the patient's anatomy. It is currently indicated and FDA-approved for spinal applications. Attaching the robot to a vertebra renders the robot and spine a single rigid body, so that patient movement and breathing do not change the location of the robot relative to the vertebra, thus providing significantly higher accuracy in the insertion of implants such as pedicle screws. In addition, the proposed system is semi-autonomous, designed to accurately guide and assist the surgeon; the actual surgical procedure is still performed by the surgeon, who remains in full control at all times.

Cadaver and clinical cases performed with the system over the last several months, during which dozens of screws were inserted, show high accuracy of implant location with respect to the planned location, in the range of 1 mm. The system's potential to enable minimally invasive percutaneous procedures is also addressed.

3D ultrasound: Visualization technology and medical added value

Ziv Soferman BIOMEDICOM, Creative Biomedical Computing Ltd., Israel

Three-dimensional ultrasound provides clear pictures of anatomical structures that may be understood by non-experts in ultrasound, by practitioners of other disciplines, or even by the patients themselves. However, it is still questioned whether 3D ultrasound has added clinical value for the ultrasound expert compared with classical 2D ultrasound.

BIOMEDICOM's product is an add-on to any 2D ultrasound system (connecting to the standard video-output port) that upgrades it to 3D imaging capability. This upgrade is achieved as follows: 1) The user acquires a series of 2D images in a typical fan acquisition protocol, using a gyroscopic orientation sensor attached to the ultrasound probe. 2) Reconstruction of the 2D set of images, using the corresponding geometric information obtained with the gyroscopic sensor, results in a 3D image of the scanned region. 3) Optionally, the user may perform an almost automatic segmentation of the organ of interest. 4) The user may invoke ordinary tools such as a bounding box and other navigation/orientation utilities. 5) Three visualization modes are available: "smooth" surface-volume rendering, volume rendering with transparency, and multi-planar representation (MPR).
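As a rough sketch of step 2, the following code (Python, hypothetical geometry) compounds a fan of 2D slices into a regular 3D volume using a per-slice tilt angle such as an orientation sensor would provide; the company's actual reconstruction is not described at this level of detail here, so this is only an illustration of the principle.

```python
# Sketch of fan-based volume compounding (hypothetical geometry): each 2D slice
# lies in the x-z plane and is rotated about the probe (x) axis by its tilt
# angle; pixel values are scattered into the nearest voxel and averaged.
import numpy as np

def compound_fan(slices, angles_deg, vol_shape=(64, 64, 64), scale=1.0):
    vol = np.zeros(vol_shape)
    counts = np.zeros(vol_shape)
    h, w = slices[0].shape
    # Pixel coordinates in the slice plane (y = 0 before rotation).
    u, v = np.meshgrid(np.arange(w) - w / 2, np.arange(h), indexing="xy")
    for img, ang in zip(slices, np.deg2rad(angles_deg)):
        x = u * scale
        y = v * scale * np.sin(ang)
        z = v * scale * np.cos(ang)
        ix = np.round(x + vol_shape[0] / 2).astype(int)
        iy = np.round(y + vol_shape[1] / 2).astype(int)
        iz = np.round(z).astype(int)
        ok = ((0 <= ix) & (ix < vol_shape[0]) &
              (0 <= iy) & (iy < vol_shape[1]) &
              (0 <= iz) & (iz < vol_shape[2]))
        np.add.at(vol, (ix[ok], iy[ok], iz[ok]), img[ok])
        np.add.at(counts, (ix[ok], iy[ok], iz[ok]), 1)
    return np.divide(vol, counts, out=np.zeros_like(vol), where=counts > 0)

# Hypothetical fan of 21 slices spanning -30 to +30 degrees.
slices = [np.random.rand(64, 64) for _ in range(21)]
volume = compound_fan(slices, np.linspace(-30, 30, 21))
print(volume.shape)
```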

The segmentation algorithm is unique in its ability to isolate organs or other anatomical structures in difficult cases, such as separating the fetus from the placenta (as compared with the rather easy separation of the fetus from the surrounding fluid). The algorithm succeeds even when the ultrasound images are corrupted by speckle and shadows.

Two clinical examples demonstrate the added value of 3D ultrasound. The first example is follow-up imaging of a patient who had undergone an ablation procedure for a tumor in the liver. In the follow-up session, 2D ultrasound wrongly suggested that the "hole" at the location of the ablated tumor appeared fine and that no traces of the tumor remained; on that basis, the patient would eventually have been discharged. However, 3D ultrasound was then performed after injection of a contrast agent, and in the 3D image formed by grouping the series of 2D images, it was found that the boundary of the "hole" had a layer or "shell" with an excess of blood vessels. This "shell" provided evidence that the tumor had not, in fact, been fully ablated, and a further procedure was therefore performed on the spot to complete the ablation. The "shell" was too weak to be identified in any of the 2D slices and was visible only when the slices were grouped into a combined 3D image.

Ultrasound contrast agents were also used in the second example. Here, a dense vascular structure obscured the malignant part, and in 2D it was very difficult to appreciate the 3D vascular structure. In an ordinary 3D volume rendering, the vascular structure would have appeared too dense, precluding observation of its malignant center. In transparency mode, however, the brighter center could be enhanced by making the less-bright parts more transparent, so that the malignant bright center became visible and distinguishable from all the other parts. This transparency mode proves very valuable when the object of interest is vascular and contrast agents are applied to emphasize the brightness of the desired object or structure.

Three-dimensional ultrasound, together with the suitable visualization method, proves to be of added value when the desired object has a complex 3D structure, as in a complex 3D vascular structure, or when the phenomena to be observed can be identified in 3D but not in any one 2D slice.
