
Advanced Imaging and Robotics Technologies for Medical Applications

Pages 299-321 | Published online: 14 Dec 2011

Abstract

Due to the importance of surgery in the medical field, a large amount of research has been conducted in this area. Imaging and robotics technologies provide surgeons with an advanced eye and hand to perform their surgeries in a safer and more accurate manner. Recently, medical images have been utilized in the operating room as well as in the diagnostic stage. If the image-to-patient registration is done with sufficient accuracy, medical images can be used as “a map” for guidance to the target lesion. However, the accuracy and reliability of a surgical navigation system should be sufficiently verified before applying it to the patient. Along with the development of medical imaging, various medical robots have also been developed. In particular, surgical robots have been researched in order to reach the goal of minimal invasiveness. The most important factors to consider are the clinical demand, the strategy for their use in operating procedures, and how they aid patients. In addition to these considerations, medical doctors and researchers should always think from the patient's point of view. In this article, the latest medical imaging and robotic technologies focusing on surgical applications are reviewed based upon the factors described above.

1. INTRODUCTION

Imaging and robotics technologies have substantially contributed to human health and welfare. Particularly in the area of surgery, these technologies serve as an additional assistive eye and hand for surgeons (Dohi Citation1995; Gunkel et al. Citation1999; Liao et al. Citation2010; Su et al. Citation2009). By using imaging technologies such as computed tomography (CT) or magnetic resonance imaging (MRI), physicians can see the inside of the body or organs that are not normally visible to the human eye. These images also provide pathological information that can hardly be obtained by direct vision. When the surgeon is not confident about the complete removal of a tumor, an intra-operative MRI may reveal the existence of a tumor remnant (Hong et al. Citation2007). A recent trend in medical imaging is the expansion of its application area to the surgical field. The main purpose of medical imaging has been diagnosis before treatment; increasingly, however, medical images are referred to directly during surgery as an image guidance tool (Caversaccio et al. Citation2000; Caversaccio and Freysinger Citation2003; Gumprecht et al. Citation1999; Labadie et al. Citation2005; Morioka et al. Citation1999; Seemann et al. Citation2005). In this review, we will address major image modalities and their applications for surgery, particularly in terms of image guidance.

Medical robots have also been developed for various surgical applications. The da Vinci surgical robot has already been widely recognized and has proved its usefulness in urology. Numerous surgical robots have been researched to reach the goal of minimal invasiveness (Stoianovici et al. Citation2007; Labadie et al. Citation2004; Strauss et al. Citation2007). After the period of laparoscopic surgery, which is the main application of the da Vinci robot, single-incision laparoscopic surgery, or single-port surgery, could become an alternative to laparoscopic surgery in the future. To accommodate this demand, highly advanced mechanisms and control schemes will be required. Another trend in medical robots is the integration of image guidance. If the imaging and robot systems are delicately combined, surgical planning, simulation, training, and navigation can be done with the same data and platform. Simplifying and organizing the high complexity within and between the various computer-based systems is a challenge that must be met before medical robots spread further into real clinics.

This article reviews the latest imaging and robotic technologies focused on the surgical area. Various image modalities for treatment purposes are introduced in Section 2, and the topic of image-guided surgery is described in more detail in Section 3. The advantages and risks of image guidance are also discussed. In Section 4, medical robotics, focusing on image-guided robots and endoscopic surgery robots, is introduced along with an outlook for future medical robots. Requirements and matters of consideration in the development of medical robots will also be suggested.

2. MEDICAL IMAGES

A number of image modalities have been developed. Each image modality has its own features; therefore, doctors need to select the most suitable one for the patient. In this section, the basic principle of each modality and its advantages are addressed, particularly for treatment or surgery.

2.1. Medical Ultrasound

Medical ultrasound (US) utilizes the amplitude and elapsed time of echo signals to reconstruct 2-D images. It is the safest imaging modality and nearly harmless to the human body. The cost is low and the size is small; therefore, it has been installed in many hospitals and clinics. Another strong point of US is the capability of real-time imaging, which is not available in computed tomography (CT) or magnetic resonance imaging (MRI). It also has high sensitivity and resolution, although it has low specificity. We can recognize a suspicious region in US images, but it is hard to specify exactly what it is (Figure 1a). Ultrasound is often used for percutaneous needle insertion therapies such as radio frequency ablation (RFA) or percutaneous ethanol injection therapy (PEIT). In these therapies, a needle is inserted into the tumor, and the tumor is killed by heat or ethanol. Surgeons search for the tumor and introduce the needle by referring to the US images. Sometimes the tumor is not clearly identified in US images and is hard to find in 2-D images. In that case, MR or CT images can be used together with US images to find the tumor and guide the needle insertion (Hong et al. Citation2006; Maeda et al. Citation2009; Wein et al. Citation2008).

Figure 1 Medical images; (a) US for liver phantom, (b) CT for ear, and (c) MRI for abdomen (color figure available online).


2.2. Computed Tomography

Computed tomography is a very popular diagnostic imaging modality. It provides 3-D reconstructed images and generally better resolution than MR images. Since it uses X-rays, hard tissue such as bone is well imaged. Soft tissue such as nerve or cartilage, in contrast, is not clearly identified (Figure 1b, 1c). The image intensity is determined by the X-ray absorption ratio of the various tissues; therefore, we can obtain a fairly constant CT value for each tissue or organ. This means that intensity-based automatic segmentation can be implemented with CT images. The critical disadvantage of CT is radiation exposure; for this reason, CT images cannot be acquired repeatedly from one patient. An additional CT scan to acquire marker-included images for surgical navigation is therefore an agonizing decision between the benefit of image guidance and possible radiation damage to the patient. Radiation is an even more serious problem for surgeons or radiologists who routinely use X-ray fluoroscopy. To avoid this radiation exposure, medical robotic systems can be employed in the X-ray imaging environment (Kwoh et al. Citation1988).
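
As a minimal illustration of the intensity-based segmentation mentioned above (a sketch only; the Hounsfield-unit window and the synthetic volume are assumptions for illustration, not values from this article), bone can be roughly extracted from a CT volume by thresholding its CT values:

    import numpy as np

    def segment_by_threshold(ct_volume, lower_hu=300, upper_hu=3000):
        """Return a binary mask of voxels whose CT value lies in [lower_hu, upper_hu].

        Cortical bone has high CT values, so a window such as 300-3000 HU roughly
        isolates bony structures; the exact window must be tuned per scanner and anatomy.
        """
        return (ct_volume >= lower_hu) & (ct_volume <= upper_hu)

    # Toy example: a synthetic 3-D array standing in for a real CT volume.
    ct = np.random.randint(-1000, 2000, size=(64, 64, 64))
    bone_mask = segment_by_threshold(ct)
    print("bone voxels:", int(bone_mask.sum()))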

2.3. Magnetic Resonance Imaging

Magnetic resonance imaging (MRI) is another well-known image modality for diagnosis. The principle of MRI is based on the behavior of protons. The hydrogen protons in the body align their spin direction in the strong magnetic field. A radio-frequency pulse is then applied to tip the spin direction, and signals are finally obtained from the protons as their spins gradually recover. MRI is also considered very safe for the patient, and it shows better specificity for soft tissue than for hard tissue, because soft tissue has a lot of water or fat, i.e., abundant hydrogen.

Recently, intra-operative MRI has been developed (Figure 2). It is also called interventional MRI or open MRI. Patients are placed in or near the MR gantry during the surgery, so surgeons can perform their surgery in the MRI room. The advantage of intra-operative MRI is that MR images can be obtained even in the middle of the surgery. Using the intra-operative MR images, we can confirm whether the tumor has been completely removed or partly remains. If the tumor is still observed, surgeons continue the surgery to remove it completely. According to Tokyo Women's Medical University Hospital in Japan (Muragaki et al. Citation2011), the 5-year survival rate after glioma resection surgery was reported to be higher in the group that used intra-operative MRI than in the group that underwent conventional surgery without it. To perform a surgery in the MRI room, metallic instruments must be handled with extra caution. Basically, metallic instruments cannot be carried into the MRI room. All required surgical instruments are substituted with instruments made from non-metallic or non-magnetic materials such as ceramic, plastic, or titanium. Nevertheless, if the magnetic field strength is considerably low, approximately under 0.5 Tesla, conventional instruments made from metal can be allowed in the MRI room within a certain area; in general, this area is outside the 5-Gauss line that surrounds the MR magnet. Intra-operative MRI, however, still takes time to obtain the images. Recent MRI scanners support near real-time imaging, though a single 2-D slice still takes several seconds.

Figure 2 Intra-operative MRI and surgical bed installed in the operating room (color figure available online).


Functional MRI (fMRI) is an imaging modality used to analyze the function of the brain. In addition to anatomical knowledge, such functional analysis can be used for treatment or surgery. For brain surgery, careful consideration is needed particularly when the target tumor is located close to the speech or motor areas. The principle of fMRI is based on oxygenated hemoglobin (HbO2), whose amount increases with blood flow. If a specific part of the brain is activated, the local blood flow and the amount of HbO2 increase. The relative decrease of deoxygenated hemoglobin (Hb) in the blood affects the local magnetic field and finally forms the fMRI signal.

Recently, the diffusion of water in the brain has also become imageable. Water diffusion in the brain or spine is restricted by cells and neurons, so it has direction and magnitude. Using this phenomenon, we can obtain diffusion-weighted images (DWI) or diffusion tensor images (DTI) related to brain function. Since DTI provides the shape of nerve fibers, it is used for surgeries in which the nerves should be protected (Figure 3).

Figure 3 DTI image for brain surgery; nerve fibers are displayed (color figure available online).


2.4. Endoscopic Imaging

Endoscopic surgery has rapidly spread into various areas since it is less invasive than conventional open surgery. Pain and scarring are reduced, and an earlier return to daily life is possible. Various endoscopes have been developed, such as the laparoscope for the abdominal area, the arthroscope for the orthopedic area, the neuroendoscope for neurosurgery, and so on. The latest endoscopes have very small diameters of less than 5 mm, and some support 3-D stereo vision. There are three kinds of endoscope structures. The most conventional type uses a relay lens with a CCD camera connected to the scope. Another type uses optical fibers instead of a relay lens. The third type uses a distal CCD at the tip of the endoscope (Yasunaga et al. 2007). The second and third types can be used for flexible endoscopes. The endoscopic image is in color and suitable for observing the organ surface. On the other hand, CT or MR images provide cross-sectional images of the inside of the organ. Augmented reality based surgical navigation displays the endoscopic image and the CT or MR image superimposed at the same time, which is described in detail in the next section.

3. IMAGE GUIDANCE FOR SURGERY

The medical images mentioned above can be used for treatment or surgery as well as for diagnosis. For this purpose, the images are employed as guidance tools during the surgery. This image guidance technique for surgery is also known as “surgical navigation.” It has already been used in neurosurgery, otolaryngology, and orthopedics (Low et al. Citation2010; Hong et al. Citation2009; West et al. Citation2001). Surgical navigation is expected to become more popular in various hospital departments if the remaining issues are overcome (Tomikawa et al. Citation2010).

The concept and principle of surgical navigation are very similar to those of an automotive navigation system using a global positioning system (GPS) device. In surgical navigation, a surgical instrument corresponds to the vehicle, a position tracking system corresponds to the satellite, and the medical image corresponds to the map. Just as the car position is displayed on the map, the surgical instrument position is displayed on CT or MR images, so that doctors can confirm their approach and predict further progress toward the target. Surgical navigation is most helpful in cases in which the target lesions are located inside an organ and are invisible through the endoscope. It is crucial when the surgeon cannot expect normal anatomy, for example after a previous surgery or in the presence of an anatomical anomaly. Image guidance is also useful when the boundary between tumor and normal tissue is vague, so that reference to the CT or MR images is required. Figure 4a shows the concept of image guidance for surgery.

Figure 4 The surgical navigation; (a) configuration of image guidance for surgery, (b) display of surgical navigation for ear surgery (color figure available online).


In surgical navigation, markers are attached to the surgical instruments to track their positions. To use medical images as a guidance map, image-to-patient registration is required; this is the task of matching the coordinates between the image and the patient. Typically, a transformation matrix is calculated using identical points or surfaces in the two datasets. The various techniques and their advantages and disadvantages are described in the following sections. After image-to-patient registration, the measured instrument position, which has coordinates in patient space, is displayed on the images. In order to follow the patient movement, the position and orientation of the patient are stored in the matrix T_CP, which represents the position and orientation of the patient (P) with respect to the camera (C) coordinate base. On the other hand, the surgical device (D) position and orientation are stored in T_CD. If the patient-to-image (I) transformation matrix is represented by T_IP, the relative position of the surgical device against the moved patient in the image, represented by T_ID, is calculated by

T_ID = T_IP (T_CP)^(-1) T_CD.
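
The matrix composition above can be sketched numerically; in the following hedged example the poses are hypothetical values, and 4 x 4 homogeneous matrices are assumed as the pose representation:

    import numpy as np

    def make_pose(R, t):
        """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Hypothetical poses reported by the tracking camera (C):
    T_CP = make_pose(np.eye(3), [10.0, 0.0, 0.0])   # patient (P) in the camera frame
    T_CD = make_pose(np.eye(3), [12.0, 5.0, 3.0])   # surgical device (D) in the camera frame
    # Patient-to-image transform obtained from registration:
    T_IP = make_pose(np.eye(3), [-2.0, 1.0, 0.0])

    # Device pose in image (I) coordinates, following the composition in the text:
    T_ID = T_IP @ np.linalg.inv(T_CP) @ T_CD
    print(np.round(T_ID, 3))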

There are a number of types of surgical navigation systems. The systems can be categorized by the image modalities used, position sensors, visualization methods, software platforms, etc. In the following sections, the advantages and disadvantages of each method are described, and the usefulness and hidden risks in clinical use are also discussed. To perform successful image-guided surgery, the following key technologies must be considered.

3.1. Medical Image Processing

To extract and visualize the region of interest (ROI) from the images, image segmentation is required. Target objects, in general, are tumors, lesions, blood vessels, and nerves. Organs can also be regions to be extracted (Figure 4b). Intensity-based approaches are the most conventional segmentation methods. Anatomical knowledge is particularly useful for medical image processing (Hong et al. Citation2004). Although differences exist between people, the shape and position of organs do not vary extremely. This knowledge can be employed in constructing a mathematical model, which can be adapted to each person if necessary. As an example, for image guidance in orthopedic surgery, image-less registration has been introduced recently (Dorr et al. Citation2005). This technique utilizes a common bone shape based on human anatomy and adapts the model to each patient by referring to fiducial points obtained from the patient's body.

Image segmentation is a time-consuming task, so there is a temptation for medical doctors to assign this task to engineering staff who do not have enough knowledge about the disease, or to image processing software, i.e., fully automatic segmentation without human intervention. However, manual or semi-automatic segmentation performed by medical doctors is strongly recommended for safe and responsible treatment.

3.2. Surgical Tool Tracking

Detecting and tracking a surgical instrument is an essential part of image-guided surgery. The surgical instruments that need to be tracked in real time are determined by the surgeons' requests. In general, a bipolar or monopolar electrocautery device is often chosen. Needles, suction devices, forceps, surgical drills, and endoscopes can also be tracked. Any surgical instrument can be tracked as long as it has a defined tip and markers for the position sensor can be attached to it.
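
In practice, the tracking system reports the pose of the marker body attached to the instrument, and the tip position is obtained by applying a fixed, pre-calibrated offset to that pose. The sketch below illustrates this step with a hypothetical marker pose and tip offset (the values are not from this article):

    import numpy as np

    def tip_position(T_marker, tip_offset):
        """Map a pre-calibrated tip offset (given in the marker frame) into tracker coordinates.

        T_marker: 4x4 pose of the marker body reported by the tracking system.
        tip_offset: 3-vector, tip location expressed in the marker frame (mm).
        """
        return (T_marker @ np.append(tip_offset, 1.0))[:3]

    T_marker = np.eye(4)
    T_marker[:3, 3] = [100.0, 50.0, 200.0]      # hypothetical marker pose (mm)
    offset = np.array([0.0, 0.0, -150.0])       # hypothetical tip offset along the shaft (mm)
    print(tip_position(T_marker, offset))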

The most widely used position tracking systems are optical tracking systems and electromagnetic tracking systems (Hong et al. 2011). Many optical tracking systems use infrared light, which is emitted from the stereo cameras and reflected by the markers attached to the target instruments. Active-type markers using emitting diodes have better performance, but their wired connection is less suitable for surgery. The optical tracking system is the most common in the surgical navigation area, since it shows high accuracy. The problem, however, is that the system does not work if the infrared light path is blocked. Staff and other equipment, such as anesthesia machines and surgical microscopes, block the infrared transmission when located between the sensor cameras and the markers. To avoid this occlusion problem, a specially designed stand for the optical sensor cameras or a mechanical arm fixed to the ceiling is effectively employed. Another limitation of optical tracking systems arises when flexible or bendable instruments are used: the tip position may change during the surgery, and the optical sensor cannot detect the change because the attached marker is rigid. There are also several optical tracking systems that use visible light instead of infrared. They are relatively low cost, but the markers are usually larger than those of infrared systems.

On the other hand, electromagnetic tracking systems consist of a magnetic field generator and sensors that are placed in the generated magnetic field. The sensors require a wired connection to the main system and play the role of the markers in an optical tracking system. The electromagnetic system has limitations such as a small recognition area and susceptibility to interference from magnetic materials around it. However, it operates even in spaces that light cannot reach, and the sensor can be attached to the tip of flexible or bending instruments. The accuracy, in general, is lower than that of the optical system, although it is improving and approaching a similar level. Figure 5 shows commercial optical and electromagnetic tracking systems.

Figure 5 Tool tracking system; (a) optical system vs. (b) electromagnetic system (color figure available online).


3.3. Image Registration

Accurate coordinate matching between the patient and the medical images is the most important part of performing surgical navigation (Eggers et al. Citation2006; Knott et al. Citation2004). For this registration, we need at least three points in the patient and image spaces (Liu et al. Citation2003; Arun and Blostein Citation1987; Besl and McKay Citation1992). Registration methods can be classified according to the procedure used to acquire the feature points. The most conventional approach is to use fiducial markers (Hong and Hashizume Citation2010). Skin-affixed markers are representative. The markers are attached to the patient in appropriate places before taking the MR or CT images. The markers can be identified in the images and are used as fiducial markers during the surgery. Patients need to keep the markers attached until the surgery begins, and images taken without such markers cannot be used for registration. If specific points can be found in the body itself, artificial fiducial markers are not needed. Using anatomical landmarks as fiducial markers for registration has many advantages; however, it is difficult to find such anatomical landmarks, and pointing out their correct positions during the surgery is also difficult and varies considerably among medical doctors.
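
The transformation matrix mentioned above is typically computed by a least-squares fit of the paired fiducial points, as in the cited method of Arun and Blostein (1987). The following is a compact sketch of that fit; the synthetic points and the known rotation are illustrative only:

    import numpy as np

    def paired_point_registration(P, Q):
        """Least-squares rigid registration of paired points (Arun-style SVD solution).

        P: Nx3 fiducial coordinates in patient space.
        Q: Nx3 corresponding coordinates in image space.
        Returns R (3x3) and t (3,) such that R @ p + t ~ q.
        """
        p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
        H = (P - p_mean).T @ (Q - q_mean)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against a reflection solution
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = q_mean - R @ p_mean
        return R, t

    # Synthetic check: recover a known rotation and translation from four points.
    P = np.array([[0., 0., 0.], [100., 0., 0.], [0., 80., 0.], [0., 0., 60.]])
    R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    Q = P @ R_true.T + np.array([5., -3., 12.])
    R, t = paired_point_registration(P, Q)
    print(np.allclose(R, R_true), np.round(t, 3))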

The marker-less registration method employs the unique shape of the body itself, such as a line or surface found on the face. After obtaining the line or surface, the system performs automatic matching with the medical images (Nottmeier et al. Citation2007; Schicho et al. Citation2007). This approach is referred to as the iterative closest point (ICP) method. It provides an automatic registration process without fiducial markers; however, it shows lower accuracy than paired-point registration according to the literature (Luebbers et al. Citation2008). Commercialized systems often use the ICP method initialized with the prior result of the paired-point method to improve the registration accuracy. There is also a template-based method (Eggers et al. Citation2007). A template that includes fiducial markers is fixed on the patient before acquiring the medical images. The template is removed after the scan and fixed at the same position on the patient again when the surgery is performed. Since this method does not require skin markers, it is more convenient for the patient, but placing the template at exactly the same position is difficult. The template can be fixed to the teeth or gingiva of the patient (Figure 6b). In some cases, the fiducial markers are screwed into the skull. Markers fixed on bone are not affected by skin deformation or movement; however, this approach is highly invasive to the patient, so it is not commonly used these days. Figure 6 shows skin-affixed markers, a template, and anatomical landmarks for registration, and Table 1 summarizes the advantages and drawbacks of each registration method.

Figure 6 Fiducial markers for registration; (a) skin markers, (b) template, and (c) anatomical landmarks on the temporal bone (color figure available online).


Table 1. Advantages and drawbacks of registration methods

3.4. Display Methods During Surgery

There are several display methods for image guidance. The display modes can be classified into multi-planar, 3-D graphic, and augmented reality modes (Figure 7). The multi-planar display is the most conventional style. This method provides three orthogonal planes at the position of the surgical instrument; typically, these are the axial, sagittal, and coronal planes. This display method provides views familiar to doctors, but it is not intuitive, and it is difficult to imagine the 3-D space from a 2-D display. The 3-D graphic display method provides a virtual 3-D space and renders the surgical tool position as well as ROIs, such as tumors and arteries, obtained from segmentation. This display is intuitive and provides 3-D information. However, a problem with these two methods is that surgeons need to move their gaze from the patient to the navigation monitor. The augmented reality (AR) display provides the navigation information on real patient images. “Real patient images” here means the endoscopic or surgical microscope images at which surgeons are looking during surgery (Kawamata et al. Citation2002; Low et al. Citation2010). The tumors or vessels inside the body or organs are superimposed on these images (Figure 7c), so that surgeons know where to access. There is also a study that projects the ROI image directly onto the patient's body (Sugimoto et al. Citation2010). The AR display is attractive in that surgeons do not have to move their gaze away from the endoscope monitor or the patient. A weakness of the AR display is that it is difficult to express the distance to the ROIs: we can see that tumors or vessels exist inside the organ, but cannot tell how deep they are located from the surface. In addition, the display frame rate decreases greatly when the ROIs are superimposed. A recent study proposed dual navigation using both a 3-D graphic and an AR display to acquire the depth information (Kim et al. Citation2011).
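
A hedged sketch of the AR overlay idea follows: a segmented ROI point given in CT/MR image coordinates is projected into the endoscopic camera image with a simple pinhole model. The camera pose, intrinsics, and ROI points below are hypothetical, and real systems additionally need lens-distortion correction and endoscope calibration:

    import numpy as np

    def project_roi(points_img, T_cam_from_img, fx, fy, cx, cy):
        """Project 3-D ROI points (CT/MR image coordinates, mm) onto the endoscope image plane.

        T_cam_from_img: 4x4 transform from image coordinates to the camera frame,
        obtained from registration plus endoscope tracking. fx, fy, cx, cy: pinhole intrinsics.
        Returns Nx2 pixel coordinates.
        """
        pts_h = np.hstack([points_img, np.ones((len(points_img), 1))])
        pts_cam = (T_cam_from_img @ pts_h.T).T[:, :3]
        u = fx * pts_cam[:, 0] / pts_cam[:, 2] + cx
        v = fy * pts_cam[:, 1] / pts_cam[:, 2] + cy
        return np.stack([u, v], axis=1)

    T = np.eye(4)
    T[:3, 3] = [0.0, 0.0, 80.0]                       # hypothetical pose: ROI about 80 mm ahead of the lens
    tumour = np.array([[5.0, -3.0, 0.0], [6.0, -2.0, 1.0]])
    print(project_roi(tumour, T, fx=800, fy=800, cx=320, cy=240))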

Figure 7 The display methods for image guided surgery; (a) multi-planar, (b) 3-D graphic, (c) augmented reality mode (color figure available online).


3.5. Accuracy and Safety of Image Guidance

Image guidance in surgery is a very helpful technology, but accuracy and reliability remain issues in clinical use (Claes et al. Citation2000; Copeland et al. Citation2005; Metzger et al. Citation2007; West et al. Citation2001). If the image guidance is inaccurate, surgeons may leave tumor behind or damage normal tissue. In surgical navigation, three different accuracies are defined: fiducial localization error (FLE), fiducial registration error (FRE), and target registration error (TRE) (Fitzpatrick et al. Citation1998). FLE represents the marker localization accuracy. When the skin markers are shifted or deformed, or the markers are not clearly visible in the medical images, FLE is high. FRE is the most commonly used parameter to evaluate the accuracy. Once a transformation matrix is obtained through the registration process, the coordinates of the markers in patient space can be converted to image coordinates, and the converted coordinates are compared to the coordinates of the markers in image space. The root mean square (RMS) error between them is the FRE.

The FRE between the image and the patient space is defined as follows (Liu et al. Citation2003), where the rotation matrix R and the translation vector t are determined so that the FRE is minimal. With x_i the fiducial points in patient space, y_i the corresponding points in image space, and N the number of fiducials,

FRE = sqrt( (1/N) * sum_{i=1..N} w_i^2 || R x_i + t - y_i ||^2 ),

where w_i is a weighting factor, usually set to 1.

In general, FLE strongly affects FRE. However, FRE can have a large value even if FLE is zero, for example when the patient moves or image distortion exists. The performance of the position tracking system also affects FRE. FRE is displayed after the registration and usually varies from approximately 1 to 5 mm. FRE must be accepted very carefully, because it does not represent the accuracy at the target; it represents only the accuracy related to the markers or anatomical landmarks used for registration. TRE is the error at the target and can be considered the real navigation accuracy, but in most cases direct measurement of TRE is very difficult, because targets have various shapes and sizes and precise indication is difficult both on the real patient and in the medical images.
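
A short numerical sketch of the FRE computation defined above (the marker coordinates and the simulated 0.5 mm localization noise are assumptions for illustration):

    import numpy as np

    def fre(R, t, x_patient, y_image, w=None):
        """RMS fiducial registration error of a rigid transform (R, t), following the definition above."""
        w = np.ones(len(x_patient)) if w is None else w
        residuals = (x_patient @ R.T + t) - y_image
        return np.sqrt(np.mean(w**2 * np.sum(residuals**2, axis=1)))

    # Identity registration with small simulated marker localization noise:
    x = np.array([[0., 0., 0.], [60., 0., 0.], [0., 50., 0.], [30., 30., 20.]])
    y = x + np.random.normal(scale=0.5, size=x.shape)
    print(round(fre(np.eye(3), np.zeros(3), x, y), 3), "mm")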

Therefore, careful consideration should be given to evaluating and accepting the results after the registration is done. Even though FRE is very small, a large TRE can exist. This occurs when the markers are located relatively close to each other and the target is far from the marker group. For example, in surgical navigation for ear surgery, markers are placed around the auricle (Figure 6a), while the target is often located in the middle or inner ear area. In that case, TRE can be large even while FRE is small.
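
This behavior is captured by the TRE prediction formula derived in the cited work of Fitzpatrick et al. (1998), quoted here for reference:

TRE(r)^2 ≈ (FLE^2 / N) * (1 + (1/3) * sum_{k=1..3} d_k^2 / f_k^2),

where N is the number of fiducials, d_k is the distance of the target r from the k-th principal axis of the fiducial configuration, and f_k is the RMS distance of the fiducials from that axis. The formula makes explicit that the expected TRE grows as the target moves away from the fiducial set, independently of FRE.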

4. MEDICAL ROBOTICS

In general, the term "medical robotics" includes devices used not only for surgery but also for tissue analysis, welfare, rehabilitation, and nursing. Among the various medical fields, surgery is of great importance because it requires direct interaction with the human body. In this article, we will focus on the role of surgical robotics, reviewing several important requirements and examples. Nowadays, robots are widely used in several fields of surgery, including neurologic, orthopedic, abdominal, urological, ear-nose-throat (ENT), pediatric, and fetal surgery. Fundamentally, the most important purpose of a surgical robot is to achieve minimally invasive surgical treatments. In the next section, we will describe basic requirements and classifications used in medical robotics, and review examples of some current fields of research.

4.1. Basic Requirements of Medical Robots

Superficially, it seems that an industrial robot could be redesigned for use as a medical robot, but many design changes are necessary to satisfy the requirements of a medical robot in an operating room. Neither the hardware nor the software of industrial robots is designed for surgery; therefore, both would have to be redesigned to meet the specific clinical demands of surgery. The differences in requirements of medical robots as compared with industrial robots are outlined below (Dohi Citation1995).

4.1.1. Direct contact with the human body

By definition, medical robots have to be in direct contact with the patient's body. This distinguishes them from their industrial counterparts, where the separation of work areas between humans and robots can easily be accomplished by assigning specific locations to humans and robots.

4.1.2. Safety

Safety in the event of an accident has to be treated very carefully. If a medical robot loses power, the surgeon has to be able to continue the operation manually; in contrast, with an industrial robot, one can simply wait until power is restored.

4.1.3. Different functions are required for each medical procedure

Surgical tasks differ depending on the operation site.

4.1.4. No trials are permitted

The main purpose of a surgical robot is basically to remove human tissue; therefore, it is obvious that test runs are not allowed.

4.1.5. User-friendly interfaces

The end users of medical robots, i.e., medical doctors, do not usually receive in-depth training to operate complicated machines.

4.1.6. Sterilization

The parts that are in direct contact with the patient's body have to be completely sterile and therefore must be sterilized before the surgery is performed. The parts that do not come into direct human contact can be covered with sterile sheets.

4.2. Image-Guided Robots

In the 1980s, stereotactic neurosurgeries began to be performed using robots that relied on quantitative positioning information acquired from X-ray CT images. Needle placement and semi- or fully automatic needle insertion were performed using these robots (Kwoh et al. Citation1988; Masamune et al. Citation1995a; Masamune et al. Citation1995b; Glauser et al. Citation1995; Stoianovici et al. Citation1998). These precise-positioning robots drive surgical tools according to pre-operative surgical simulations, as in the surgical CAD/CAM system (Taylor et al. Citation1999; Bargar et al. Citation1998; Kwon et al. Citation2002). A representative CAD/CAM robot, ROBODOC, manufactured by the Curexo Technology Corporation and used to help perform orthopedic surgeries, became commercially available during the early stages of medical robotics. Using ROBODOC, the surgeon first performs a pre-surgical bone-cutting simulation, and a precise hole is then made in the femur before the artificial hip prosthesis is fixed (http://www.robodoc.com 2011).

MRI and ultrasound images are also used to provide beneficial information in the quantitative coordinate system required by needle-guidance robots. MRI has the capability of taking detailed images, such as T2-weighted, angiographic, and functional images, without exposing the patient to radiation. Nowadays, open-type MRI scanners are gradually being installed in operating rooms to assist during surgeries. To use these images, surgical robots that can function under strong magnetic fields are being developed by many research institutes (Masamune et al. Citation1995a; Stoianovici et al. Citation2007; Chinzei et al. Citation2000; Fischer et al. Citation2008) (Figure 8). Because of the effects of a 0.2–1.5 T magnetic field, a conventional robot made with ferromagnetic materials cannot be used; instead, non-metallic, non-ferromagnetic materials, sensors, and actuators must be used. Thus, ultrasonic and pneumatic motors are often used for robotic actuation. The merit of ultrasound imaging is that it provides real-time images, though the images generally contain a lot of noise. By applying visual feedback technology, the robot can track the targeted region and precisely puncture the designated area with the needle (Hong et al. Citation2004; Harris et al. Citation1997; Ng et al. Citation1993). In the past, the problem was that the images were 2-D while the target moved in a 3-D manner; however, recently developed 3-D ultrasound imaging techniques are becoming more common, and 3-D tracking of targets will be available in the near future.
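
The visual-feedback idea can be sketched as a simple proportional servo loop that drives the needle tip toward the target detected in each ultrasound frame; the gain, the number of frames, and the target position below are assumptions, not details of the cited systems:

    import numpy as np

    def servo_step(tip_xy, target_xy, gain=0.5):
        """One proportional visual-servoing update in ultrasound image coordinates (mm)."""
        error = np.asarray(target_xy) - np.asarray(tip_xy)
        return np.asarray(tip_xy) + gain * error      # commanded tip position for the next frame

    tip = np.array([0.0, 0.0])
    target = np.array([20.0, 10.0])                   # hypothetical tumour position in the US image
    for frame in range(10):                           # each iteration stands for one real-time US frame
        tip = servo_step(tip, target)
    print(np.round(tip, 2))                           # the tip converges toward the target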

Figure 8 The first MRI compatible needle insertion robot (color figure available online).


The abovementioned robotic systems require medical imaging capabilities; thus, segmentation of the targeted region, registration among the images, the patient, and the robot, and a high degree of precision are required in commercial products.

4.3. Robots for Endoscopic and Microscopic Surgeries

In the 1990s, new surgical procedures for performing laparoscopic surgery were developed, and surgeons were able to operate inside the body using a rigid optical endoscope and long forceps through just three or four incisions made in the skin of the patient (Mouret Citation1990). In the early trials conducted to introduce a robot into endoscopic surgery, the endoscopic robot, taking the role of the assistant, would hold the endoscope under the control of the head surgeon (Sackier and Wang Citation1994; Finlay and Ornstein Citation1995; Kobayashi et al. Citation1999; Taylor et al. Citation1995). Using such a robot, the surgeon could perform the surgery alone. At the same time, forceps with multiple degrees of freedom were developed and combined with the robot to assist during endoscopic surgeries (Chang et al. Citation2003; Guthart and Salisbury Citation2000; Mitsuishi et al. Citation2003). The da Vinci system, manufactured by Intuitive Surgical, Inc., is currently the most famous medical robot. It is a master-slave robotic system that contains one 3-D endoscope arm, three robotic forceps arms for performing operations, and a master console for the surgeon. The surgeon operates only the master robot, and the slave robot acts within the patient's body (Figure 9). Using a da Vinci-like master-slave robot, a surgeon can even operate on a patient overseas, i.e., tele-surgery is possible. In tele-surgery, the network speed and its reliability are the most important factors to consider. The Mitsuishi research group is currently developing a master-slave robot for microvascular procedures that utilizes very precise movements under the magnified view of a microscope for vascular and neurosurgery (Mitsuishi et al. Citation2000). Blood vessels with a diameter of less than 1 mm can be anastomosed, with errors caused by hand tremor reduced. The key points of a master-slave system are the user interface, the multiple degrees of freedom, and the time delay of the manipulation. From a robotics point of view, the success of the da Vinci system is attributed to the seven degrees of freedom in the control and use of the forceps and the smoothness of the master robot.
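
The motion mapping of such master-slave systems can be sketched as motion scaling combined with low-pass filtering to suppress hand tremor; the scale factor, filter constant, and master trajectory below are illustrative assumptions rather than parameters of the cited robots:

    import numpy as np

    def slave_command(master_positions, scale=0.2, alpha=0.1):
        """Scale master motion and apply an exponential low-pass filter to suppress tremor."""
        filtered = np.array(master_positions[0], dtype=float)
        commands = []
        for p in master_positions:
            filtered = alpha * np.asarray(p) + (1 - alpha) * filtered   # tremor filtering
            commands.append(scale * filtered)                           # motion scaling
        return np.array(commands)

    # Hypothetical master trajectory: a slow motion with superimposed high-frequency tremor.
    t = np.linspace(0, 1, 200)
    master = np.stack([10 * t + 0.3 * np.sin(2 * np.pi * 40 * t), 5 * t, np.zeros_like(t)], axis=1)
    print(slave_command(master)[-1])                  # final scaled, smoothed slave position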

Figure 9 Master-slave surgical robot: daVinci (daVinci 2011) (color figure available online).


Recently, new surgical procedures known as NOTES (Natural Orifice Translumenal Endoscopic Surgery) and SPS (Single-Port Surgery) have been attracting a lot of attention (Sclabas et al. 2006; Zuo et al. Citation2008; Hawes Citation2006). Using only one access port, a surgeon can observe the surgical view and perform complex procedures using a flexible fiber endoscope and a small robot at the tip of the endoscope (Figures 10 and 11). These surgeries require only a small incision and are considered much less invasive. Conventional tools are inadequate, and master-slave robots are needed to perform these types of surgeries. Another extension of the endoscopic robot is a hand-held robotic long forceps with multiple degrees of freedom; the surgeon holds the robot in his or her hand while performing the endoscopic surgery. Yamashita et al. (Citation2003) developed a 2DOF manipulator that uses rigid links to maintain a sufficient amount of torque (Figure 10).

Figure 10 Forceps robot for endoscopic fetus surgery; small diameter, 2DOF, rigid forceps robot with LASER fiber (color figure available online).


Figure 11 Flexible-rigid changeable endoscope guide for NOTES/SPS surgery (color figure available online).


The type of robot that is located midway between an image-guided robot and a master-slave robot is referred to as a homing robot. This type of robot uses advanced surgical devices and image information. For example, Sakuma et al. (Citation2007) developed a robotic LASER ablation system for precise neurosurgeries that incorporates intra-operative 5-ALA-induced PpIX fluorescence detection. This system combines tumor diagnosis and therapy: it contains an operating microscope for neurosurgery, an excitation light source for the detection of the 5-ALA-induced PpIX fluorescence, and a LASER for tumor evaporation (Sakuma et al. Citation2007; Noguchi et al. Citation2006). A LASER fiber is set for tissue coagulation. Once a surgeon identifies the original contours of a tumor, the system can precisely track and ablate the tumor with the LASER. With techniques like this, semi-automatic medical robots will become widely available in the near future.

4.4. Outlook for Medical Robotics

Further studies are required in the field of medical robotics. One of the most difficult subjects in this field is haptics. When using the master-slave system, the surgeon needs to know the feeling of the organ surface, and a haptic interface would be very useful for accomplishing this. Many researchers are trying to develop a haptic device for use in surgeries, and some are being applied in virtual training systems (Okamura Citation2004; Katsura et al. Citation2005; Basdogan et al. Citation2001). For clinical situations, sensors and display methods will need to be further developed. In addition to information technology, “surgical scenarios” are attracting attention for future developments in surgical robot technology. Some surgical procedures, e.g., cholecystectomy, are relatively easy to analyze and model using computer descriptions. In addition, the automatic recognition of endoscopic images is being realized. Using this information and knowing the surgical state during an operation, assistant robots could help complete a surgeon's tasks in advance for a smoother task flow in the operating room (Miyawaki et al. Citation2005; Kochan 2005; Yoshimitsu et al. Citation2005).

One of the biggest issues in medical robotics is commercialization. Some of the hurdles to commercialization are ingrained in our societies, such as government approval processes, product liability law, and other economic conditions, but the most important issue is balancing the risks and benefits of using robots during surgeries. The robot's role in a surgery should be carefully considered, and sometimes it is very important to reconsider the medical purpose and the role of medical robots. In laparoscopic surgery, for example, the role of the laparoscopic robot should not be to control the laparoscope, but to observe the internal view of the body using a small camera. Kobayashi et al. (Citation2000) developed a novel endoscope that uses two wedge prisms to move the field of view without moving the scope itself. This system is rather simple and meets the surgeons' demands (Figure 12).
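
The view-steering principle of the two wedge prisms can be sketched with a small-angle approximation: each prism deflects the line of sight by a fixed angle in the direction of its rotation, and the two deflections add vectorially. The per-prism deflection angle used below is a hypothetical value, not a specification of the cited device:

    import numpy as np

    def wedge_pair_deflection(theta1, theta2, delta_deg=5.0):
        """Combined line-of-sight deflection of two wedge prisms (small-angle approximation).

        theta1, theta2: rotation angles of the two prisms (radians).
        delta_deg: deflection produced by a single prism (degrees, hypothetical value).
        Returns the deflection vector (degrees) in the image plane.
        """
        v1 = delta_deg * np.array([np.cos(theta1), np.sin(theta1)])
        v2 = delta_deg * np.array([np.cos(theta2), np.sin(theta2)])
        return v1 + v2

    print(wedge_pair_deflection(0.0, 0.0))       # prisms aligned: maximum deflection (10 degrees)
    print(wedge_pair_deflection(0.0, np.pi))     # prisms opposed: deflections cancel (straight ahead)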

Figure 12 The concept of endoscope robot using two wedge prisms: By rotating two wedge prisms, arbitrary view is obtained without moving the endoscope itself (color figure available online).


Furthermore, the combination of robotics with other fields of research, such as regenerative medicine, gene induction, and biochemical therapies, is quite promising and will provide the next generation of less invasive, pinpoint surgeries. On-site diagnostic and therapeutic robots could also be developed in the near future.

5. CONCLUSION

Imaging and robotics technologies provide surgeons with an advanced eye and hand to perform their surgeries in a safer and more accurate manner. Recently, medical images have been utilized in the operating room as well as in the diagnostic stage. If the image-to-patient registration is done with sufficient accuracy, medical images can be used as “a map” for guidance to the target lesion. An optical or electromagnetic position tracking system follows the surgical instrument, and its position is displayed on the medical images. The accuracy and reliability of a surgical navigation system should be sufficiently verified before applying it to the patient. In particular, the FRE value reported by the system requires careful interpretation, especially when the target is far from the markers' location. Real-time imaging and functional imaging will be employed more in the future. In addition, the fundamental issues regarding the use of medical robots and examples of their current use were described. Under the current surgical CAD/CAM systems and guidelines for image-guided surgery, the use of intra-operative imaging devices is required to correct the targeted position because of deformations in the target organ.

Among all master-slave systems, da Vinci is the most successful and representative medical robot; however, more functional end-effectors and tools will need to be included in the next generation of slave robots. Intelligent operating rooms and assistant robots are also big areas of research in the field of medical robotics. These developments require image/sensor-based recognition systems, statistical analysis software, and related information technologies. Combining these technologies will help to improve the precision and reliability of surgeries in the future. However, the most important factors to consider are the clinical demand, the strategy for the use of medical robots in operating procedures, and how they aid patients in the hospital. Medical doctors and researchers should always think from the patient's point of view.

ACKNOWLEDGEMENTS

This work was supported in part by the DGIST R&D Program of the Ministry of Education, Science and Technology of Korea (11-BD-0402).

REFERENCES

  • Arun , K. S. and S. D. Blostein . 1987 . Least-square fitting of two 3-D point sets . IEEE Transactions on Pattern Analysis and Machine Intelligence 9 ( 5 ): 698 – 700 .
  • Bargar , William L. , André Bauer , Martin Börner , and Anthony M. DiGioia . 1998 . Primary and revision total hip replacement using the Robodoc(R) System . Clinical Orthopaedics & Related Research 354 : 82 – 91 .
  • Basdogan , C. , C. H. Ho , and M. A. Srinivasan . 2001 . Virtual environments for medical training: graphical and haptic simulation of laparoscopic common bile duct exploration . IEEE/ASME Transactions on Mechatronics 6 ( 3 ): 269 – 285 .
  • Besl , P. and N. McKay . 1992 . A method for registration of 3-D shapes . IEEE Transactions on Pattern Analysis and Machine Intelligence 14 ( 2 ): 239 – 256 .
  • Caversaccio , M. , D. Zulliger , R. Bachler , L. P. Nolte , and R. Hausler . 2000 . Practical aspects for optimal registration (matching) on the lateral skull base with an optical frameless computer-aided pointer system . American Journal of Otology 21 ( 6 ): 863 – 870 .
  • Caversaccio , M. and W. Freysinger . 2003 . Computer assistance for intraoperative navigation in ENT surgery . Minimally Invasive Therapy & Allied Technologies 12 ( 1 ): 36 – 51 .
  • Chang , L. , R. M. Satava , C. A. Pellegrini , and M. N. Sinanan . 2003 . Robotic surgery: Identifying the learning curve through objective measurement of skill . Surgical Endoscopy 17 ( 11 ): 1744 – 1748 .
  • Chinzei , K. , N. Hata , F. A. Jolesz , and R. Kikinis . 2000 . Surgical assist robot for the active navigation in the intraoperative MRI: Hardware design issues. Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems 2000 (IROS 2000) 1: 727–732.
  • Claes , J. , E. Koekelkoren , F. L.Wuyts, G. M. E. Claes , L. Van Den Hauwe , and P. H. Van de Heyning . 2000 . Accuracy of computer navigation in ear, nose, throat surgery . Otolaryngology-Head and Neck Surgery 126 ( 12 ): 1462 – 1466 .
  • Copeland , B. J. , B. A. Senior , C. A. Buchman , and H. C. Pillsbury . 2005 . The accuracy of computer-aided surgery in neurotologic approaches to the temporal bone: A cadaver study . Otolaryngology-Head and Neck Surgery 132 ( 3 ): 421 – 428 .
  • daVinci. Accessed October 13, 2011. http://drhikawa.luna.bindsite.jp/diary/pg131.html .
  • Dohi , T. 1995 . Computer aided surgery and micro machine. Proceedings of the 6-th International Symposium on Micro Machine and Human Science 21–24.
  • Dorr , L. D. , Y. Hishiki , W. Z. Wan , D. Newton , and A. Yun . 2005 . Development of imageless computer navigation for acetabular component position in total hip replacement . IOWA Orthopedic Journal 25 : 1 – 9 .
  • Eggers , G. , J. Muhling , and R. Marmulla . 2006 . Image-to-patient registration techniques in head surgery . International Journal of Oral and Maxillofacial Surgery 35 ( 12 ): 1081 – 1095 .
  • Eggers , G. and J. Muhling . 2007 . Template-based registration for image-guided skull base surgery . Otolaryngol Head Neck Surgery 136 ( 6 ): 907 – 913 .
  • Finlay , P. A. and M. H. Ornstein . 1995 . Controlling the movement of a surgical laparoscope . IEEE Engineering in Medicine and Biology 14 ( 3 ): 289 – 299 .
  • Fischer , G. S. , I. Iordachita , C. Csoma , J. Tokuda , S. P. DiMaio , C. M. Tempany , N. Hata , and G. Fichtinger . 2008 . MRI-Compatible pneumatic robot for transperineal prostate needle placement . IEEE/ASME Transactions on Mechatronics 13 ( 3 ): 295 – 305 .
  • Fitzpatrick , J. M. , J. B. West , and C. R. Maurer . 1998 . Predicting error in rigidbody point-based registration . IEEE Transactions on Medical Imaging 17 ( 5 ): 694 – 702 .
  • Guthart , G. S. and J. K. Salisbury . 2000. The intuitive telesurgery system: Overview and application. Proc. of the 2000 IEEE International Conference on Robotics & Automation 1: 618–621.
  • Glauser , D. , H. Fankhauser , M. Epitaux , J. L. Hefti , and A. Jaccottet . 1995 . Neurosurgical robot minerva: First results and current developments . Computer Aided Surgery 1 ( 5 ): 266 – 272 .
  • Gumprecht , H. K. , D. C. Widenka , and C. B. Lumenta . 1999 . BrainLab vectorvision neuronavigation system: Technology and clinical experiences in 131 cases . Neurosurgery 44 ( 1 ): 97 – 104 .
  • Gunkel , A. R. , M. Vogele , A. Martin , R. J. Bale , W. F. Thumfart , and W. Freysinger . 1999 . Computer-aided surgery in the petrous bone . Laryngoscope 109 ( 11 ): 1793 – 1799 .
  • Harris , S. J. , F. Arambula-Cosio , Q. Mei , R. D. Hibberd , B. L. Davies , J. E. A. Wickham , M. S. Nathan , and B. Kundu . 1997 . The Probot—An active robot for prostate resection . Proceedings of the Institution of Mechanical Engineers. Part H: Journal of Engineering in Medicine 211 ( 4 ): 317 – 325 .
  • Hawes , R. H. 2006 . ASGE/SAGES working group on natural orifice translumenal endoscopic surgery. white paper . Gastrointestinal Endoscopy 63 : 119 – 203 .
  • Hong , J. , Y. Muragaki , R. Nakamura , M. Hashizume , and H. Iseki . 2007 . A neurosurgical navigation system based on intraoperative tumour remnant estimation . Journal of Robotic Surgery 1 ( 1 ): 91 – 97 .
  • Hong , J. , and M. Hashizume . 2010 . An effective point-based registration tool for surgical navigation . Surgical Endoscopy 24 ( 4 ): 944 – 948 .
  • Hong , J. , T. Dohi , M. Hashizume , K. Konishi , and N. Hata . 2004 . An ultrasound-driven needle insertion robot for percutaneous cholecystostomy . Phys Med. Biol. 49 ( 3 ): 441 – 455 .
  • Hong , J. , N. Matsumoto , R. Ouchida , S. Komune , and M. Hashizume . 2009 . Medical navigation system for otologic surgery based on hybrid registration and virtual intraoperative computed tomography . IEEE Transactions on Biomedical Engineering 56 ( 2 ): 426 – 432 .
  • Hong , J. , H. Nakashima , K. Konishi , S. Ieiri , K. Tanoue , and M. Hashizume . 2006 . Interventional navigation for abdominal surgery by simultaneous use of MRI and ultrasound . Medical and Biological Engineering and Computing 44 ( 12 ): 1127 – 1134 .
  • Katsura , S. , W. Iida , and K. Ohnishi . 2005 . Medical mechatronics—An application to haptic forceps . Annual Reviews in Control 29 ( 2 ): 237 – 245 .
  • Kawamata , T. H. , T. Iseki , Shibasaki , and T. Hori . 2002 . Endoscopic augmented reality navigation system for endonasal transsphenoidal surgery to treat pituitary tumors: Technical note . Neurosurgery 50 ( 6 ): 1393 – 1397 .
  • Kim , S. , J. Hong , S. Joung , A. Yamada , N. Matsumoto , S. I. Kim , Y. Kim , and M. Hashizume . 2011 . Dual surgical navigation using augmented and virtual environment techniques . International Journal of Optomechatronics 15 : 155 – 169 .
  • Knott , P. D. , C. R. Maurer , R. Gallivan , H. J. Roh , and M. J. Citardi . 2004 . The impact of fiducial distribution on headset-based registration in image guided sinus surgery . Otolaryngology-Head and Neck Surgery 131 ( 5 ): 666 – 672 .
  • Kobayashi , E. , K. Masamune , I. Sakuma , and T. Dohi . 2000 . A wide-angle view endoscope system using wedge prisms . Lecture Notes in Computer Science 1935 : 661 – 668 .
  • Kobayashi , E. , K. Masamune , I. Sakuma , T. Dohi , and D. Hashimoto . 1999 . A new safe laparoscopic manipulator system with a five-bar linkage mechanism and an optical zoom . Computer Aided Surgey 4 ( 4 ): 182 – 192 .
  • Kwoh , Y. S. , J. Hou , E. A. Jonckheere , and S. Hayati . 1988 . A robot with improved absolute positioning accuracy for CT guided stereotactic brain surgery . IEEE Transactions on Biomedical Engineering 35 ( 2 ): 153 – 160 .
  • Kwon , D. S. , J. J. Lee , Y. S. Yoon , S. Y. Ko , J. Kim , J. H. Chung , C. H. Won , and J. H. Kim . 2002. The mechanism and the registration method of a surgical robot for hip arthroplasty. Proceedings of IEEE International Conferene on Robotics and Automation 1889–2949.
  • Labadie , R. F. , R. J. Shah , S. S. Harris , E. Cetinkaya , D. S. Haynes , M. R. Fenlon , A. S. Juszczyk , R. L. Galloway , and J. M. Fitzpatrick . 2005 . In vitro assessment of image-guided otologic surgery: Submillimeter accuracy within the region of the temporal bone . Otolaryngology-Head and Neck Surgery 132 ( 3 ): 435 – 442 .
  • Labadie , R. F. , R. J. Shah , S. S. Harris , E. Cetinkaya , D. S. Haynes , M. R. Fenlon , A. S. Juscyzk , R. L. Galloway , and J. M. Fitzpatrick . 2004 . Submillimetric target-registration error using a novel, non-invasive fiducial system for image-guided otologic surgery . Computer Aided Surgery 9 ( 4 ): 145 – 153 .
  • Liao , H. , H. Ishihara , H. H. Tran , K. Masamune , I. Sakuma , and T. Dohi . 2010 . Precision-guided surgical navigation system using laser guidance and 3D autostereoscopic image overlay . Computerized Medical Imaging and Graphics 34 ( 1 ): 46 – 54 .
  • Liu , H. , Y. Yu , M. C. Schell , W. G. O'Dell , R. Ruo , and P. Okunieff . 2003 . Optimal marker placement in photogrammetry patient positioning system . Medical Physics 30 ( 103 ): 103 – 110 .
  • Low , D. , C. K. Lee , L. L. Dip , W. H. Ng , B. T. Ang , and I. Ng . 2010 . Augmented reality neurosurgical planning and navigation for surgical excision of parasagittal, falcine and convexity meningiomas . British Journal of Neurosurgery 24 ( 1 ): 69 – 74 .
  • Luebbers , H. T. , P. Messmer , J. A. Obwegeser , R. A. Zwahlen , R. Kikinis , K. W. Graetz , and F. Matthews . 2008 . Comparison of different registration methods for surgical navigation in craniomaxillofacial surgery . Journal of Cranio-Maxillo-Facial Surgery 36 ( 2 ): 109 – 116 .
  • Maeda , T. , J. Hong , K. Konishi , T. Nakatsuji , T. Yasunakga , Y. Yamashita , A. Taketomi , K. Kotoh , M. Enjoji , H. Nakashima , K. Tanoue , Y. Maehara , and M. Hashizume . 2009 . Tumor ablation therapy of liver cancers with an open magnetic resonance imaging-based navigation system . Surgical Endoscopy 23 ( 5 ): 1048 – 1053 .
  • Masamune , K. , E. Kobayashi , Y. Masutani , M. Suzuki , T. Dohi , H. Iseki , and K. Takakura . 1995a . Development of an MRI-compatible needle insertion manipulator for stereotactic neurosurgery . Journal of Image Guided Surgery 1 ( 4 ): 242 – 248 .
  • Masamune , K. , M. Sonderegger , H. Iseki , K. Takakura , M. Suzuki , and T. Dohi . 1995b . Robots for stereotactic neurosurgery . Advanced Robotics 10 ( 4 ): 391 – 401 .
  • Metzger , M. C. , A. Rafii , B. Holhweg-Majert , A. M. Pham , and B. Strong . 2007 . Comparison of 4 registration strategies for computer-aided maxillofacial surgery . Otolaryngology-Head and Neck Surgery 137 ( 1 ): 93 – 99 .
  • Mitsuishi , M. , S. Tomisaki , T. Yoshidome , H. Hashizume , and K. Fujiwara . 2000 . Tele-micro-surgery system with intelligent user interface . IEEE International Conference on Robotics and Automation 2 : 1607 – 1614 .
  • Mitsuishi , M. , J. Arata , K. Tanaka , M. Miyamoto , T. Yoshidome , S. Iwata , M. Hashizume , and S. Warisawa . 2003 . Development of a remote minimally-invasive surgical system with operational environment transmission capability. Proc. IEEE International Conference on Robotics and Automation 2663–2670.
  • Miyawaki , F. , K. Masamune , S. Suzuki , K. Yoshimitsu , and J. Vain . 2005 . Scrub nurse robot system-intraoperative motion analysis of a scrub nurse and timed-automata-based model for surgery . IEEE Transactions on Industrial Electronics 52 ( 5 ): 1227 – 1235 .
  • Morioka , T. , S. Nishio , K. Ikezaki , Y. Natori , T. Inamura , H. Muratani , M. Muraishi , K. Hisada , F. Mihara , T. Matsushima , and M. Fukui . 1999 . Clinical experience of image-guided neurosurgery with a frameless navigation system (StealthStation) . No shinkei geka, Neurological Surgery 27 ( 1 ): 33 – 40 .
  • Mouret , P. 1990 . Surgery. Evolution or revolution? Chirurgie 116 ( 10 ): 829 – 32 .
  • Muragaki , Y. , H. Iseki , T. Maruyama , M. Tanaka , C. Shinohara , T. Suzuki , K. Yoshimitsu , S. Ikuta , M. Hayashi , M. Chernov , T. Hori , Y. Okada , and K. Takakura . 2011. Information-guided surgical management of gliomas using low-field-strength intraoperative MRI. Acta Neurochirurgica Supplementum 109: 67–72.
  • Ng , W. S. , B. L. Davis , R. D. Hibberd , and A. G. Timoney . 1993 . Robotic surgery—A first-hand experience in transurethal resection of the prostate . IEEE Engineering in Medicine and Biology Magazine 12 ( 1 ): 120 – 125 .
  • Noguchi , M. , E. Aoki , D. Yoshida , E. Kobayashi , S. Omori , Y. Muragaki , H. Iseki , K. Nakamura , and I. Sakuma . 2006 . A novel robotic laser ablation system for precision neurosurgery with intraoperative 5-ALA-Induced PpIX fluorescence detection . Medical Image Computing and Computer-Assisted Intervention 9 ( Pt1 ): 543 – 550 .
  • Nottmeier , E. W. and T. L. Crosby . 2007 . Timing of paired points and surface matching registration in three-dimensional (3-D) image-guided spinal surgery . Journal of Spinal Disorders & Techniques 20 ( 4 ): 268 – 270 .
  • Okamura , A. M. 2004 . Methods for haptic feedback in teleoperated robot-assisted surgery . International Journal of Industrial Robot 31 ( 6 ): 499 – 508 .
  • Sackier , J. M. and Y. Wang . 1994 . Robotically assisted laparoscopic surgery: From concept to development . Surgical Endoscopy 8 ( 1 ): 63 – 66 .
  • Sakuma , I. , E. Noguchi , H. Aoki , E. Liao , S. Kobayashi , Y. Omori , K. Muragaki , H. Nakamura , and H. Iseki . 2007 . Precise micro-laser ablation system with intraoperative fluorescence image guidance. 19th International Conference of Society for Medical Innovation and Technology (SMIT2007) 275–276.
  • Schicho , K. , M. Figl , R. Seemann , M. Donat , M. L. Pretterklieber , W. Birkfellner , A. Reichwein , F. Wanschitz, F. Kainberger , H. Bergmann , A. Wagner , and R. Ewers . 2007 . Comparison of laser surface scanning and fiducial marker-based registration in frameless stereotaxy . Journal of Neurosurgery 106 ( 4 ): 704 – 709 .
  • Seemann , R. and A. Wagner . 2005 . Basic research and 12 years of clinical experience in computer-assisted navigation technology: A review . International Journal of Oral and Maxillofacial Surgery 34 ( 1 ): 01 – 08 .
  • Stoianovici , D. , L. Whitcomb , J. Anderson , R. Taylor , and L. Kavoussi . 1998 . A modular surgical robotic system for image-guided percutaneous procedures . Lecture Notes in Computer Science 1496 : 404 – 410 .
  • Stoianovici , D. , D. Song , D. Petrisor , D. Ursu , D. Mazilu , M. Mutener , M. Schar , and A. Patriciu . 2007 . MRI stealth robot for prostate interventions . Minimally Invasive Therapy & Allied Technologies 16 ( 4 ): 241 – 248 .
  • Strauss , G. , K. Koulechov , M. Hofer , E. Dittrich , R. Grunert , H. Moeckel , E. Muller , W. Korb , C. Trantakis , T. Schulz , J. Meixensberger , A. Dietz , and T. Lueth . 2007 . The navigation-controlled drill in temporal bone surgery: A feasibility study . Laryngoscope 117 ( 3 ): 434 – 441 .
  • Su , L. M. , B. P. Vagvolgyi , R. Agarwal , C. E. Reiley , R. H. Taylor , and G. D. Hager . 2009 . Augmented reality during robot-assisted laparoscopic partial nephrectomy: Toward real-time 3D-CT to stereoscopic video registration . Urology 73 ( 4 ): 896 – 900 .
  • Sugimoto , M. , H. Yasuda , K. Koda , M. Suzuki , M. Yamazaki , T. Tezuka , C. Kosugi , R.Higuchi, Y. Watayo , Y. Yagawa , S. Uemura , H. Tsuchiya , and T. Azuma . 2010 . Image overlay navigation by markerless surface registration in gastrointestinal, hepatobiliary and pancreatic surgery . Journal of Hepato-Biliary-Pancreatic Sciences 17 ( 5 ): 629 – 636 .
  • Taylor , R. , J. Funda , B. Eldridge , S. Gomory , K. Gruben , D. LaRose , M. Talamini , L. Kavoussi , and J. Anderson . 1995 . A telerobotic assistant for laparoscopic surgery . IEEE Engineering in Medicine and Biology Magazine 14 ( 3 ): 279 – 288 .
  • Taylor , R. H. , L. Joskowicz , B. Williamson , A. Gurziec , A. Kalvin , P. Kazanzides , R. Van Vorhis , J. Yao , R. Kumar , A. Bzostek , A. Sahay , M. Brrner , and A. Lahmer . 1999. Computer-integrated revision total hip replacement surgery: concept and preliminary results. Medical Image Analysis 3 (3): 301–319.
  • Tomikawa , M. , J. Hong , S. Shiotani , E. Tokunaga , K. Konishi , S. Ieiri , K. Tanoue , T. Akahoshi , Y. Maehara , and M. Hashizume . 2010 . Real-time 3-dimensional virtual reality navigation system with open MRI for breast-conserving surgery . Journal of the American College of Surgeons 210 ( 6 ): 927 – 933 .
  • Wein , W. , S. Brunke , A. Khamene , M. R. Callstrom , and N. Navab . 2008 . Automatic CT ultrasound registration for diagnostic imaging and image-guided intervention . Medical Image Analysis 12 ( 5 ): 557 – 585 .
  • West , J. B. , J. M. Fitzpatrick , S. A. Toms , C. R. Maurer , and R. J. Maciunas . 2001 . Fiducial point placement and the accuracy of point-based rigid body registration . Neurosurgery 48 ( 4 ): 810 – 816 .
  • Yamashita , H. , D. Kim , N. Hata , T. Dohi . 2003 . Multi-slider linkage mechanism for endoscopic forceps manipulator . Proc IEEE/RSJ International Conference on Intelligent Robots and Systems 3 : 2577 – 2582 .
  • Yoshimitsu , K. , T. Tanaka , K. Ohnuma , F. Miyawaki , D. Hashimoto , and K. Masamune . 2005 . Prototype development of scrub nurse robot for laparoscopic surgery . International Congress Series 1281 : 845 – 850 .
  • Zuo , S. , N. Yamanaka , I. Sato , K. Masamune , H. Liao , K. Matsumiya , and T. Dohi . 2008 . MRI-Compatible rigid and flexible outer sheath device with pneumatic locking mechanism for minimally invasive surgery . Lecture Notes in Computer Science 5128 : 210 – 219 .
