
Computer guidance system for single-incision bimanual robotic surgery

Pages 161-171 | Received 11 Nov 2011, Accepted 28 Mar 2012, Published online: 11 Jun 2012

Abstract

The evolution of surgical robotics follows the progress of Minimally Invasive Surgery (MIS), which is moving towards Single-Incision Laparoscopic Surgery (SILS) procedures. The complexity of these techniques has favored the introduction of robotic surgical systems. New bimanual robots, completely inserted into the patient's body, have been proposed in order to enhance the surgical gesture in SILS procedures. However, the limited laparoscopic view and the focus on the end-effectors, together with the use of complex robotic devices inside the patient's abdomen, may lead to unexpected collisions, e.g., between the surgical robot and surrounding anatomical organs not involved in the intervention.

This paper describes a computer guidance system, based on patient-specific data, designed to provide intraoperative navigation and assistance in SILS robotic interventions. The navigator has been tested in simulations of some of the surgical tasks involved in a cholecystectomy, using a synthetic anthropomorphic mannequin. The results demonstrate the usability and efficacy of the navigation system, underlining the importance of avoiding unwanted collisions between the robot arms and critical organs. The proposed computer guidance software is able to integrate any bimanual surgical robot design.

Introduction

Minimally Invasive Surgery (MIS) has been enormously beneficial to the overall quality of surgical outcomes, leading to extensive efforts to improve current surgical techniques. The last decade has seen substantial changes in MIS, from the use of smaller access ports and instruments to the advent of robotic surgery and the maturation of scar-less surgical procedures such as Natural Orifice Transluminal Endoscopic Surgery (NOTES) Citation[1]. The growing prominence of Single-Incision Laparoscopic Surgery (SILS) follows this trend in MIS by approaching the goal of fewer incisions, less morbidity, and improved cosmetics Citation[2], Citation[3].

Although SILS techniques allow the use of manual laparoscopic instruments, thus favoring their wide-scale implementation, they also involve an intrinsic limitation for the surgeon. The single access port imposes an unnatural arrangement of the instruments, resulting in difficult maneuverability due to the relative pivoting of the instrument tips inside the patient's abdominal cavity Citation[4]. The robotic approach to SILS using the da Vinci® Surgical System (Intuitive Surgical, Inc., Sunnyvale, CA) mitigates this limitation thanks to its wristed instruments (the da Vinci® EndoWrist® tools). Nevertheless, there are many unresolved issues related to the proper positioning of the surgical robot arms (e.g., how to avoid external collisions when working coaxially) and to improving the surgeon's dexterity in single-access robotic surgery Citation[5], Citation[6].

Recently, Intuitive Surgical has marketed a dedicated set of instruments for the da Vinci® robot: the VeSPA surgical instruments specifically designed for SILS procedures Citation[7]. Unfortunately, even if these tools offer a solution to the internal coaxiality issues, they do not help in avoiding external collisions. Therefore, correct positioning of the surgical robot arms is still of paramount importance.

Additionally, the latest trend in research is focused on the development of innovative robotic platforms, e.g., bimanual robots with anthropomorphic arms that bring all the degrees of freedom (DOF) inside the abdomen of the patient, as unveiled by Intuitive Surgical at the 2010 IEEE International Conference on Robotics and Automation in Anchorage, Alaska, and also by other research groups Citation[8-11]. These surgical robotic systems ease the surgical gesture and increase the workspace reachable by the end-effectors, but they also introduce additional challenges. One of the main issues is the risk of unwanted collisions between the surgical robot arms and anatomical organs not involved in the intervention. During a SILS robotic procedure, the surgeon must be aware of the position of each part of the robot arms, but this can be difficult, given that the surgeon focuses on the robot end-effectors. The problem is further exacerbated by the typical laparoscopic view: being limited and centered on the surgical instrument tips, it facilitates not only the loss of orientation, but also the loss of perception of the spatial relationships (i.e., distances) to the abdominal anatomy of the patient.

To overcome these limitations, intraoperative surgical navigators have been proposed. These systems enable additional virtual viewing modalities which exploit the fusion of the patient-specific 3D anatomy, reconstructed from preoperative medical datasets, with the virtual surgical instrumentation. Image guidance therefore improves the orientation capability of the surgeon by allowing inspection of the virtual surgical field from various viewpoints Citation[12]. Moreover, “blind” guidance and closed-loop control during a real surgery would require the registration of the patient-specific anatomy with high accuracy.

Commercial surgical navigators are currently limited to orthopedic surgery, neurosurgery, ear, nose and throat (ENT) surgery, and a few other surgical applications. However, many research groups are addressing the challenge of building validated surgical navigators for other anatomical regions Citation[12-14]. In particular, Herrell et al. Citation[15] demonstrated the benefits to be gained by using an intraoperative surgical navigator in robotic surgery, augmenting the laparoscopic images with updated preoperative images. Intraoperative image guidance for the da Vinci® Surgical System has the potential to improve performance Citation[15], as demonstrated by Kenngott et al. Citation[16] in testing a navigation system which provides real-time information on the position and orientation of the active surgical instrument in relation to the target lesion. Such robotic surgical navigators, specifically designed for innovative robotic platforms, are becoming increasingly common, allowing precise control of the surgical instruments and improving the accuracy and efficiency of many surgical procedures Citation[17].

This paper presents the Computer Guidance Module for Intraoperative Navigation and Assistance which we have designed and developed in the context of the ARAKNES European Project Citation[18], which aims to realize a robotic surgical system for endoluminal and SILS interventions. The proposed surgical robotic platform consists of a SPRINT bimanual robot Citation[11] tele-operated by the surgeon through a customized 7-DOF console with 3D visualization and haptic capabilities. The complete robotic platform also integrates software modules for preoperative planning and simulation Citation[19], Citation[20] and intraoperative diagnosis.

Materials and methods

This section describes in detail the approach chosen to design and implement the Computer Guidance Module for the ARAKNES surgical robotic platform.

The ARAKNES system is based on the SPRINT surgical robot Citation[11], a bimanual, tele-operated robotic device designed for SILS procedures which was developed by the Biorobotics Institute of the Scuola Superiore Sant'Anna in Pisa. The complete surgical robotic platform includes two 6-DOF arms provided with interchangeable end-effectors and a stereoscopic camera; a single access port equipped with a panoramic camera; and a dedicated console for the robot control Citation[11] (Figure 1).

Figure 1. Overview of the Computer Guidance Module during the preliminary test session. Top left: the ARAKNES SPRINT robot approaching the synthetic anatomy. Top center: The two haptic interfaces used to tele-operate the robot. Top right: The ARAKNES visualization system, including the stereoscopic monitor displaying the live video stream from the robot stereoscopic camera, and the Computer Guidance Module display showing the virtual surgical scenario.


The surgeon performs the intervention while visualizing the surgical field on a 3D monitor displaying the images transmitted by the stereoscopic camera of the surgical robot. The two PHANTOM Omni® haptic interfaces (SensAble Technologies, Inc., Wilmington, MA) equipped with custom handles and the foot pedals allow the surgeon to control both robotic arms at the same time.

The Master Workstation receives the data from the user interface and computes the new configuration of the robot arms through inverse kinematics. These data (i.e., the robot joint angles) and the end-effector state (i.e., the gripper angle) are then sent to the robot low-level control boards, which command the robot's movements toward the new configuration.
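The wire format of these command messages is not specified in the paper; as a purely illustrative sketch of the dispatch step (assuming the 14-value layout described in the technical section below, little-endian doubles, and a hypothetical broadcast endpoint), using the Qt networking classes employed elsewhere in the platform:

```cpp
#include <QHostAddress>
#include <QUdpSocket>
#include <array>

// Pack the solved joint angles and gripper states for both arms into one
// datagram. Layout, value type, address, and port are all assumptions.
void sendRobotState(QUdpSocket& socket,
                    const std::array<double, 6>& leftJoints,  double leftGripper,
                    const std::array<double, 6>& rightJoints, double rightGripper)
{
    std::array<double, 14> v{};
    for (int i = 0; i < 6; ++i) {
        v[i]     = leftJoints[i];
        v[7 + i] = rightJoints[i];
    }
    v[6]  = leftGripper;
    v[13] = rightGripper;
    socket.writeDatagram(reinterpret_cast<const char*>(v.data()), sizeof(v),
                         QHostAddress::Broadcast, 5000);  // hypothetical endpoint
}
```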

An overview of the system at work is presented in Figure 1.

Computer Guidance Module: overview and functionalities

The complete hardware architecture of the Computer Guidance System (Figure 2) includes three main components: an electromagnetic tracking system, the surgical robotic platform, and the Computer Guidance Software Platform.

Figure 2. System diagram for the Computer Guidance Module illustrating all communications between the hardware devices and software modules of the ARAKNES surgical robotic platform. The Computer Guidance Module relies on two different types of data: the positions of the sensors as localized and transmitted by the electromagnetic tracking system; and the robot control data as computed and dispatched by the robot control system running on the Master Workstation. These data allow the Computer Guidance Module to update the virtual surgical robot in order to maintain coherence with the real robotic device.


To localize the SPRINT robot throughout the entire surgical procedure, we have placed a 6-DOF electromagnetic sensor on the robot base, tracked using an Aurora® Electromagnetic Tracking System (NDI, Waterloo, Ontario). This device communicates with the Computer Guidance Module via an RS232 serial interface, and permits precise real-time measurement of the position and orientation of the electromagnetic sensor coils without requiring a clear line of sight.
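Each sample reported by the tracker is a 6-DOF pose, i.e., a position plus an orientation. As a minimal sketch (assuming the Eigen library; the struct and function names are illustrative, not the actual NDI API), converting one sample into the rigid transform that places the virtual robot base could look like this:

```cpp
#include <Eigen/Geometry>

// One 6-DOF sample from the electromagnetic tracker: a position (mm) and a
// unit quaternion. Field names are illustrative, not the vendor API.
struct TrackedPose {
    Eigen::Vector3d    position;
    Eigen::Quaterniond orientation;
};

// Rigid transform tracker -> robot base. The fixed sensor -> base mounting
// offset, measured once at setup, is composed on the right.
Eigen::Isometry3d trackerToRobotBase(const TrackedPose& sample,
                                     const Eigen::Isometry3d& sensorToBase)
{
    Eigen::Isometry3d trackerToSensor = Eigen::Isometry3d::Identity();
    trackerToSensor.linear() = sample.orientation.normalized().toRotationMatrix();
    trackerToSensor.translation() = sample.position;
    return trackerToSensor * sensorToBase;
}
```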

Once the integration of all hardware and software modules for the ARAKNES project is completed, the surgical robotic platform will be provided with an external robot manipulator (Dionis Manipulator) Citation[21] able to accurately position the SPRINT robot and the single access port. At that time, the electromagnetic sensor on the robot base will be removed, and tracking will henceforth be performed simply by monitoring the external robot manipulator configuration.

The localization of the two robot arms also requires the tracking of the opening angle of each robot arm joint. For this reason, the Computer Guidance Module is connected to the same communication network as described previously (Figure 2). Therefore, the information (opening angles of the robot joints) sent by the Master Workstation to the low-level robot control boards can also be shared with the Computer Guidance Module through standard UDP communications. Thus, we are able to update the configuration of the virtual robot coherently with the real device.

The Computer Guidance Module provides intraoperative navigation functionalities in three different modalities: PASSIVE as a surgical navigator, ASSISTIVE as a guide for the single port placement, and ACTIVE as a tutor preventing unwanted collisions during the intervention.

The main purpose of the module (PASSIVE modality) is to offer the surgeon a complete view of the patient's virtual anatomy (exploiting patient-specific 3D models) and of the virtual bimanual robot in a virtual environment coherent with the real surgical scenario. Preoperative diagnostic exams (e.g., CT or MRI) of the patient undergoing surgery are processed to generate the patient-specific virtual 3D models of the abdominal anatomical organs. This virtual anatomy is then loaded into the virtual surgical scenario together with the virtual bimanual surgical robot, whose position is updated with the data received from the real robot.

The visualization of the virtual scene aims to improve the surgical performance, enabling the surgeon to avoid visual occlusions; to hide selected anatomies in order to see hidden structures; to change the point of view, modifying the position of the virtual camera (Figure 3) to view the surgical scene from a different perspective (including the use of the same viewpoint as the robot's stereo endoscopic camera); to perform quick intraoperative diagnostic exams; and to integrate a visualization tool for the intraoperative diagnostic exams performed.

Figure 3. The Computer Guidance Module during a simulated cholecystectomy, working in PASSIVE modality. The surgeon can switch between different viewing modalities, e.g., using the panoramic view mode (top) or the same robot stereoscopic camera placed between the two robotic arms (bottom).


The ASSISTIVE modality can be used in the initial phase of the surgical procedure, i.e., when the surgeon has to decide on the insertion point for the single access port. The Computer Guidance Module is able to mark the virtual anatomy (i.e., the abdomen) to show the optimal port placement point as identified during the preoperative planning. Thus, this critical and time-consuming phase can be performed easily and quickly, accurately following the planned surgical strategy Citation[22] (Figure 4).

Figure 4. The Computer Guidance Module during a simulated cholecystectomy, working in ASSISTIVE modality. In this modality, the surgeon is able to load the planned position for the single access port, as decided using the Planning and Simulation Module during the preoperative phase. The optimal port placement is marked on the virtual abdomen of the patient, assisting the surgeon in the insertion of the trocar.


The optimal access port placement can be chosen using the preoperative Planning and Simulation Module, as described in references Citation[19] and Citation[20]. This software application allows loading of the patient anatomical 3D models and simulation of the movements of the robot arms interacting with the virtual anatomy. Furthermore, it is possible to change the position and orientation of the access port to determine empirically the optimal configuration of the bimanual surgical robot.
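Conceptually, the assistance amounts to mapping the planned insertion point from the preoperative image frame into the live scene. A minimal sketch, assuming the Eigen library (names illustrative): the registration transform described under "Patient registration" below carries the planned point into the tracker frame, where the marker is drawn on the virtual abdomen.

```cpp
#include <Eigen/Geometry>

// The port point chosen in the Planning and Simulation Module is expressed
// in the preoperative image (CT) frame; the fiducial registration transform
// maps it into the tracker frame used by the intraoperative scene.
Eigen::Vector3d plannedPortInTrackerFrame(const Eigen::Isometry3d& ctToTracker,
                                          const Eigen::Vector3d& portPointCT)
{
    return ctToTracker * portPointCT;  // same mapping applied to the organ models
}
```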

During the surgical intervention, the Computer Guidance Module (working in ACTIVE modality) is able to prevent unwanted robot impacts against delicate organs (e.g., vessels). When the module detects a possible collision, visual and acoustic warnings alert the surgeon that a robot arm is too close to a critical area (Figure 5). During the initial phase of the intervention, the surgeon can arbitrarily select the anatomical organs to be designated as critical, and the software automatically monitors these structures throughout the surgical procedure.

Figure 5. The Computer Guidance Module during a simulated cholecystectomy, working in ACTIVE modality. Throughout the intervention the system continuously monitors the risk of collisions between the robot arms and critical organs surrounding the target of the intervention. If the system detects a robot part approaching a delicate organ, the surgeon is alerted by visual (the red dot in the lower left corner of the screen) and acoustic (a simple beep sequence) warnings. This functionality assists the surgeon during navigation inside the patient's anatomy, overcoming the difficulties associated with the use of the standard endoscopic view. The panoramic view (top) shows the “elbow” of the robot's right arm approaching the pancreas and arterial vessels (critical structures). This dangerous situation is not easily detectable by the surgeon using the standard endoscopic view (bottom).


Computer Guidance Module: technical description

The Computer Guidance Module is a multi-threaded software application. We have developed the complete code in C++ (for Microsoft Windows XP/Vista/7, 32- and 64-bit), relying on the following libraries: the OpenSG® framework for scene graph management and visualization Citation[23]; the Qt™ framework (Nokia™, Helsinki, Finland) for the Graphical User Interface (GUI) and networking functionalities Citation[24]; the CollDet libraries for collision detection algorithms Citation[25]; and the MATLAB® Component Runtime (MathWorks, Natick, MA) for the rigid body registration algorithm Citation[26].

The main window includes the 3D visualization of the virtual scene, and a control panel to manage hardware components and networking settings.

Modeling of the virtual scene

As briefly described above, the virtual anatomy visualized is a patient-specific 3D model of the abdominal organs of the patient undergoing the surgical procedure.

Preoperative medical diagnostic datasets (e.g., CT or MRI) are processed using our segmentation pipeline integrated in ITK-SNAP Citation[27], Citation[28]. The result is the virtual 3D anatomy of the patient, generated through a fast semi-automatic segmentation process. These high-resolution 3D organs are then optimized (cleaned, simplified, and smoothed) using MeshLab Citation[29], thus obtaining a good trade-off between computational load and visual quality without losing specific anatomical details. Next, color information is added to the virtual models using vertex coloring techniques to enhance the realism of the virtual organ surfaces Citation[28]. Finally, during the set-up of the operating room, the patient-specific anatomy can simply be loaded into the Computer Guidance Module by selecting the proper configuration file (a custom ANATOMY file format).
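The ANATOMY format itself is not documented here; as a purely hypothetical illustration, such a configuration might list one optimized mesh per organ together with its display properties:

```
# hypothetical ANATOMY configuration file (illustrative, not the actual schema)
organ  liver        meshes/liver.obj        color = 0.55 0.27 0.07
organ  gallbladder  meshes/gallbladder.obj  color = 0.20 0.60 0.20
organ  pancreas     meshes/pancreas.obj     color = 0.93 0.79 0.69
organ  aorta        meshes/aorta.obj        color = 0.80 0.10 0.10
```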

The 3D visualization also includes the bimanual surgical robot, automatically loaded during the application start-up. To adapt the software to each kind of bimanual robot, a configuration file describing the kinematics and the 3D models of the arms is used (ROBOT custom file format).
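A plausible reading of what such a ROBOT description must provide (the actual schema is the authors' own) is, per link, a fixed offset transform, a joint axis, and a mesh. Under that assumption, a sketch of the forward kinematics that drives the virtual arm from the received joint angles, using the Eigen library:

```cpp
#include <Eigen/Geometry>
#include <string>
#include <vector>

// Hypothetical per-link record, mirroring a kinematics-plus-meshes config.
struct LinkDescription {
    Eigen::Isometry3d offset;    // fixed transform from the previous joint
    Eigen::Vector3d   axis;      // joint rotation axis (unit vector)
    std::string       meshFile;  // 3D model rendered for this link
};

// Forward kinematics: the pose of each link given the joint opening angles
// received from the Master Workstation (one revolute joint per link here).
std::vector<Eigen::Isometry3d>
linkPoses(const Eigen::Isometry3d& basePose,
          const std::vector<LinkDescription>& links,
          const std::vector<double>& jointAngles)
{
    std::vector<Eigen::Isometry3d> poses;
    Eigen::Isometry3d current = basePose;
    for (size_t i = 0; i < links.size(); ++i) {
        current = current * links[i].offset *
                  Eigen::Isometry3d(Eigen::AngleAxisd(jointAngles[i], links[i].axis));
        poses.push_back(current);  // used to place the link's scene-graph node
    }
    return poses;
}
```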

Patient registration

Providing the surgeon with a virtual environment coherent with the real surgical scenario requires the registration of the patient's 3D anatomy and the virtual bimanual surgical robot with the real patient and robot. To accomplish this, the virtual and real worlds are calibrated by means of the electromagnetic tracking system, which also allows real-time monitoring of the position and orientation of the robot. Furthermore, the configuration of each robot arm is updated using the information dispatched by the Master Workstation, i.e., the opening angles of the robot joints and the state of the end-effector. It is thus possible to maintain coherence between the virtual bimanual surgical robot and the real robotic device.

To obtain coherence between real and virtual elements, we replicate in the 3D scene the same relationships of the robot and the real anatomy with respect to the tracker reference frame.
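In code terms, this means every top-level scene node is posed in the tracker (world) frame. A minimal sketch with a stand-in node type (the real implementation uses the OpenSG scene graph):

```cpp
#include <Eigen/Geometry>

// Stand-in for a scene-graph node's transform component.
struct SceneNode { Eigen::Isometry3d worldTransform; };

// The anatomy node takes the fiducial-registration transform (see below);
// the robot node takes the tracked base pose. Both are thus expressed in
// the same tracker frame, which is what keeps real and virtual coherent.
void updateScenePoses(SceneNode& anatomy, SceneNode& robot,
                      const Eigen::Isometry3d& ctToTracker,    // registration
                      const Eigen::Isometry3d& trackerToBase)  // from the tracker
{
    anatomy.worldTransform = ctToTracker;
    robot.worldTransform   = trackerToBase;
}
```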

The virtual 3D anatomy is registered to the real patient using fiducial markers. During the preoperative diagnostic exams, radiopaque markers are placed on the patient's skin, and their exact position is determined in the scanner reference frame. As the patient is positioned on the surgical bed for the intervention, the positions of the same fiducial markers are acquired using the Aurora® digitizer. These data, together with the preoperative positions of the markers, are the inputs for a point-based rigid registration algorithm Citation[12] based on the SVD decomposition Citation[30]. This algorithm has been developed in MATLAB® and relies on the MATLAB® Component Runtime.
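The authors run the registration in MATLAB via the Component Runtime; the underlying computation is the standard closed-form SVD solution. As a self-contained sketch of that method (assuming the Eigen library, not the actual MATLAB code), including the fiducial registration error shown in the GUI:

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <vector>

// Point-based rigid registration: find R, t minimizing
// sum_i || R*src[i] + t - dst[i] ||^2 over the paired fiducial positions
// (src: scanner frame, dst: tracker frame as digitized by the Aurora).
Eigen::Isometry3d registerRigid(const std::vector<Eigen::Vector3d>& src,
                                const std::vector<Eigen::Vector3d>& dst)
{
    const double n = double(src.size());
    Eigen::Vector3d srcMean = Eigen::Vector3d::Zero();
    Eigen::Vector3d dstMean = Eigen::Vector3d::Zero();
    for (size_t i = 0; i < src.size(); ++i) { srcMean += src[i]; dstMean += dst[i]; }
    srcMean /= n; dstMean /= n;

    // Cross-covariance of the demeaned point sets.
    Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
    for (size_t i = 0; i < src.size(); ++i)
        H += (src[i] - srcMean) * (dst[i] - dstMean).transpose();

    Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    if (R.determinant() < 0) {                  // guard against reflections
        Eigen::Matrix3d V = svd.matrixV();
        V.col(2) *= -1.0;
        R = V * svd.matrixU().transpose();
    }

    Eigen::Isometry3d T = Eigen::Isometry3d::Identity();
    T.linear() = R;
    T.translation() = dstMean - R * srcMean;
    return T;
}

// Fiducial registration error: RMS residual after applying the transform.
double fiducialRegistrationError(const Eigen::Isometry3d& T,
                                 const std::vector<Eigen::Vector3d>& src,
                                 const std::vector<Eigen::Vector3d>& dst)
{
    double sum = 0.0;
    for (size_t i = 0; i < src.size(); ++i)
        sum += (T * src[i] - dst[i]).squaredNorm();
    return std::sqrt(sum / double(src.size()));
}
```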

The entire registration process can be controlled through a specific panel of the Computer Guidance Module GUI, which also shows the fiducial registration error.

Software implementation

The Computer Guidance Module software is a multi-threaded application dealing with four processes: the application GUI and graphic rendering, the real-time tracking, the network communication manager, and the collision detection process.

The application GUI consists of a main window, developed using the Qt framework, which includes an OpenGL® widget for the 3D rendering of the virtual scene. Additionally, a side control panel enables the surgeon (or an assistant) to hide some virtual anatomies, thereby removing visual occlusions. The application menu allows control of the network settings and management of the patient registration process. The virtual scenario is managed using a 3D scene graph, built upon the OpenSG® libraries. The scene graph reproduces the 3D virtual scene on the basis of the spatial relationship between all the objects involved; its structure is designed to include each part of the scene in different branches, allowing the optimization of the rendering process. Therefore, the virtual scenario has been subdivided into the patient's anatomy, the virtual bimanual surgical robot, and additional components (e.g., the reference frame axes, tracking system device, etc.). Finally, each of these parts is structured as a sub-graph including nodes for the specific part elements (i.e., the organs composing the patient anatomy).
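As an illustration of this subdivision (using a generic node type rather than the OpenSG API), the graph separates the anatomy, the robot, and the auxiliary elements into independent branches:

```cpp
#include <memory>
#include <string>
#include <vector>

// Generic stand-in for a scene-graph node; enough to show the layout.
struct Node {
    std::string name;
    std::vector<std::shared_ptr<Node>> children;
    std::shared_ptr<Node> add(const std::string& childName) {
        children.push_back(std::make_shared<Node>(Node{childName, {}}));
        return children.back();
    }
};

// Three top-level branches; keeping them separate lets the renderer hide
// or update a whole sub-graph (e.g., one organ, one robot arm) at once.
std::shared_ptr<Node> buildSceneGraph() {
    auto root = std::make_shared<Node>(Node{"root", {}});
    auto anatomy = root->add("anatomy");      // one child per organ
    anatomy->add("liver");
    anatomy->add("gallbladder");
    auto robot = root->add("robot");          // one child per arm, then per link
    robot->add("leftArm");
    robot->add("rightArm");
    root->add("extras");                      // frame axes, tracker model, etc.
    return root;
}
```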

The real-time tracking process receives the data from the Aurora® localization device integrated in the surgical platform and updates the position and orientation of the virtual robot, thus maintaining the correspondence between the virtual and real surgical robot. A dedicated software module provides access to the configuration, initialization, and management functionalities of the Aurora® tracker through the related Application Programming Interface (API).
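The threading pattern is simple: one dedicated thread pulls samples from the localizer and hands them to the rendering side. A minimal sketch with hypothetical callbacks standing in for the vendor API wrapper and the scene update:

```cpp
#include <atomic>
#include <functional>
#include <thread>

// Hypothetical stand-ins for the tracker wrapper and the scene hand-off.
struct Pose { double x, y, z, qw, qx, qy, qz; };
using SampleFn  = std::function<bool(Pose&)>;       // blocks until next sample
using PublishFn = std::function<void(const Pose&)>; // hands off to renderer

// One dedicated thread keeps the virtual robot base aligned with the
// tracked device (~40 Hz in an interference-free field, per the vendor).
class TrackingThread {
public:
    TrackingThread(SampleFn sample, PublishFn publish)
        : sample_(std::move(sample)), publish_(std::move(publish)),
          running_(true), worker_([this] { loop(); }) {}
    ~TrackingThread() { running_ = false; worker_.join(); }
private:
    void loop() {
        Pose pose{};
        while (running_)
            if (sample_(pose))
                publish_(pose);
    }
    SampleFn sample_;
    PublishFn publish_;
    std::atomic<bool> running_;
    std::thread worker_;
};
```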

The network communication manager maintains the coherence between the virtual and real surgical robot arms. This process receives the data from the Master Workstation and updates the state (configuration of the joints) of the virtual robot. A specific thread manages the network data transmissions, using the UDP protocol and guaranteeing low-latency communications, which are mandatory for real-time intraoperative navigation. The surgical robot data are encapsulated into UDP datagrams, each storing 14 values: six for the opening angles of the robot joints and one for the state (i.e., the gripper opening) of the end-effector for each surgical robot arm.
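On the receiving side, decoding such a datagram is straightforward. A sketch using the Qt networking classes already employed by the module (the value type, byte order, and layout within the 14 values are assumptions; the paper specifies only the count and meaning of the fields):

```cpp
#include <QUdpSocket>
#include <array>

// Per-arm state carried by the 14-value datagram: six joint opening angles
// plus the gripper opening (assumed here to be little-endian doubles).
struct ArmState {
    std::array<double, 6> jointAngles;
    double gripperOpening;
};

// Drain pending datagrams and decode the most recent robot state.
// 'socket' is assumed to be already bound to the navigation port.
bool readLatestRobotState(QUdpSocket& socket, ArmState& left, ArmState& right)
{
    bool updated = false;
    while (socket.hasPendingDatagrams()) {
        std::array<double, 14> v{};
        const qint64 n = socket.readDatagram(
            reinterpret_cast<char*>(v.data()), sizeof(v));
        if (n != qint64(sizeof(v)))
            continue;                            // malformed packet: ignore
        for (int i = 0; i < 6; ++i) {
            left.jointAngles[i]  = v[i];         // assumed: values 0-5, left joints
            right.jointAngles[i] = v[7 + i];     // assumed: values 7-12, right joints
        }
        left.gripperOpening  = v[6];             // assumed: value 6, left gripper
        right.gripperOpening = v[13];            // assumed: value 13, right gripper
        updated = true;
    }
    return updated;
}
```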

The collision detection process continuously checks whether any part of one of the robot arms is approaching any of the user-selected critical organs of the virtual anatomy. Whenever a risk of impact is detected, the process alerts the surgeon with visual and acoustic warnings in order to prevent unwanted collisions. This process relies on the CollDet libraries, which are fully compatible with the OpenSG® scene graph. It runs on a separate thread launched automatically and enrolls the selected anatomies, parameterizing their geometry and continuously checking their position with respect to each robot link.
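The decision logic can be sketched independently of the CollDet internals: each robot link is reduced to sample points, each enrolled organ to its vertices, and a warning fires when any pair falls below a safety threshold (real collision detection uses hierarchical acceleration structures, but the outcome feeding the warnings is the same):

```cpp
#include <Eigen/Dense>
#include <string>
#include <vector>

// An enrolled critical organ, with its surface vertices in the world frame.
struct CriticalOrgan {
    std::string name;
    std::vector<Eigen::Vector3d> vertices;
};

// Schematic proximity test: true if any robot-link sample point comes
// within 'thresholdMm' of any critical organ; 'organHit' names the organ
// so the GUI can report which structure triggered the alarm.
bool proximityWarning(const std::vector<Eigen::Vector3d>& linkPoints,
                      const std::vector<CriticalOrgan>& organs,
                      double thresholdMm, std::string& organHit)
{
    for (const auto& organ : organs)
        for (const auto& v : organ.vertices)
            for (const auto& p : linkPoints)
                if ((v - p).norm() < thresholdMm) {
                    organHit = organ.name;   // triggers visual/acoustic warnings
                    return true;
                }
    return false;
}
```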

Preliminary evaluation

Preliminary tests have been performed to evaluate the usability and efficacy of the Computer Guidance Module in terms of intraoperative navigation accuracy and reliability of the communication protocols.

The tests consisted of simulating a surgical procedure using the SPRINT robot and a silicone replica of the abdominal anatomy of a real patient, embedded in a commercial mannequin Citation[31]. Since cholecystectomy can be considered a benchmark for surgical procedures and devices Citation[32], it was chosen as the preliminary test for our Computer Guidance Module. Image data for the mannequin was previously acquired using a CT scanner, and the virtual anatomy of the patient was generated as already described. Radiopaque markers (3-mm colored pins) were used for rigid body registration purposes.

During the initial set-up of the operating room, the optimal port placement was planned and saved using the Planning and Simulation Module. Registration of the patient then commenced. The registration procedure was performed 10 times to evaluate the variability of the registration error. The robot was then loaded into the virtual scenario, using the optimal port placement data. Finally, exploiting the features of the GUI, the proper robot position was fixed, taking into consideration both the patient anatomy and the surgical procedure to be performed.

The simulated surgical procedure tasks consisted of exposing and stretching the gallbladder, pushing up the liver with the left robot arm, and then stretching the gallbladder using the right robot arm to expose Calot's triangle (i.e., the cystohepatic triangle), thereby allowing the exclusion of the cystic artery and the bile duct.

Five expert surgeons evaluated the Computer Guidance Module, testing the main functionalities and subsequently answering a questionnaire.

Results

Technical considerations

The Computer Guidance Module was tested on a Dell Alienware M15x consumer laptop running Windows 7 64-bit (Intel Core i7 CPU, 6 GB RAM, NVIDIA GeForce GTX 260M GPU). The complete virtual patient anatomy was composed of approximately 89,000 vertices and 180,000 triangles, while the graphics rendering ran at 30 to 90 fps.

The application required approximately 60–200 MB of memory, depending on the loaded anatomy and on the number of structures identified as critical. When only the anatomy needed to navigate a cholecystectomy is considered (the gallbladder, liver, pancreas, and arterial and venous tree), with the minimum number of structures enrolled for the collision detection thread (the cystic artery and pancreas), the navigator needs on the order of 60 MB of memory and approximately 3 seconds to load. However, when the complete abdominal anatomy is considered, including all the upper abdominal structures, and the entire arterial and venous tree plus some delicate organs are designated as critical (depending on the intervention to be performed), the process can require up to 256 MB of memory and approximately 10 seconds to load.

The mean latency in data transmission between the Master Workstation and the Computer Guidance Module was evaluated as negligible, given that this connection relies on a dedicated network. The limiting factor for the frame rate of the scene is instead the sampling rate of the magnetic localizer, which is specified as 40 Hz in an interference-free environment; the sampling rate decreases when magnetic interference is present. Because of the ferromagnetic components of the ARAKNES platform, our tests suffered from such interference, and the sampling rate was reduced to 20-30 Hz. Nonetheless, this latency is still compatible with the typical dynamics of surgical procedures, which are generally quite slow. Furthermore, this limitation should be overcome by the next generation of Aurora® localizers, soon to be marketed, which are better shielded against ferromagnetic interference.

Preliminary test results

A registration accuracy between 0.7 and 1.5 mm was observed for each of the ten registration tests.

During the manipulation of the gallbladder using the right robot arm, the Computer Guidance Module activated visual and acoustic warnings to notify the surgeon of the risk of unwanted collisions with the pancreas, the abdominal aorta, and the vena cava (Figure 5).

The results of the evaluation questionnaire are shown in Table I, where the mean values are reported. The five surgeons evaluated all the navigator functionalities positively. However, it is worth analyzing some of their responses in greater depth. With regard to the PASSIVE functionality, while two of them reported that the option to see hidden structures (organ transparency) was interesting but not especially useful, they all considered the possibility of changing the viewpoint of the virtual surgical scene, thereby enabling understanding of the spatial relationship between the robot arms and the nearby anatomy, to be of the utmost importance. All the surgeons agreed on the importance of the ASSISTIVE functionality in guiding the surgeon toward the optimal insertion point, but assistance in access port placement would be even more useful, according to these surgeons, in a standard (not single-access) robotic laparoscopy procedure. Finally, all the surgeons were enthusiastic about the warning function of the ACTIVE modality, noting its importance in avoiding possible collisions between anatomical organs not involved in the intervention and the surgical robot links: in practice, during a surgical intervention, the surgeon focuses attention on the end-effectors, increasing the probability of accidentally injuring critical organs.

Table I.  Evaluation of the different navigation functionalities by a group of 5 expert surgeons. The scores were on a scale from 1 (poor) to 5 (very good).

Discussion

This paper has described a computer guidance system, based on patient-specific data, for intraoperative navigation and assistance in SILS robotic interventions. It is designed for the ARAKNES robotic platform.

The Computer Guidance Module which we have designed and developed has yielded good results in terms of usability and performance (latency, 3D environment, and frame rate), while the registration accuracy is acceptable, at least for a mannequin test. Regarding this last point, image guidance in a real patient, particularly with respect to soft tissues, is very useful for improving the surgeon's orientation because it offers the possibility of inspecting the surgical field from various viewpoints Citation[12], even though it still cannot be used for 'blind' guidance during a surgical intervention or to implement closed-loop control with operative robots, where high registration precision is required.

Furthermore, the registration could be improved with the use of intraoperative imaging sources, such as intraoperative CT and MRI scanners and 3D rotational angiography (3D RA), which are becoming increasingly common. During the set-up for the surgery, once the patient has been secured to the surgical bed, these systems enable the 3D reconstruction of the patient's anatomy directly inside the operating room. Thus, we could obtain a precise correspondence between the volumetric medical dataset and the patient's anatomy, at least in the undeformed state. Several studies have proposed theoretical models for the deformation of various anatomical regions due to the respiration and/or heartbeat of the patient Citation[33-35].

The intraoperative navigator has been tested during a simulated cholecystectomy performed on a synthetic anthropomorphic mannequin. On the one hand, our software enables the surgeon to easily navigate inside the patient anatomy, relying on the virtual view of the 3D surgical scenario; on the other hand, the application offers the surgeon ASSISTIVE functionalities to facilitate the single port placement, and ACTIVE functionalities to preserve the safety of critical organs during the intervention. This functionality is particularly important for robots that bring all the degrees of freedom inside the patient's anatomy. As demonstrated by our preliminary tests, when using bimanual surgical robots there is a real risk of unwanted collisions between the robot arms and the anatomical organs not involved in the intervention.

Finally, the Computer Guidance Module software architecture is flexible, and has been implemented to easily integrate any bimanual robot design.

Acknowledgments

The authors would like to express their sincere thanks to Eng. Giuseppe Tortora for the design of the haptic device custom handle interface, Andrea Moglia, Ph.D., for support in the modeling of the virtual robot, and Eng. Sara Condino for the fabrication of the synthetic anatomy of the dummy patient.

Declaration of interest: This research has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 224565 (ARAKNES Project).

References

  • Haber GP, Crouzet S, Kamoi K, Berger A, Aron M, Goel R, Canes D, Desai M, Gill IS, Kaouk JH. Robotic NOTES (Natural Orifice Translumenal Endoscopic Surgery) in reconstructive urology: Initial laboratory experience. Urology 2008; 71(6):996–1000
  • Froghi F, Sodergren MH, Darzi A, Paraskeva P. Single-Incision Laparoscopic Surgery (SILS) in general surgery: A review of current practice. Surg Laparosc Endosc Percutan Tech 2010; 20(4):191–204
  • Romanelli JR, Earle DB. Single-port laparoscopic surgery: An overview. Surg Endosc 2009; 23(7):1419–1427
  • Bucher PBP, Buchs N, Pugin F, Ostermann S, Morel P. Single port access laparoscopic cholecystectomy (with video): Reply. World J Surg 2011; 35(5):1150–1151
  • Kaouk JH, Goel RK, Haber G-P, Crouzet S, Stein RJ. Robotic single-port transumbilical surgery in humans: Initial report. BJU International 2009; 103(3):366–369
  • Joseph R, Goh A, Cuevas S, Donovan M, Kauffman M, Salas N, Miles B, Bass B, Dunkin B. “Chopstick” surgery: A novel technique improves surgeon performance and eliminates arm collision in robotic single-incision laparoscopic surgery. Surg Endosc 2010; 24(6):1331–1335
  • Haber GP, White MA, Autorino R, Escobar PF, Kroh MD, Chalikonda S, Khanna R, Forest S, Yang B, Altunrende F, et al. Novel robotic da Vinci instruments for laparoendoscopic single-site surgery. Urology 2010; 76(6):1279–1282
  • Kai X, Goldman RE, Jienan D, Allen PK, Fowler DL, Simaan N. System design of an insertable robotic effector platform for single port access (SPA) surgery. Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), Saint Louis, MO, October 2009. pp 5546–5552
  • Shang J, Noonan DP, Payne C, Clark J, Sodergren MH, Darzi A, Yang GZ. An articulated universal joint based flexible access robot for minimally invasive surgery. Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA 2011), Shanghai, China, May 2011. pp 1147–1152
  • Lehman AC, Wood NA, Farritor S, Goede MR, Oleynikov D. Dexterous miniature robot for advanced minimally invasive surgery. Surg Endosc 2011; 25(1):119–123
  • Piccigallo M, Scarfogliero U, Quaglia C, Petroni G, Valdastri P, Menciassi A, Dario P. Design of a novel bimanual robotic system for single-port laparoscopy. IEEE-ASME Trans Mechatronics 2010; 15(6):871–878
  • Megali G, Ferrari V, Freschi C, Morabito B, Turini G, Troia E, Cappelli C, Pietrabissa A, Tonet O, Cuschieri A, et al. EndoCAS navigator platform: A common platform for computer and robotic assistance in minimally invasive surgery. Int J Med Robot 2008; 4(3):242–251
  • Condino S, Freschi C, Ferrari V, Berchiolli R, Mosca F, Ferrari M. Electromagnetic navigation system for endovascular surgery. In: Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K, Ratib OM, editors. Computer Assisted Radiology and Surgery. Proceedings of the 24th International Congress and Exhibition (CARS 2010), Geneva, Switzerland, June 2010. Int J Comput Assist Radiol Surg 2010;5 Suppl 1:S411–S412
  • Wang TM, Zhang DP, Da L. Remote-controlled vascular interventional surgery robot. Int J Med Robot 2010; 6(2):194–201
  • Herrell SD, Kwartowitz DM, Milhoua PM, Galloway RL. Toward image guided robotic surgery: System validation. J Urol 2009; 181(2):783–789; discussion 789–790
  • Kenngott HG, Neuhaus J, Müller-Stich BP, Wolf I, Vetter M, Meinzer HP, Köninger J, Büchler MW, Gutt CN. Development of a navigation system for minimally invasive esophagectomy. Surg Endosc 2008; 22(8):1858–1865
  • Pratt P, Stoyanov D, Visentini-Scarzanella M, Yang G-Z. Dynamic guidance for robotic surgery using image-constrained biomechanical models. In: Jiang T, Navab N, Pluim JPW, Viergever MA, editors. Proceedings of the 13th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2010), Beijing, China, September 2010. Part I. Lecture Notes in Computer Science 6361. Berlin: Springer; 2010. pp 77–85
  • www.araknes.org
  • Moglia A, Turini G, Ferrari V, Ferrari M, Mosca F. Patient specific surgical simulator for the evaluation of the movability of bimanual robotic arms. Stud Health Technol Inform 2011; 163:379–385
  • Turini G, Moglia A, Ferrari V, Ferrari M, Mosca F. Patient specific surgical simulator for the pre-operative planning of bimanual robots for single incision laparoscopic surgery. Comput Aided Surg 2012; 17(3):103–112
  • Beira R, Santos-Carreras L, Sengül A, Samur E, Bleuler H, Clavel R. An external positioning mechanism for robotic surgery. J System Design Dynamics 2011; 5(5):1094–1105
  • Ferzli GS, Fingerhut A. Trocar placement for laparoscopic abdominal procedures: A simple standardized method. J Am Coll Surgeons 2004; 198(1):163–173
  • www.opensg.org
  • http://qt.nokia.com/
  • Weller R, Mainzer D, Sagardia M, Hulin T, Zachmann G, Preusche C. A benchmarking suite for 6-DOF real time collision response algorithms. Proceedings of the 17th ACM Symposium on Virtual Reality Software and Technology (VRST 2010), Hong Kong, November 2010. pp 63–70
  • http://www.mathworks.it/
  • Ferrari V, Cappelli C, Megali G, Pietrabissa A. An anatomy driven approach for generation of 3D models from multi-phase CT images. In: Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K, editors. Computer Assisted Radiology and Surgery. Proceedings of the 22nd International Congress and Exhibition (CARS 2008), Barcelona, Spain, June 2008. Int J Comput Assist Radiol Surg 2008;3 Suppl 1:S271–S273
  • Yushkevich PA, Piven J, Hazlett HC, Smith RG, Ho S, Gee JC, Gerig G. User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability. Neuroimage 2006; 31(3):1116–1128
  • Cignoni P, Corsini M, Ranzuglia G. MeshLab: An open-source 3D mesh processing system. ERCIM News 2008; 73:45–46
  • Arun KS, Krogmeier JV, Potter LC. Identification of 2-D noncausal systems. Proceedings of the 26th IEEE Conference on Decision and Control, Los Angeles, CA, December 1987. pp 1056–1060
  • Condino S, Carbone M, Ferrari V, Ferrari M, Mosca F. Building patient specific synthetic abdominal anatomies. An innovative approach for surgical simulators from physical toward hybrid (submitted)
  • Breitenstein S, Nocito A, Puhan M, Held U, Weber M, Clavien PA. Robotic-assisted versus laparoscopic cholecystectomy: Outcome and cost analyses of a case-matched control study. Ann Surg 2008; 247(6):987–993
  • Olbrich B, Traub J, Wiesner S, Wichert A, Feussner H, Navab N. Respiratory motion analysis: Towards gated augmentation of the liver. In: Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K, editors. Computer Assisted Radiology and Surgery. Proceedings of the 19th International Congress and Exhibition (CARS 2005), Berlin, Germany, June 2005. Amsterdam: Elsevier; 2005. pp 248–253
  • Blackall JM, Ahmad S, Miquel ME, McClelland JR, Landau DB, Hawkes DJ. MRI-based measurements of respiratory motion variability and assessment of imaging strategies for radiotherapy planning. Phys Med Biol 2006; 51(17):4147–4169
  • McClelland JR, Blackall JM, Tarte S, Chandler AC, Hughes S, Ahmad S, Landau DB, Hawkes DJ. A continuous 4D motion model from multiple respiratory cycles for use in lung radiotherapy. Med Phys 2006; 33(9):3348–3358
