Research Article

Patient-specific surgical simulator for the pre-operative planning of single-incision laparoscopic surgery with bimanual robots

Pages 103-112 | Received 30 Sep 2011, Accepted 12 Jan 2012, Published online: 11 Apr 2012

Abstract

Introduction: The trend of surgical robotics is to follow the evolution of laparoscopy, which is now moving towards single-incision laparoscopic surgery. The main drawback of this approach is the limited maneuverability of the surgical tools. Promising solutions to improve the surgeon's dexterity are based on bimanual robots. However, since both robot arms are completely inserted into the patient's body, issues related to possible unwanted collisions with structures adjacent to the target organ may arise.

Materials and Methods: This paper presents a simulator based on patient-specific data for the positioning and workspace evaluation of bimanual surgical robots in the pre-operative planning of single-incision laparoscopic surgery.

Results: The simulator, designed for the pre-operative planning of robotic laparoscopic interventions, was tested by five expert surgeons who evaluated its main functionalities and provided an overall rating for the system.

Discussion: The proposed system demonstrated good performance and usability, and was designed to integrate both present and future bimanual surgical robots.

Introduction

During the past few decades there has been a paradigm shift in the methods of performing surgery. The advent of minimally invasive surgery (MIS) significantly reduced the degree of invasiveness, resulting in better outcomes for patients Citation[1]. MIS has evolved substantially over the last decade, with innovations ranging from the use of smaller ports and instruments to robotic surgery and even natural orifice transluminal endoscopic surgery (NOTES). In particular, the da Vinci® surgical robot (Intuitive Surgical®, Sunnyvale, CA), which was approved by the Food and Drug Administration in 2000, offers the following improvements over conventional techniques, especially laparoscopy: increased maneuverability of instruments; a higher number of degrees of freedom (DOF) (7 for the da Vinci® EndoWrist® vs. 5 for standard laparoscopic tools such as scissors or grippers); elimination of the trocar fulcrum effect (inversion of movements); tremor minimization; motion scaling; and 3D visualization, which partly offsets the absence of force feedback Citation[2].

The ultimate goal of MIS is to enable procedures to be performed with fewer incisions, less morbidity, and improved cosmetic results Citation[3]. At present, single-incision laparoscopic surgery (SILS) is one of the emerging approaches and can be performed using a variety of tools and techniques Citation[4]. SILS has the advantage of entailing only one incision and thus shortening the recovery time. Nonetheless, what seems to be an advantage of SILS from the aesthetic point of view is also an intrinsic limitation of this procedure: having only a single access point imposes a coaxial arrangement of the instruments, thereby resulting in difficult maneuverability owing to the proximity of the instrument tips inside the abdominal cavity Citation[3]. The 7 DOF for da Vinci® EndoWrist® could overcome the issue of limited maneuverability; however, the robotic arms do not work well when arranged coaxially through a single incision due to the risk of possible collision of the instruments with each other or with the camera, potentially leading to instrument malfunctions. For this reason, new configurations of robot arms are needed to improve the surgeon's dexterity in single-access robotic surgery. In this sense, the “chopstick” configuration of the da Vinci® robot arms represents a first attempt at avoiding collisions of the external abdominal arms Citation[3]. More recently, the VeSPA surgical instruments were introduced by Intuitive Surgical specifically to offset many of the limitations encountered with SILS. Initial experience with VeSPA instruments in urology has been encouraging Citation[5].

The realization of miniaturized surgical robots with bimanual arms simplifies the insertion of the robot and enhances maneuverability inside the abdominal cavity. An initial robot with 4 DOF at each arm was used to perform a non-survival cholecystectomy in a porcine model Citation[6], while the ARAKNES (Array of Robots Augmenting the KiNematics of Endoluminal Surgery) project aims at realizing bimanual robotic arms, completely inserted in the abdomen, with 6 DOF to further improve maneuverability Citation[7], Citation[8].

Bringing all the DOF inside the abdomen simplifies the surgical gesture and increases the workspace reachable by the end-effectors, but it also introduces additional challenges. One of the main issues is the risk of undesired collisions of the arms with organs not involved in the intervention. In practice, this means that during the execution of a robotic SILS intervention, the surgeon has to be aware of the position of each link in the robot arm. This can be difficult, considering that the surgeon focuses on the end-effector using the endoscopic view, which is close to the instrument tips. Maintaining a correct perception of orientation is generally very difficult for the surgeon in both traditional and robotic laparoscopy.

Surgical simulators could overcome these limitations by offering the surgeon the possibility of performing pre-operative planning of single-incision laparoscopic procedures in order to evaluate the optimal access port placement and the robot workspace.

Despite the success of robotic surgery, developing a surgical simulation system has been a challenge. Simulators based on virtual reality may help novice surgeons to acquire more confidence with surgical robotics, thus shortening the learning curve Citation[9]. Solutions currently on the market include the RoSS™ training module by Simulated Surgical Systems (Williamsville, NY), the dV-Trainer™ system by Mimic Technologies, Inc. (Seattle, WA), and the SEP Robot by SimSurgery® (Oslo, Norway) Citation[9–14].

This paper presents a virtual reality-based surgical simulator which reproduces the clinical scenario, based on patient-specific anatomy, for correct positioning of a robot with bimanual capabilities. The set-up of a surgical robot before performing the actual intervention is of paramount importance due to the limited workspace for the robot arms. Although in SILS procedures the umbilicus is typically selected as the point of insertion for the access port, in some cases it is advantageous to choose another location on the abdomen Citation[15]. Hence, the intent of this simulator is to provide surgeons with a tool for the optimal placement of the access port.

A solution for robotic assisted laparoscopy using the da Vinci® robot was previously reported Citation[16]. However, at present there is no published work discussing the placement of robots with bimanual arms for single-port procedures.

Given a bimanual robot with its own geometry and kinematics, and a patient-specific virtual anatomy, the proposed simulator allows determination of the optimal position of the access port for the robot insertion. In addition, it is useful to simulate and rehearse the motion of the robot to evaluate if dexterous movability is feasible, thereby avoiding potential damage to the surrounding anatomy. This simulation software, an extended version of the prototype described in reference Citation[17], is a software module of the ARAKNES project and can serve as an educational tool for teaching how to set up bimanual surgical robots. Although the described simulator was specifically designed for the robot developed within the ARAKNES project, it can integrate any present or future bimanual surgical robot. An overview of the simulator is depicted in Figure 1.

Figure 1. Overview of the surgical simulator, showing the software application running on a workstation and the hardware interface for the control.

Materials and methods

This section describes in detail the chosen approach for the design and implementation of all the components of the surgical simulator: the generation of the patient-specific anatomy, the modeling of the surgical robot, the development of the simulation functionalities, and the required algorithms and data structures. Screenshots of the simulator are shown in Figure 2.

Figure 2. Screenshots during a surgical simulation, showing the virtual environment with a patient-specific anatomy and the bimanual robot, and a side panel for the settings. (a) Port placement with the abdomen rendered opaque. (b) Workspace evaluation with the abdomen rendered transparent.

Overview and functionalities of the surgical simulation

The complete surgical simulator system comprises a multi-threaded application running on a workstation connected to two haptic devices (with or without force-feedback capabilities), each equipped with a customized gripper. The user, controlling the robot arms by means of the two haptic devices, can plan and simulate the surgical procedure in a virtual scenario, as rendered on the workstation screen.

The surgical simulator application offers a wide range of functionalities for planning and training for an intervention using the bimanual robot, enhancing the basic features of the prototype Citation[17].

The user can load the patient-specific anatomy to acquire knowledge of the 3D structure of the organs involved in the surgical procedure. Moreover, the surgeon can select the insertion point by simply clicking on the virtual abdomen. In this way, he or she can easily try different approaches, evaluating the robot workspace with respect to the patient-specific anatomical features. The current robot pose, including its position and orientation, can be saved or loaded at any time during the simulation. These data could also be used to show the surgeon the planned positioning of the robot during the surgical intervention.

Furthermore, the robot's capability to accomplish specific tasks can be tested virtually using the deformable models, i.e., by directly trying to interact with the deformable organs while avoiding unwanted collisions with surrounding anatomical structures not involved in the intervention.

Multiple viewpoints are available: a panoramic view (from outside the body of the patient); an access port view (from the camera placed on the trocar); and a surgical robot view (directly from the camera on the robot). These viewing modalities are represented in Figure 3, and the user can switch between them by pressing the right foot pedal, as described in the Hardware interface sub-section below.

Figure 3. Available viewing modalities: (a) panoramic, (b) from the access port camera, and (c) from the robot camera.

Finally, the surgeon can hide or show each organ included in the patient's anatomy in order to avoid unwanted occlusions, and can vary the transparency of the patient's abdomen.

Patient-specific virtual anatomy

The patient-specific virtual anatomy is generated by processing CT datasets of the patient undergoing a surgical intervention. These datasets (in DICOM format) are processed by our custom segmentation pipeline (based on the open-source software ITK-SNAP) Citation[20], Citation[32], which allows extraction of 3D surface information from the voxels of the dataset grids. This phase can also be performed starting with MRI datasets Citation[21].

In order to be functional for an interactive simulation, the virtual models obtained have to be optimized through mesh simplification, artifact removal, hole filling, smoothing and texturing. The whole mesh-editing process is accomplished using VCGLab MeshLab Citation[18] and Autodesk® Maya® Citation[19]. Thus, the 3D models, reduced in size and cleaned up, are ready to statically represent the virtual organs of the patient anatomy.

The virtual anatomy obtained in this way is static, and not deformable during the simulation. Therefore, to enable interaction of the robot with organs and also to simulate realistic behavior of the deformable anatomy, specific models able to represent deformable objects are required.

Our virtual surgery simulator supports two different types of physical models: a skeleton-based representation Citation[22] and a mass-spring-damper model (hereafter MSDm) Citation[23]. Both are generated from a volumetric (tetrahedral) representation of the specific organ to be modeled. Starting from an optimized surface mesh generated as described above, we created a tetrahedral model using the NETGEN software Citation[24] and stored the result in a MESH file (Neutral File Format). Hence, the simulation software allows the user to load the structure of the tetrahedral mesh from a pre-configured MESH file, which can also be used to define automatically the skeleton or the spring network of the desired deformable model. Finally, we have defined a custom file format (SKL) to easily configure the properties of a skeleton-based deformable model.

The entire patient anatomy is described in a custom file format (ANATOMY) that can be easily configured by the user and loaded during the surgical simulation.

Virtual bimanual surgical robot

The bimanual surgical robot for single-incision procedures is represented using an abstract model to support different joint-link configurations of the robot arms. This allows the integration “on the fly” of new robot designs, making the simulator a virtual testing platform for the mechanical design process of innovative surgical robots.

The complete robot model, including two independent 6-DOF arms, an introducer, and an access port, is shown in Figure 4. Each arm is modeled as a kinematic chain and ends with an end-effector. This concept isolates the kinematic structure from the 3D models of the robot, making any change in shape transparent to the robot designers (unless a kinematic change is also involved). Additionally, it offers the possibility of freely integrating various surgical instruments for a wide range of surgical tasks, such as gripping tools, forceps and scalpels.
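The kinematic-chain abstraction can be illustrated with a minimal forward-kinematics sketch: the end-effector pose is the product of the homogeneous transforms of the joints along the chain. The planar revolute-joint convention, link lengths, and function names below are illustrative assumptions, not the actual ARAKNES arm description.

```cpp
#include <array>
#include <cmath>
#include <vector>

using Mat4 = std::array<std::array<double, 4>, 4>;

Mat4 identity() {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0;
    return m;
}

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// One revolute joint about z followed by a link of given length
// along the rotated x-axis (a simple planar convention).
Mat4 jointTransform(double angle, double length) {
    Mat4 m = identity();
    m[0][0] = std::cos(angle); m[0][1] = -std::sin(angle);
    m[1][0] = std::sin(angle); m[1][1] =  std::cos(angle);
    m[0][3] = length * std::cos(angle);
    m[1][3] = length * std::sin(angle);
    return m;
}

// End-effector position: product of all joint transforms in the chain.
std::array<double, 3> endEffector(const std::vector<double>& angles,
                                  const std::vector<double>& lengths) {
    Mat4 t = identity();
    for (size_t i = 0; i < angles.size(); ++i)
        t = multiply(t, jointTransform(angles[i], lengths[i]));
    return {t[0][3], t[1][3], t[2][3]};
}
```

Because only `jointTransform` encodes the joint-link geometry, swapping in a different ROBOT description changes the chain without touching the rest of the simulation, which is the decoupling the text describes.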

Figure 4. Virtual representation of the surgical bimanual robot, showing the two independent robotic arms (green dot), a single access port (red dot), and a robot introducer (blue dot).

The robot arm description is stored in a custom file format (ROBOT), loaded at simulator start-up, and is simply reconfigurable by reloading another specific ROBOT file during the simulation.

Both the introducer and the access port are static and include laparoscopic cameras and lights controllable by the user during the interactive simulation.

The user has complete control of the whole bimanual robot: the two arms are controlled using a pair of haptic devices, while the keyboard is needed to control the access port and the introducer.

Interactive simulation algorithms and data structures

The simulator software has been developed in C++ (on Microsoft Windows 32/64-bit and Mac OS X) and relies on the following cross-platform open-source libraries: the CHAI 3D framework Citation[25] for visualization and dynamics simulation, and the Nokia™ Qt™ libraries Citation[26] for the Graphical User Interface (GUI), multi-threading, and network management.

The MSDm developed is an extension of the basic mass-spring network available in CHAI 3D. Our model integrates topological information in order to properly configure the nodes and links with the biomechanical parameters of the desired soft tissue Citation[23].
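The per-link force in such a network can be sketched as an elastic term proportional to the spring's elongation plus a damping term proportional to the relative node velocity along the spring axis. The parameter names below are illustrative; the actual biomechanical parametrization is the one described in Citation[23].

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Force exerted on node a by the spring-damper link (a, b).
// k: spring stiffness, c: damping coefficient (illustrative values).
Vec3 springDamperForce(const Vec3& pa, const Vec3& pb,   // node positions
                       const Vec3& va, const Vec3& vb,   // node velocities
                       double restLength, double k, double c) {
    Vec3 d{pb[0] - pa[0], pb[1] - pa[1], pb[2] - pa[2]};
    double len = std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
    Vec3 dir{d[0] / len, d[1] / len, d[2] / len};
    // Relative velocity projected on the spring axis.
    double relVel = (vb[0]-va[0])*dir[0] + (vb[1]-va[1])*dir[1]
                  + (vb[2]-va[2])*dir[2];
    double mag = k * (len - restLength) + c * relVel;   // Hooke + damping
    return {mag * dir[0], mag * dir[1], mag * dir[2]};
}
```

Summing this contribution over every link incident to a node, and integrating the resulting accelerations, gives the deformable-organ dynamics handled by the dynamic thread.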

The skeleton-based model consists of an internal skeleton connected to a high-resolution surface model. This approach decouples global deformations, simulated using the skeleton structure, from local changes, represented via surface vertex displacements Citation[22].

The collision detection (CD) process must detect both collisions between the surgical robot and the deformable organs and collisions between the two robot arms, and therefore requires an appropriate representation of all the interacting entities. For this purpose, each robot arm model has been enhanced with a data structure including a sphere tree for each of its joint links. In this way, we can set up different levels of detail for every joint link, i.e., provide precise CD for those robot components with a high probability of interaction and rough (but fast) CD for the others. The end-effector was considered a special joint, and was therefore designed to include a custom CD data structure depending on the type of end-effector. This solution also allows efficient detection of collisions between the two robot arms.
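A minimal sphere-tree query illustrates the level-of-detail idea: a coarse bounding sphere at each node prunes the test, and only if it overlaps does the query descend to finer child spheres. The tree layout and names here are an illustrative assumption about the per-link data structure, not the simulator's actual implementation.

```cpp
#include <cmath>
#include <vector>

struct Sphere { double x, y, z, r; };

struct SphereNode {
    Sphere bound;                      // bounding sphere of this subtree
    std::vector<SphereNode> children;  // empty => leaf (finest detail)
};

bool overlaps(const Sphere& a, const Sphere& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    double rr = a.r + b.r;
    return dx * dx + dy * dy + dz * dz <= rr * rr;
}

// True if any leaf sphere of `node` overlaps the query sphere.
bool collide(const SphereNode& node, const Sphere& query) {
    if (!overlaps(node.bound, query)) return false;  // prune whole subtree
    if (node.children.empty()) return true;          // leaf-level hit
    for (const auto& c : node.children)
        if (collide(c, query)) return true;
    return false;
}
```

Joint links with a high probability of interaction would carry deeper trees (more leaves, precise CD), while the others stop at a coarse bound (rough but fast CD), exactly as the paragraph above describes.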

The robot-tissue interaction modeling adopted is a standard penalty-based method Citation[27]. The software simulator is a multi-threaded application merging together the graphic, dynamic and haptic processes (Figure 5).

Figure 5. Flowchart of the simulation system showing visualization, dynamics, and haptic client/server processes, and interaction with the hardware interface.

The graphic thread manages the GUI developed entirely using the Nokia™ Qt™ framework Citation[26] and the visualization of the virtual environment via 3D OpenGL rendering, thanks to the scene graph functionalities provided by the CHAI 3D library Citation[25].

The dynamic thread handles the computation related to the physics engine, including collision detection, robot kinematics, force response, and the dynamics of deformable organs. All of these have been implemented by exploiting and extending the CHAI 3D Citation[25] dynamics functionalities.

The haptic thread, communicating with the haptic devices, manages the force-feedback rendering, ensuring the minimum update frequency (approximately 1 kHz) required to obtain a realistic force response.
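A fixed-rate loop of this kind can be sketched as follows: each iteration computes the force response and then sleeps until the next 1 ms deadline, keeping the cadence near 1 kHz. The function name and callback interface are illustrative, not the simulator's actual threading code.

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Run `computeForces` at a fixed period (default 1 ms, i.e. ~1 kHz)
// for a given number of iterations; returns the iterations completed.
int runHapticLoop(const std::function<void()>& computeForces,
                  int iterations,
                  std::chrono::microseconds period =
                      std::chrono::microseconds(1000)) {
    using clock = std::chrono::steady_clock;
    auto next = clock::now();
    int done = 0;
    for (int i = 0; i < iterations; ++i) {
        computeForces();   // force-rendering step (must fit the 1 ms budget)
        ++done;
        next += period;
        std::this_thread::sleep_until(next);   // hold the fixed cadence
    }
    return done;
}
```

Sleeping until an absolute deadline, rather than for a relative interval, prevents the drift that would otherwise accumulate when the force computation takes a variable fraction of the period.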

Hardware interface

Control of each robot arm is performed by means of a haptic interface with 6 DOF (three active and three passive). From those currently available on the market, the Phantom Omni® by Sensable (Wilmington, MA) was chosen (Figure 6).

Figure 6. The complete hardware interface: the Phantom Omni® haptic device (blue dot), the custom gripper (green dot), and the pedals (red dot) connected to the Picolog data logger (yellow dot).

To drive the opening and closing of the end-effector in the virtual scene, in the simplest case a surgical gripper, a customized device was introduced to replace the standard Phantom Omni® stylus. The device was manufactured by means of 3D printing, a rapid-prototyping process, using an Elite 3D Printer (Dimension, Eden Prairie, MN). The opening angle is tracked using a potentiometer, whose value is managed by a USB Picolog 1012 data logger (Picotech, St. Neots, United Kingdom). The customized gripper is depicted in Figure 6.

Additionally, two foot pedals were added to let the surgeon use two of the most important functionalities without losing control of the robot arms. By pressing the left pedal the user activates a friction effect that freezes the robot arms’ position while the haptic interface handle is repositioned. In this way, the robot workspace is not constrained by the haptic interface workspace, even if motion scaling is applied. The right pedal can be used to change the viewing modality.

Results

The proposed simulator was conceived for the pre-operative planning of single-port robotic laparoscopic interventions. Since cholecystectomy has a proven safety record and serves as a benchmark for surgical devices, it was selected as the target procedure Citation[28].

A patient underwent computed tomography (CT) scanning with contrast agent (with the stomach insufflated with carbon dioxide) at the Radiology Department of Cisanello Hospital in Pisa, Italy. This dataset was segmented, generating a virtual anatomy including the stomach, liver, gallbladder, pancreas, kidneys, spine, rib cage, portal vein, and aorta. The two target organs, i.e., the gallbladder and liver, were modeled as deformable objects using a skeleton-based model with 63 and 513 nodes, respectively. In contrast, all the remaining organs were modeled as static (with approximately 78,000 faces).

Five expert surgeons from different specialties (general surgery, thoracic surgery, urology, and gynecology), each with at least two years of experience in robotic-assisted surgery, were selected to test the proposed simulator. In particular, they were asked to evaluate the placement of the robotic access port and the movability of the bimanual robot. At the end of the test session, each surgeon rated the system's usability (related to the system's user-friendliness) and the robot guidance learning (intended to shorten the learning curve). The results, ranging from 1 (poor) to 5 (excellent), are shown in Table I. Overall, the results were encouraging, with an average evaluation rating of 4 (very good). However, a gynecology surgeon rated the port placement planning as 1 (poor), highlighting the lack of a tool to provide automatic assistance in the determination of the optimal insertion point for a given surgical target. The general surgeon rated the usability of the system as 1 (poor), since the present version does not allow evaluation of the configuration of the external robot used for the positioning of the introducer.

Table I.  Evaluation of the simulator by the group of surgeons.

In addition, the simulator offered the opportunity to evaluate the following aspects of a cholecystectomy procedure: the possibility of reaching the target organ, namely the gallbladder; how to interact with the liver to obtain exposure of the gallbladder (Figure 7); and – once the target is reached – the movability of the robot arms in order to avoid unwanted collisions with delicate tissues.

Figure 7. Interactions with the liver to expose the gallbladder during the simulation of a cholecystectomy. (a) The non-deformed anatomy. (b) The deformed liver.

The simulation was tested on a workstation running Windows Vista Ultimate 32-bit (Intel Core i7 at 3 GHz, 12 GB RAM, two nVidia GTX 285 GPUs) in a virtual scene composed of approximately 45,000 vertices and 90,000 triangles. The graphic process runs at 35 to 55 fps, while the dynamics frequency ranges from 300 to 500 Hz. The memory required to launch the simulation ranges from 50 to 80 MB, depending on the loaded anatomy.

Discussion

As surgical robotics opens new opportunities in surgical practice, special training and experience, along with high-quality assessment, are required. Drawing on the successful paradigm of flight simulation, Richard Satava first proposed virtual-reality training for surgical skills in the early 1990s Citation[29].

The ARAKNES planning and simulation module was developed as a platform for pre-operative planning for the ARAKNES bimanual robot Citation[8]. On the one hand, this simulator enables the surgeon to determine the proper placement of the robot in a virtual environment representing a surgical scenario incorporating the anatomy of a real patient. On the other hand, it permits interaction with the organs, some of which are deformable, for a preliminary evaluation of the robot's behavior. In particular, given the ARAKNES robot and a patient-specific virtual anatomy, the simulator allows evaluation of whether dexterous movability, i.e., avoiding collisions with the surrounding virtual anatomy in order to prevent potential damage during the real surgical procedure, is achievable. During the test session, the simulator, as evaluated by the expert surgeons, proved to be an effective system for pre-operative planning and robot guidance learning. However, two surgeons pointed out the need for two additional functionalities: a method for automatic assistance in the port placement, and a visualization of the complete robotic platform, including the external robot, to enable the evaluation of the configuration of the external manipulator. In this regard, we are integrating an external robot arm (Dionis Manipulator) controlling the placement of the access port Citation[30], and taking into account the adjustment of the positioning of the operating table.

At present, during the simulation, the surgeon interacts with the virtual anatomy obtained after segmenting radiological data in the pre-operative phase. In the future, the availability of intra-operative imaging devices (e.g., the 3D rotational angiograph) could enable updates of the virtual anatomy, taking into account the global deformation due to insufflation or a different patient position on the surgical bed, e.g., Trendelenburg or decubitus.

The proposed simulator is a versatile solution as it is cross-platform, can be used with haptic interfaces from different vendors, and can integrate further devices. In the present version, the surgeon controls the robot arms’ movement using a pair of Phantom Omni® haptic devices with a customized gripper. However, this software application is now ready to integrate an advanced haptic interface (Omega by Force Dimension, customized by EPFL) with 7 DOF (three for translations, three for rotations, and one for the handle) in both tracking and force feedback Citation[31].

The present ARAKNES planning and simulation module enhances the features of the prototype Citation[17]. We are currently working to integrate the biomechanical parameters (density, Young's modulus, etc.) into the MSDm to realistically simulate the behavior of a patient-specific organ Citation[23] and to implement complex surgical tasks, such as cutting and bimanual grasping.

The developed simulator can be used to simulate any present or future bimanual surgical robot for use not only in abdominal surgery, but also in other specialties such as thoracic surgery and gynecology, provided that the robotic surgical design is appropriate. Although the simulator was designed for single-port surgical robotic procedures, the port placement planning can be enhanced, allowing the surgeon to select further insertion points for additional instruments in case there is a need for a complex task during surgery.

Acknowledgments

The authors would like to express their sincere thanks to Dr. Lorenzo Faggioni for acquiring the CT datasets, Mrs. Marina Carbone for performing the segmentation, Mr. Giuseppe Tortora for the gripper design, and Mrs. Marta Niccolini and Mr. Gianluigi Petroni for their assistance during the test session.

Declaration of interest: This research has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 224565 (ARAKNES Project).

References

  • Mack MJ. Minimally invasive and robotic surgery. JAMA 2001; 285: 568–572
  • van der Meijden OA, Schijven MP. The value of haptic feedback in conventional and robot-assisted minimal invasive surgery and virtual reality training: A current review. Surg Endosc 2009; 23: 1180–1190
  • Joseph RA, Goh AC, Cuevas SP, Donovan MA, Kauffman MG, Salas NA, Miles B, Bass BL, Dunkin BJ. “Chopstick” surgery: A novel technique improves surgeon performance and eliminates arm collision in robotic single-incision laparoscopic surgery. Surg Endosc 2010; 24: 1331–1335
  • Romanelli JR, Earle DB. Single-port laparoscopic surgery: An overview. Surg Endosc 2010; 23: 1419–1427
  • Haber GP, White MA, Autorino R, Escobar PF, Kroh MD, Chalikonda S, Khanna R, Forest S, Yang B, Altunrende F, et al. Novel robotic da Vinci instruments for laparoendoscopic single-site surgery. Urology 2010; 76: 1279–1282
  • Lehman AC, Wood NA, Farritor S, Goede MR, Oleynikov D. Dexterous miniature robot for advanced minimally invasive surgery. Surg Endosc 2011; 25: 19–23
  • www.araknes.org
  • Piccigallo M, Scarfogliero U, Quaglia C, Petroni G, Valdastri P, Menciassi A, Dario P. Design of a novel bimanual robotic system for single-port laparoscopy. IEEE/ASME Trans Mechatronics 2010; 15: 871–878
  • Seixas-Mikelus SA, Kesavadas T, Srimathveeravalli G, Chandrasekhar R, Wilding GE, Guru KA. Face validation of a novel robotic surgical simulator. Urology 2010; 76: 357–360
  • www.simulatedsurgicals.com
  • www.mimic.ws
  • Sethi AS, Peine WJ, Mohammadi Y, Sundaram CP. Validation of a novel virtual reality robotic simulator. J Endourol 2009; 23: 503–508
  • www.simsurgery.com
  • Lin DW, Romanelli JR, Kuhn JN, Thompson RE, Bush RW, Seymour NE. Computer-based laparoscopic and robotic surgical simulators: performance characteristics and perceptions of new users. Surg Endosc 2009; 23: 209–214
  • Lauritsen ML, Bulut O. Single-port access laparoscopic abdominoperineal resection through the colostomy site: A case report. Tech Coloproctol 2011 Jun 11 [Epub ahead of print]
  • Hayashibe M, Suzuki N, Hashizume M, Kakeji Y, Konishi K, Suzuki S, Hattori A. Preoperative planning system for surgical robotics setup with kinematics and haptics. Int J Med Robot 2005; 1: 76–85
  • Moglia A, Turini G, Ferrari V, Ferrari M, Mosca F. Patient specific surgical simulator for the evaluation of the movability of bimanual robotic arms. Stud Health Technol Inform 2011; 163: 379–385
  • Cignoni P, Corsini M, Ranzuglia G. MeshLab: An open-source 3D mesh processing system. ERCIM News 2008; 73: 45–46
  • http://usa.autodesk.com/maya/
  • Ferrari V, Cappelli C, Megali G, Pietrabissa A, An anatomy driven approach for generation of 3D models from multi-phase CT images. In: Computer Assisted Radiology and Surgery. Proceedings of the 22nd International Congress and Exhibition, Barcelona, Spain, June 2008 (CARS 2008). Int J Comput Assist Radiol Surg 2008;3 Suppl 1:S271–S273
  • Wells WM, Grimson WL, Kikinis R, Jolesz FA. Adaptive segmentation of MRI data. IEEE Trans Med Imaging 1996; 15: 429–442
  • Conti F, Khatib O, Baur C, Interactive rendering of deformable objects based on a filling sphere modeling approach. Proceedings of the 2003 IEEE International Conference on Robotics and Automation (ICRA 2003), Taipei, Taiwan, September 2003. pp 3716–3721
  • Sala A, Turini G, Ferrari M, Mosca F, Ferrari V, Integration of biomechanical parameters in tetrahedral mass-spring models for virtual surgery simulation. Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2011), Boston, MA, August 2011. pp 4550–4554
  • Schöberl J. NETGEN: An advancing front 2D/3D-mesh generator based on abstract rules. Computing and Visualization in Science 1997; 1: 41–52
  • Conti F, Barbagli F, Morris D, Sewell C, CHAI 3D – An open-source library for the rapid development of haptic scenes. Proceedings of the 2005 IEEE World Haptics Conference, Pisa, Italy, March 2005
  • http://qt.nokia.com
  • Moore M, Wilhelms J, Collision detection and response for computer animation. Proceedings of the International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 1988), Atlanta, GA, August 1988. pp 289–298
  • Breitenstein S, Nocito A, Puhan M, Held U, Weber M, Clavien PA. Robotic-assisted versus laparoscopic cholecystectomy: Outcome and cost analyses of a case-matched control study. Ann Surg 2008; 247: 987–993
  • Satava RM. Virtual reality surgical simulator. The first steps. Surg Endosc 1993; 7: 203–205
  • Beira R, Santos-Carreras L, Sengül A, Samur E, Bleuler H, Clavel R, An external positioning mechanism for robotic surgery. JSME Technical Journal 2011 (In press)
  • Santos Carreras L, Beira R, Bleuler H. Ergonomic handle for haptic devices. US patent 61/453.972. Filing date 03/18/2011
  • Yushkevich PA, Piven J, Hazlett HC, Smith RG, Ho S, Gee JC, Gerig G. User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability. Neuroimage 2006; 31: 1116–1128
