Original Articles

Influence of sampling accuracy on augmented reality for laparoscopic image-guided surgery

Pages 229-238 | Received 29 Aug 2019, Accepted 10 Jan 2020, Published online: 05 Mar 2020

Abstract

Purpose

This study aims to evaluate the accuracy of point-based registration (PBR) when used for augmented reality (AR) in laparoscopic liver resection surgery.

Material and methods

The study was conducted in three different scenarios in which the accuracy of sampling targets for PBR decreases: using an assessment phantom with machined divot holes, a patient-specific liver phantom with markers visible in computed tomography (CT) scans, and in vivo, relying on the surgeon’s anatomical understanding to perform annotations. Target registration error (TRE) and fiducial registration error (FRE) were computed using five randomly selected positions for image-to-patient registration.

Results

AR with intra-operative CT scanning showed a mean TRE of 6.9 mm for the machined phantom, 7.9 mm for the patient-specific phantom and 13.4 mm in the in vivo study.

Conclusions

AR showed an increase in both TRE and FRE throughout the experimental studies, showing that AR is not robust to the sampling accuracy of the targets used to compute image-to-patient registration. Moreover, an influence of the size of the volume to be registered was observed. Hence, it is advisable to reduce both the errors due to annotations and the size of registration volumes, which can otherwise cause large errors in AR systems.

Introduction

Image-guided surgery (IGS) systems aim to provide navigation to surgeons in order to improve the accuracy and safety of procedures. IGS utilizes computer-based systems to provide virtual image overlays and target the surgical sites [Citation1]. In the past 30 years, with the technological advances in computer science and medical imaging, IGS has greatly expanded [Citation1–3]. IGS combines medical images, such as magnetic resonance imaging (MRI) or computed tomography (CT), with the intra-operative images shown to the surgeon during surgery through a laparoscope camera, an endoscope camera or ultrasound (US). This information is displayed to surgeons either through 3D models on separate monitors or overlaid as augmented reality (AR). Instrument tracking technologies are used in IGS to provide reliable information regarding the position and orientation of surgical instruments. Moreover, tracking technologies are also used for registration tasks [Citation4].

This study’s field of application of IGS is Liver Laparoscopic Resection Surgery (LLRS). Conventionally, before LLRS, the patient undergoes volumetric scans, such as CT or MRI, for diagnosis and to plan optimal treatment. These scans, known as pre-operative scans, are used by the surgeons to plan the removal of tumours from the liver [Citation5]. However, when the patient is on the surgical table, his/her position and orientation are different from the scanning position. For this reason, surgeons use their understanding of the anatomy of the liver and intra-operative imaging modalities (such as laparoscopic video and US) to spatially correlate the anatomy of the liver to the diagnostic CT or MRI scans. This approach may not only introduce inaccuracies, but also makes the surgery more dependent on the experience of the surgeon. Moreover, in laparoscopic surgery, pneumoperitoneum (inflation of the patient’s abdomen) is always performed to create sufficient space to introduce the laparoscopic instruments and camera into the abdomen. This is problematic because pneumoperitoneum also deforms the shape of the organs [Citation6], making the anatomical correlation with the CT scan even more complicated for the surgeon.

IGS can help the surgeon avoid unfavourable intra-operative incidents by providing surgical navigation. This is becoming increasingly important because parenchyma-sparing (PS) liver resection has become a standard surgical treatment for colorectal liver metastases (CRLM) [Citation7], as it facilitates repeated liver resections, which can increase survival [Citation8]. During these procedures, lesions need to be located intra-operatively, the resection performed with the planned margin, and vessels cut at optimal locations. IGS can provide surgeons with an overview of the vascular structures, the position of the lesion, etc. directly in the operating field. For example, for resections in posterosuperior liver segments, which in some cases are more complicated than formal resections due to accessibility and visibility [Citation9], IGS could be used to focus attention in the correct direction and on the structure where the lesion is located.

Within IGS, AR is a computer vision technique in which computer-generated images are superimposed onto video frames to enhance the visualization and improve the spatial understanding of the scene. In IGS, AR is achieved via superposition of 3D reconstructions of segmented medical volumes, such as the CT scan, of structures of interest (e.g. tumours or blood vessels, as shown in Figure 1) onto the laparoscope camera view [Citation10].

Figure 1. Use of segmentation models and CT scans as a navigation map for laparoscopic liver resection surgery.


This study aims to understand the influence of human-induced error for AR on the laparoscope camera during LLRS. This experimental work studies AR accuracy in three experiments: an accuracy verification phantom, a patient-specific liver phantom and an in vivo porcine model with intra-operative CT scan.

Material and methods

Background methodology

In LLRS, the surgeon conventionally performs surgery visualizing the organs through the video of a laparoscopic camera, together with the assistance of medical images and, when available, 3D surface models reconstructed through segmentation. All this information can be displayed on a separate screen within the OR (as shown in Figure 1) or can be combined with the laparoscope perspective into AR (as shown in Figure 2). As aforementioned, this study focuses on AR. The following sections describe the main algorithms used to achieve AR in this study: hand-eye camera calibration, point-based registration (PBR) and AR re-projection.

Figure 2. Frames of AR showing re-projected blood vessel structures in the phantom (left) and the in vivo study (right).


Hand-eye camera calibration

‘Hand-eye’ camera calibration was coined in the robotics field [Citation11]. In IGS for LLRS, hand-eye camera calibration aims to compute the transformation between a stereo laparoscope camera (the ‘eye’), an ENDOEYE flex 3D (Olympus, Tokyo, Japan), and the instrument markers rigidly attached to the camera (the ‘hand’), as shown in Figure 3. Instrument tracking was achieved in this study using the optical tracking system Polaris Spectra (NDI, Waterloo, Canada). Optical markers were rigidly attached to the laparoscope camera using an NDI Polaris Rigid Body (Part Number 8700449). The six degrees of freedom of the laparoscope camera were then tracked at a sampling rate of 60 Hz. To perform camera calibration and hand-eye camera calibration, a calibration plate with a 96-dot pattern and four optical markers at accurately machined locations was manufactured by Cascination (Cascination, Bern, Switzerland). In this study, the left channel of the stereo laparoscope camera was calibrated and used for AR evaluation. The calibration plate is white with laser-printed black circles; OpenCV algorithms were used for detection of the ellipsoidal centroids and for camera calibration [Citation12]. To compute the hand-eye calibration transform $T^{C}_{M}$ (according to Figure 3) from a single pose, Equation (1) could have been used:

(1) $T^{C}_{M} = T^{O}_{M} \cdot T^{S}_{O} \cdot T^{SC}_{S} \cdot T^{C}_{SC}$

Figure 3. Hand-eye camera calibration transformation diagram, from the laparoscope camera (C) to the optical markers attached to it (M).


where O is the coordinate system of the optical tracking system, M of the optical markers attached to the laparoscope camera, S of the optical markers on the calibration plate, SC of the calibration plate, and C is the coordinate system origin of the camera pose. The notation used in this paper indicates as superscript the coordinate system with respect to which the transformation is applied, and as subscript the coordinate system towards which it transforms. Moreover, all transformations described in this study are 4 × 4 matrices in homogeneous coordinates. $T^{O}_{M}$ and $T^{S}_{O}$ are provided by the optical tracking system, whereas $T^{C}_{SC}$ is obtained through pose estimation of the calibration plate according to Zhang [Citation12]. The calibration plate was manufactured so that the axes and origins of S and SC coincide; therefore, for this study, $T^{SC}_{S}$ is a 4 × 4 identity matrix.

In order to improve both the accuracy and reliability of the hand-eye camera calibration, instead of using Equation (1), the authors of this study implemented a multiple-pose (N-pose) hand-eye camera calibration, based on Lee et al. [Citation13]. Extending the set of equations to multiple (N) poses, Equation (1) can be rewritten as follows:

(2)
$$
\begin{bmatrix}
I \otimes (R^{O}_{M})_{1}^{-1} & Z_{9,3} \\
Z_{3,9} & (R^{O}_{M})_{1}^{-1} \\
\vdots & \vdots \\
I \otimes (R^{O}_{M})_{N}^{-1} & Z_{9,3} \\
Z_{3,9} & (R^{O}_{M})_{N}^{-1}
\end{bmatrix}
\times
\begin{bmatrix}
\operatorname{vec}(R^{C}_{M}) \\
t^{C}_{M}
\end{bmatrix}
=
\begin{bmatrix}
\operatorname{vec}(R^{C}_{O})_{1} \\
\left( t^{C}_{O} + (R^{O}_{M})^{-1} t^{O}_{M} \right)_{1} \\
\vdots \\
\operatorname{vec}(R^{C}_{O})_{N} \\
\left( t^{C}_{O} + (R^{O}_{M})^{-1} t^{O}_{M} \right)_{N}
\end{bmatrix}
$$

where $I$ is the 3 × 3 identity matrix, $\otimes$ the Kronecker product, $Z_{m,n}$ an m × n zero matrix and $\operatorname{vec}(\cdot)$ column-wise vectorization.

Minimization of this linear system through least-squares estimation results in the matrix $T^{C}_{M}$. More information regarding the accuracy and theory of the algorithms is available from the studies by Lee et al. [Citation13] and Lai and Shan [Citation14].
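The multi-pose idea can be illustrated numerically. The sketch below is a simplified stand-in for the stacked least-squares system above (not the authors' implementation): each pose yields an estimate of the hand-eye transform via Equation (1), and the per-pose estimates are averaged, with the mean rotation projected back onto SO(3).

```python
import numpy as np

def project_to_so3(m):
    """Closest rotation matrix to m in the Frobenius norm, via SVD."""
    u, _, vt = np.linalg.svd(m)
    if np.linalg.det(u @ vt) < 0:   # guard against reflections
        u[:, -1] *= -1
    return u @ vt

def hand_eye_multipose(T_marker2tracker, T_plate2tracker, T_cam2plate):
    """Average per-pose hand-eye estimates (camera -> camera markers).

    Per pose i, Equation (1) gives
        X_i = inv(T_marker2tracker[i]) @ T_plate2tracker[i] @ T_cam2plate[i]
    (the plate and its markers share one frame, so that factor is identity).
    """
    xs = [np.linalg.inv(a) @ b @ c
          for a, b, c in zip(T_marker2tracker, T_plate2tracker, T_cam2plate)]
    mean = np.mean(xs, axis=0)
    X = np.eye(4)
    X[:3, :3] = project_to_so3(mean[:3, :3])  # re-orthogonalize the rotation
    X[:3, 3] = mean[:3, 3]
    return X
```

With noise-free synthetic poses this recovers the ground-truth transform exactly; with real tracking data, averaging reduces the influence of per-pose noise, in the same spirit as the stacked linear system.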

Point-based registration

In the medical field, image registration aims to establish spatial correspondences between volumetric datasets [Citation15]. The alignment of volumetric CT/MRI images (and segmented models) to the liver configuration when the patient is on the operating table is a field of image registration known as image-to-patient or image-to-physical registration. Two solutions are most commonly used in the literature to perform image-to-patient registration: PBR [Citation16] and the single landmark registration method [Citation17]. Both algorithms work by sampling and matching a set of corresponding positions between coordinate systems. Within this study, PBR was implemented and tested to connect the image space I (the CT/MRI coordinates) and the patient’s position on the surgical table P. Consequently, image-to-patient registration is represented by the transform $T^{I}_{P}$. To achieve AR on the camera perspective, we use image-to-patient registration to transform the image coordinates I into camera coordinates C, as shown in Figure 4, following the equation:

(3) $T^{I}_{C} = (T^{C}_{M})^{-1} \cdot T^{O}_{M} \cdot T^{P}_{O} \cdot T^{I}_{P}$

Figure 4. AR transformation diagram which combines hand-eye calibration and image-to-patient registration.


However, if the points used for registration are already expressed with respect to the optical tracking system, the transformation $T^{P}_{O}$ will be an identity matrix. This can happen if we are registering to coordinate system O by sampling positions using an optically tracked instrument.
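The PBR step itself can be sketched with the standard SVD-based rigid alignment (Arun et al.'s method); this is a minimal illustration under that assumption, not necessarily the exact implementation used in the study. Note that the paper evaluates FRE/TRE as 2D re-projection errors, whereas the helper below measures them in 3D for simplicity.

```python
import numpy as np

def point_based_registration(pts_image, pts_patient):
    """Least-squares rigid transform mapping image-space fiducials onto
    their patient-space counterparts (SVD method; inputs are N x 3)."""
    ci, cp = pts_image.mean(axis=0), pts_patient.mean(axis=0)
    h = (pts_image - ci).T @ (pts_patient - cp)   # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # avoid reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cp - r @ ci
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = r, t
    return T

def registration_rmse(T, pts_image, pts_patient):
    """RMS distance of transformed points to their targets (FRE-like when
    evaluated on registration fiducials, TRE-like on held-out targets)."""
    mapped = pts_image @ T[:3, :3].T + T[:3, 3]
    return float(np.sqrt(np.mean(np.sum((mapped - pts_patient) ** 2, axis=1))))
```

Evaluating `registration_rmse` on the five registration fiducials mirrors FRE, while evaluating it on the held-out landmarks mirrors TRE.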

Re-projection of volumes in augmented reality

To complete the AR after Equation (3), re-projection of 3D volumes to 2D images was performed using an additional transformation, commonly referred to as the perspective projection matrix or camera intrinsic matrix [Citation12]. Re-projection of models in the I (Image) coordinate system as AR on the camera view can be performed through Equation (4):

(4)
$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \underbrace{\begin{bmatrix} f_x & \gamma & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}}_{K}
\cdot \left[ (T^{C}_{M})^{-1} \cdot T^{O}_{M} \cdot T^{P}_{O} \cdot T^{I}_{P} \cdot \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \right]_{3 \times 1}
$$

where u and v are the 2D positions on the image plane in pixels, x, y, z are the 3D positions in the I coordinate system and the matrix K is the intrinsic parameter matrix, computed through camera calibration. The bracketed term is the non-homogeneous transformed 3D position in C coordinates (the 3 × 1 vector taken from the resulting 4 × 1 transformed point). Rectification parameters are also used during projection of the volumes.
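Equation (4) in code form: a minimal sketch of re-projecting 3D positions into pixel coordinates, assuming the combined image-to-camera transform has already been composed and ignoring lens distortion and rectification. The intrinsic values below are illustrative, not the calibrated parameters of the study's laparoscope.

```python
import numpy as np

def reproject(K, T_image2cam, pts_3d):
    """Project 3D points (N x 3, image/CT space) to pixels per Equation (4):
    rigid transform into the camera frame, intrinsics, perspective divide."""
    homo = np.hstack([pts_3d, np.ones((len(pts_3d), 1))])
    cam = (T_image2cam @ homo.T)[:3]   # non-homogeneous camera coordinates
    uv = K @ cam                       # s * [u, v, 1]^T
    return (uv[:2] / uv[2]).T          # N x 2 pixel positions

# Illustrative intrinsics: fx = fy = 800 px, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
```

A point on the optical axis projects to the principal point, e.g. `reproject(K, np.eye(4), np.array([[0.0, 0.0, 100.0]]))` yields `[[320., 240.]]`.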

Experimental protocol

The experimental protocol of this study aims to examine the influence of human errors (registration-related errors) on AR re-projection errors. The algorithms used to generate the AR were kept consistent throughout each experiment. Three experiments were conducted with decreasing accuracy of the positions sampled during image-to-patient registration. The first experiment makes use of a precisely machined, custom-built optical validation phantom which follows the ASTM F2554-10 standard for optical tracking accuracy measurement, described by Teatini et al. [Citation4]. The second experiment evaluates AR accuracy on a patient-specific liver phantom with markers visible in the CT. Finally, in order to test the AR in a fully clinical scenario, the algorithms described above were tested through a porcine experiment (in vivo model).

For each experiment, image-to-patient registration was performed multiple times, always using five registration landmarks, with the remaining landmarks used to compute the inaccuracy of the AR. Re-projection error in AR was computed as the distance between manually annotated positions (ground truth positions) and their corresponding re-projected positions on the AR frames, as also done by Teatini et al. [Citation18]. An example of the re-projected points, for all three experiments, is visible in Figure 5. Two parameters were evaluated in this study: fiducial registration error (FRE) and target registration error (TRE). FRE represents the accuracy of re-projected markers used to compute image-to-patient registration, and TRE the accuracy for all other points across the rest of the volume [Citation16]. Both errors are computed as re-projection errors. These TRE and FRE fully represent, in our opinion, the accuracy of AR in terms of how well re-projected volumes align with ground truth positions. Moreover, to provide the reader with the FRE and TRE of the AR in millimetres, and not only in pixels, the authors make use of the inverse of the matrix K in Equation (4), as described by Thompson et al. [Citation19].
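A hedged sketch of that pixel-to-millimetre conversion (in the spirit of Thompson et al. [Citation19], not necessarily their exact formulation): back-project the pixel offset through the inverse of K and scale by the target's depth in camera coordinates, which is assumed known from the registered model.

```python
import numpy as np

def error_px_to_mm(uv_reprojected, uv_ground_truth, depth_mm, K):
    """Approximate metric re-projection error at a known target depth."""
    d_px = np.atleast_2d(uv_ground_truth - uv_reprojected)   # N x 2 offsets
    d_h = np.hstack([d_px, np.zeros((len(d_px), 1))])        # (du, dv, 0)
    rays = (np.linalg.inv(K) @ d_h.T).T                      # offsets at unit depth
    return depth_mm * np.linalg.norm(rays[:, :2], axis=1)
```

For example, with a 1000 px focal length, a 10 px error at 100 mm depth corresponds to roughly 1 mm.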

Figure 5. AR frames for each experiment procedure, validation phantom (left top), liver phantom (right top) and in vivo study (bottom). The registration targets were manually annotated from the frames, TRE and FRE were computed as the distances to the re-projected corresponding dots.


Validation phantom

The optical validation phantom was custom produced with 28 titanium target divot pins designed for TRE calculation on various planes and orientations; more details are available in [Citation4]. Based on the algorithms previously described, the divot pins were registered and then re-projected onto the laparoscope frame as AR (shown in Figure 5). In this scenario, the accuracy of sampling registration targets is very good because they are precisely machined targets at measured locations.

Liver phantom

The patient-specific liver phantom was designed based on the CT scans of a patient. The liver phantom used throughout the experiments was produced by the ARTORG research centre (ARTORG, Bern, Switzerland) according to Pacioni et al. [Citation20]. Fourteen stainless steel M6 washers were glued around the whole surface of the liver phantom and served as landmarks. An intra-operative SOMATOM Definition Edge CT scanner (Siemens, Munich, Germany) was used to obtain a CT scan in the OR with the liver phantom positioned on the surgical table. The washers were segmented from the CT scan through intensity-based thresholding and clustered into positions through fuzzy means classification.
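The thresholding-and-clustering step can be sketched as follows. This is a simplified stand-in that uses connected-component labelling in place of the paper's fuzzy means classification; the Hounsfield threshold is an illustrative assumption.

```python
import numpy as np
from scipy import ndimage

def washer_centroids(ct_volume, hu_threshold=2000.0):
    """Cluster metal-marker voxels into 3D centroid positions.

    Threshold the bright metal voxels, label connected components, and
    return one centroid per component, in voxel indices (multiply by the
    voxel spacing to obtain millimetres).
    """
    mask = ct_volume > hu_threshold
    labels, n = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask, labels, index=range(1, n + 1))
    return np.array(centroids)
```

Each returned centroid then serves as one landmark position for the PBR step.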

In comparison to the previous experiment on the validation phantom, the liver phantom introduces more inaccuracy in determining the correct positions to sample. This is due to the deformability of the phantom, as well as the fact that the surgeon has to aim at the centre of the washers with a tracked pointer instead of using precisely machined divot holes. Moreover, the 3D spatial locations of the markers are not precisely measured (they are clustered from the CT scan, as aforementioned), which further increases the sampling error.

In vivo model

The pre-clinical trial was necessary to calculate the AR accuracy in a more realistic clinical scenario: the positions for PBR are points sampled directly on the liver surface. For this reason, they are neither visible in the CT scan (like the metallic washers in the liver phantom) nor precisely machined divot positions (as in the validation phantom). Hence, the correspondence between the positions in the CT scan and the laparoscopic camera perspective is based on the surgeon’s anatomical understanding of the liver when required to annotate, on the CT scan, the locations sampled laparoscopically. A 59.5 kg porcine model was positioned on the surgical table in an OR equipped with the intra-operative CT scanner. After establishing pneumoperitoneum at 13 mmHg through a Veress needle, an intra-operative CT scan was performed. Intra-operative rather than pre-operative imaging was used to minimize the inaccuracy caused by soft-tissue deformation due to pneumoperitoneum. Subsequently, using an optically tracked monopolar cauterizer (Aesculap, Tuttlingen, Germany), the surgeon made 15 cauterization marks (ablation marks) on the liver surface across the whole visible surface (similarly to [Citation15]).

Reproducing the previous experiments, five of the targets were used to perform image-to-patient registration, whereas the other 10 were used to compute the accuracy. The cauterization marks performed on the liver surface were matched with the annotations made by the surgeon on a segmentation model of the liver parenchyma from the intra-operative CT scan.

This experiment relied on the surgeon’s anatomical understanding to calculate the positions of the landmarks on the CT scan. These annotation errors are quantified through the fiducial localization error (FLE) [Citation21]. FLE was evaluated by laparoscopic insertion of needles at the centre of the ablation marks, so as to provide approximate ground truth positions for the annotations (approximate because insertion of the fiducials can cause some deformation to the liver tissue; moreover, segmentation and reconstruction errors may also be present).

Results

For each experiment, a total of 100 AR frames were manually annotated to evaluate TRE and FRE. Five markers were used for registration and the rest for accuracy evaluation. Table 1 summarizes the average TRE for all three experiments, for each registration procedure, in millimetres.

Table 1. Results of TRE and FRE in (mm) for each experiment, separated into registration procedure, with standard deviation.

Validation phantom

A total of 748 re-projected divot positions were used to evaluate TRE and 164 for FRE. The average TRE across registrations was found to be µ = 6.87 mm, with standard deviation of σ = 1.95 mm. FRE resulted in µ = 6.93 mm, with standard deviation of σ = 1.29 mm.

Liver phantom

A total of 1403 re-projected metallic marker centroid positions were used to evaluate TRE and 450 for FRE. The average TRE across registrations was found to be µ = 7.85 mm, with standard deviation of σ = 6.19 mm. FRE resulted in µ = 7.13 mm, with standard deviation of σ = 5.68 mm.

In vivo model

A total of 3074 re-projected cauterization points were used to compute TRE and 1137 for FRE. The average TRE across registrations was found to be µ = 13.37 mm, with standard deviation of σ = 6.25 mm. FRE resulted in µ = 11.84 mm, with standard deviation of σ = 6.44 mm. FLE, computed as the RMS distance between the inserted fiducials and the annotated cauterization points, was, on average, 16.40 mm.

Because parts of the data were non-normal, six Kruskal-Wallis tests were conducted in SPSS (IBM, Armonk, NY) to compare the TRE and FRE across registration procedures for each experiment. Significant differences (p < .05) between registration procedure accuracies were found for the validation phantom, for both TRE and FRE, χ2(9) = 290.06, p = 3.34E−57 and χ2(9) = 78.69, p = 2.93E−13, respectively. No significant differences were found for the liver phantom TRE, but they were found for the FRE, χ2(9) = 21.28, p = .011. Significant differences between registration procedures were found for TRE and FRE in the in vivo experiment, χ2(9) = 152.04, p = 3.33E−28 and χ2(9) = 29.96, p = .00045, respectively.

Subsequently, an additional Kruskal-Wallis test (again because of non-normality of the data) showed that there was a statistically significant difference between the experiments in terms of TRE, χ2(2) = 1223.61, p = 1.976E−266, with a mean rank score of 1666.21 for the validation phantom, 1781.41 for the liver phantom and 3222.93 for the in vivo study. FRE also revealed significant differences, χ2(2) = 254.72, p = 4.88E−56, with a mean rank score of 668.46 for the validation phantom, 595.34 for the liver phantom and 1017.01 for the in vivo study.
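This kind of non-parametric comparison can be reproduced with SciPy. The sketch below uses synthetic samples drawn from the reported means and standard deviations (hypothetical data, not the study's measurements) purely to illustrate the test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic TRE samples (mm) matching the reported means/SDs per experiment.
tre = {
    "validation phantom": rng.normal(6.87, 1.95, 200),
    "liver phantom": rng.normal(7.85, 6.19, 200),
    "in vivo": rng.normal(13.37, 6.25, 200),
}
h_stat, p_value = stats.kruskal(*tre.values())
print(f"H = {h_stat:.1f}, p = {p_value:.2e}")
```

With such a large gap between the in vivo mean and the phantom means, the test reports a significant difference, mirroring the SPSS result.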

Discussion

Based on the results obtained from the three experiments, the accuracy with which a position is annotated on a CT scan volume affects both the TRE and FRE in AR. This is inferred from the comparison between the results in the validation phantom and liver phantom, where points are measured or automatically clustered, and the results on the in vivo model, which depends on human interaction through annotation.

It is noteworthy that, although the average TREs and FREs of the validation and liver phantoms are very similar, TRE varies significantly between registration procedures in the validation phantom but not in the liver phantom. This can be explained by the differences in size between the volumes to be registered. The validation phantom presents a volume of 4320 cm3, whereas the patient-specific phantom is 1882 cm3 (the in vivo liver was 2393 cm3). Moreover, the validation phantom targets are partially symmetrical, and some positions are almost collinear (as can be seen in Figure 5). Sampling of five positions across the volume was performed randomly in each of the experiments. Therefore, depending on the positions of the targets, a larger volume will be more affected than a smaller volume. Based on these results, for registration of large volumes, it is preferable to use a larger number of targets to compute the image-to-patient registration and, possibly, a better spatial disposition of these targets with respect to the volume (such as ensuring non-collinearity between registration targets or closeness to areas of interest such as tumours).

The statistical differences between TRE and FRE in the in vivo study may depend on the FLE of the cauterization markers used to compute image-to-patient registration. If, for example, a position were annotated inaccurately, this could greatly affect the image-to-patient registration matrix, causing an increase in both TRE and FRE (that is, a decrease in AR accuracy). In order to mitigate the effect of sampling error in PBR, the authors propose using intra-operative fiducials on the liver surface, which could be removed post-surgery (or made of biocompatible/biodegradable material). These fiducials could be detected in the intra-operative CT/MRI scan and used to perform PBR. This would probably reduce the sampling error to the accuracy evaluated in the liver phantom testing (it would greatly reduce FLE) and may greatly improve the accuracy of the AR re-projection. However, this may prolong and complicate the surgical procedure.

Within this study, targets located on the centre of the liver surface were the most complicated to annotate correctly and should therefore be avoided. Alternatively, using targets on the edges of the liver could also reduce FLE (if the parenchyma were rigid enough during sampling). Positions that are also stable in the liver include intersections of the parenchyma with major blood vessels (such as the portal vein) or bifurcations of blood vessel structures. These structures can serve as solid registration targets; however, they can only be sampled from within the liver after resection.

Some limitations to this study include the fact that the scan used to perform AR was intra-operative, which is currently not part of the surgical workflow in most hospitals. Pre-operative imaging does not account for intra-operative deformations, such as pneumoperitoneum (static deformations); hence, inaccuracies due to the non-rigidity of the liver would be present in the current surgical workflow, as mentioned by Thompson et al. [Citation19,Citation22]. However, intra-operative CT and MRI scanners are in production for Hybrid OR suites and show very large increases in AR accuracy; alternatively, intra-operative data could be acquired by other means (such as stereo surface reconstruction [Citation23] or laparoscopic US [Citation24]). Alternatives to intra-operative imaging, such as biomechanical modelling, are a valid solution, as described in [Citation25–28], but may be affected by the need to estimate the viscoelastic properties of soft tissue and the boundary conditions. If elastography were used to characterize the viscoelasticity of the tissue, and the boundary conditions were known for each patient, biomechanical modelling could be used to account for deformations.

It would be interesting to study the effect of the spatial disposition of registration targets across volumes of variable size, to validate the assumption that the volume differences caused the differences in accuracy between the validation phantom and the other experiments. Another limitation of the study is that the cauterization marks were performed on the liver surface and might not fully represent the error along the depth axis. However, the marks were made across the liver as deep as possible towards the diaphragm, which allowed us to calculate TRE and FRE for positions at various depths, though not as deep as a blood vessel or a tumour within the tissue. A further limitation is that both the camera calibrations and hand-eye calibrations were performed as they would be in a surgical environment, without thorough refinement of the calibration procedures. Furthermore, the annotations for PBR could have been performed by multiple surgeons to further validate the hypothesis that annotations (FLE) can cause significant differences in registration accuracy for AR.

The investigated inaccuracy in terms of TRE and FRE is larger than that accepted by surgeons. However, the use of AR, complemented with intra-operative US, could still be useful for visualization of the structures in the resection field. Thus, it might help surgeons better understand spatial distribution of anatomical structures and lead to safer surgery. Even with current quality, where resection lines cannot be followed blindly due to the system’s inaccuracy, there is still clinical value in the use of this AR system, especially for spatial understanding.

Conclusions

This study aims to show that the accuracy in sampling registration targets directly affects the accuracy of AR through PBR. The laparoscope camera, CT scanner, optical markers and algorithms used were consistent throughout all experiments; the only difference was the volumes to be registered. The results show that the accuracy of AR through PBR can change based on the accuracy in sampling the positions used to compute image-to-patient registration, and possibly also on the size of the volume to be registered and the spatial disposition of the registration targets.

The overall accuracy of the AR for the in vivo model was around 13 mm in terms of TRE and 11 mm in terms of FRE. However, the results also show that the TRE can worsen greatly depending on the registration procedure if the targets are based on the surgeon’s annotations (TRE can exceed a centimetre, as shown in Table 1 and validated in other studies [Citation18,Citation22]).

In the Western world, the main indication for liver resection is CRLM. If the proposed solution were adopted for clinical use in CRLM, the error of the AR should not exceed 6 mm according to the authors. The reason is that 6 mm is acceptable for surgeons because the safety margin for CRLM (resection margin) is 1–3 mm [Citation29], and surgery is normally planned with a 1 cm margin. Therefore, a planned resection line with 1 cm of margin, using an AR with 6 mm of error, still leaves 4 mm: 3 mm of space in addition to a 1 mm safety margin.

Based on the results of this study, it is necessary to improve the image-to-patient registration, possibly with the use of user-independent fiducials for registration and a smaller volume to be registered. Overall, improvements to this AR system are necessary; however, we have shown that better sampling accuracy leads to much better AR accuracy, which will allow AR to be of use in surgery.

Ethical approvals

All procedures performed in studies involving animals were in accordance with the ethical standards of the institution or practice at which the studies were conducted. This article does not contain any studies with human participants performed by any of the authors.

Supplemental material

Acknowledgements

The authors would like to express their gratitude to all the OR team at ‘The Intervention Centre’, Oslo University Hospital.

Declaration of interest

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.

Correction Statement

This article has been republished with minor changes. These changes do not impact the academic content of the article.

Additional information

Funding

This work was supported by the H2020-MSCA-ITN Marie Skłodowska-Curie Actions, Innovative Training Networks (ITN) 2016 GA EU project number 722068, High Performance Soft Tissue Navigation (HiPerNav).

References

  • Cleary K, Peters TM. Image-guided interventions: technology review and clinical applications. Annu Rev Biomed Eng. 2010;12(1):119–142.
  • Hallet J, Soler L, Diana M, et al. Trans-thoracic minimally invasive liver resection guided by augmented reality. J Am Coll Surg. 2015;220(5):e55–e60.
  • Bernhardt S, Nicolau SA, Soler L, et al. The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal. 2017;37:66–90.
  • Teatini A, Frutos JD, Langø T, et al. Assessment and comparison of target registration accuracy in surgical instrument tracking technologies. Conf Proc IEEE Eng Med Biol Soc. 2018;2018:1845–1848.
  • Palomar R, Cheikh FA, Edwin B, et al. Surface reconstruction for planning and navigation of liver resections. Comput Med Imaging Graph. 2016;53:30–42.
  • Heiselman JS, Clements LW, Collins JA, et al. Characterization and correction of intraoperative soft tissue deformation in image-guided laparoscopic liver surgery. J Med Imag. 2018;5(2):021203.
  • Aghayan DL, Pelanis E, Fretland ÅA, et al. Laparoscopic parenchyma-sparing liver resection for colorectal metastases. Radiol Oncol. 2017;52(1):36–41.
  • Fretland AA, Dagenborg VJ, Bjørnelv GMW, et al. Laparoscopic versus open resection for colorectal liver metastases. Ann Surg. 2018;267(2):199–207.
  • Aghayan DL, Fretland ÅA, Kazaryan AM, et al. Laparoscopic versus open liver resection in the posterosuperior segments: a sub-group analysis from the OSLO-COMET randomized controlled trial. HPB (Oxford). 2019;21(11):1485–1490.
  • Mountney P, Fallert J, Nicolau S, et al. An augmented reality framework for soft tissue surgery. Lecture Notes Comput Sci. 2014;17:423–431.
  • Thompson S, Stoyanov D, Schneider C, et al. Hand-eye calibration for rigid laparoscopes using an invariant point. Int J Comput Assist Radiol Surg. 2016;11(6):1071–1080.
  • Zhang Z. A flexible new technique for camera calibration. IEEE Trans Pattern Anal Machine Intell. 2000;22(11):1330–1334.
  • Lee S, Lee H, Choi H, et al. Effective calibration of an endoscope to an optical tracking system for medical augmented reality. Cogent Eng. 2017;4:1–11.
  • Lai M, Shan C. Hand-eye camera calibration with an optical tracking system. Proceedings of the 12th International Conference on Distributed Smart Cameras. New York (NY): ACM; 2018. p. 18.
  • Modat M, Ridgway GR, Taylor ZA, et al. Fast free-form deformation using graphics processing units. Comput Methods Programs Biomed. 2010;98(3):278–284.
  • Fitzpatrick JM. Fiducial registration error and target registration error are uncorrelated. Proc SPIE. 2009;7261:726102.
  • Pérez de Frutos J, Hofstad EF, Solberg OV, et al. Laboratory test of Single Landmark registration method for ultrasound-based navigation in laparoscopy using an open-source platform. Int J Comput Assist Radiol Surg. 2018;13(12):1927–1936.
  • Thompson S, Totz J, Song Y, et al. Accuracy validation of an image guided laparoscopy system for liver resection. Proc SPIE. 2015;9415:941509.
  • Thompson S, Schneider C, Bosi M, et al. In vivo estimation of target registration errors during augmented reality laparoscopic surgery. Int J Comput Assist Radiol Surg. 2018;13(6):865–874.
  • Pacioni A, Carbone M, Freschi C, et al. Patient-specific ultrasound liver phantom: materials and fabrication method. Int J Comput Assist Radiol Surg. 2015;10(7):1065–1075.
  • Liu W, Ding H, Han H, et al. The study of fiducial localization error of image in point-based registration. Proceedings of the 31st Annual International Conference of the IEEE Engineering in Medicine and Biology Society; EMBC; 2009. p. 5088–5091.
  • Teatini A, Pelanis E, Aghayan DL, et al. The effect of intraoperative imaging on surgical navigation for laparoscopic liver resection surgery. Sci Rep. 2019;9(1):18687. doi:10.1038/s41598-019-54915-3.
  • Teatini A, Wang C, Palomar R, et al. Validation of stereo vision based liver surface reconstruction for image guided surgery. Colour Vis Comput Symp. 2018;2018:1–6.
  • Fusaglia M, Tinguely P, Banz V, et al. A novel ultrasound-based registration for image-guided laparoscopic liver ablation. Surg Innov. 2016;23(4):397–406.
  • Faure F, Duriez C, Delingette H, et al. SOFA: a multi-model framework for interactive physical simulation. 2012.
  • Nikolaev S, Peterlik I, Cotin S, et al. Stochastic correction of boundary conditions during liver surgery. 2018.
  • Özgür E, Koo B, Le Roy B, et al. Preoperative liver registration for augmented monocular laparoscopy using backward–forward biomechanical simulation. Int J Comput Assist Radiol Surg. 2018;13(10):1629–1640.
  • Peterlik I, Courtecuisse H, Rohling R, et al. Fast elastic registration of soft tissues under large deformations. 2018.
  • Postriganova N, Kazaryan AM, Røsok BI, et al. Margin status after laparoscopic resection of colorectal liver metastases: does a narrow resection margin have an influence on survival and local recurrence? HPB. 2014;16(9):822–829.