Biomedical Paper

A quantitative evaluation of human coordination interfaces for computer assisted surgery

Pages 71-81 | Received 12 Apr 2006, Accepted 28 Dec 2006, Published online: 06 Jan 2010

Abstract

Computer assisted surgery (CAS) for tumor resection can assist the surgeon in locating the tumor margin accurately via some form of guidance method. A wide array of guidance methods can be considered, including model-based visual representations, symbolic graphical interfaces, and those based on other sensory cues such as sound. Given the variety of these guidance methods, it becomes increasingly important to test and analyze guidance methods for CAS in a quantitative and context-dependent manner to determine which is most suitable for a given surgical task. In this paper, we present a novel experimental methodology and analysis framework to test candidate guidance methods for CAS. Different viewpoints and stereographic, symbolic and auditory cues were tested in isolation or in combination in a set of virtual surgery experiments. A total of 28 participants were asked to circumscribe a virtual tumor with a magnetically tracked scalpel while the surgical trajectory was measured. This allowed measurement of surgical accuracy, speed, and the frequency with which the tumor margin was intersected, and enabled a quantitative comparison of guidance approaches. This study demonstrated that adding sound to pictorial guidance methods consistently improved accuracy and speed and reduced margin intersection in the virtual surgery. However, the use of stereovision showed less benefit than expected. While guidance based on a combination of symbolic and pictorial cues enhanced accuracy, we found that speed could be substantially impaired. These studies demonstrate that optimal guidance combinations exist which would not be apparent by studying individual guidance methods in isolation. Our findings suggest that care is needed when using expensive and sometimes cumbersome virtual visualization technologies for CAS, and that simpler, non-stereo presentation may be sufficient for specific surgical tasks.

Introduction

Increasing interest in alternatives to conventional surgery for cancer applications has prompted the development of a wide array of potential solutions, including remote surgery with image-driven, laparoscopic or robotic manipulation Citation[1], direct real-time image guidance integrated into more traditional surgical settings, and the use of pre-operative imaging to generate models of tumor geometry that can then be used to track surgical maneuvers Citation[2], Citation[3]. Regardless of the approach, the need to integrate spatial information into a surgical decision-making framework poses significant challenges to the engineer and surgeon alike. It is particularly challenging when the surgeon attempts to integrate complex guidance data seamlessly into routine surgical manipulations. The ideal guidance method would transfer the maximum amount of spatial information to the surgeon while minimizing fatigue and not overwhelming the surgeon with excessive data or cumbersome hardware. As each surgical task poses differing challenges, the optimum balance will very likely depend on the surgical context.

We have been exploring one such task aimed at integrating pre-surgical MRI information for the purpose of breast conserving surgery (BCS). Our approach is based on a segmented model of a breast tumor for resection, derived from DCE-MRI, that is registered to the patient's actual tumor location and geometry during surgery. A key question, however, is what would be the best method to convey this geometrical data to the surgeon?

Many implementations of CAS rely on the use of stereoscopic displays; however, relatively little research has been devoted to characterizing their benefits in comparison to non-stereoscopic presentation. Samset et al. Citation[4] showed a speed advantage with stereo presentation when observers were asked to direct a virtual pointer to a specific point within a virtual surgical field. Stereo displays were shown to be advantageous in a knot-tying task by Tendick et al. Citation[5], while Crosthwaite et al. Citation[6] demonstrated their benefit in endoscopic surgery. While stereoscopic and non-stereo presentation of visual data are obvious choices for CAS guidance, other guidance options exist and should be studied. These include the use of auditory cues of geometrical position/orientation, as well as symbolic directives that aim to direct a surgeon with compass-like structures. One can envision combinations of various guidance methods being presented to the surgeon that may improve overall accuracy and speed. However, it is not obvious which combinations would work best, and this would probably depend on the specific application. While such guidance combinations are feasible, little research has been devoted to quantitatively assessing the efficacy of multiple guidance methods in a well-controlled study with clearly defined objectives Citation[7]. In this paper, we present such an experimental methodology for a breast surgery application which allows the quantitative comparison of various guidance options and measurement of how they may interact when applied in combination Citation[8].

Materials and methods

Surgical platform

As reported previously Citation[8], we constructed a surgical platform (Figure 1a) in which a magnetic tracker was used to locate a virtual “scalpel” dynamically with respect to the boundaries of a virtual object which served to represent a tumor. In our study, the participant was asked to move the tip of the scalpel to trace the boundary of the object and thereby simulate tissue cutting. The object being traced was not directly visible to the study participant and only existed in the form of a pre-determined shape and orientation somewhere within a volume of approximately 25 × 25 × 25 cm. In order for participants to know the relative positions of the scalpel tip and the object boundary, they were provided with a range of human coordination interfaces that would serve to direct them in their task. The goal was to move the scalpel tip over the tumor boundary as quickly and accurately as possible. Figure 2 shows a participant moving the scalpel over the edge of the object while being directed by a combination of an auditory and a visual coordination interface.

Figure 1. (a) Experimental system showing the positioning table to provide 6-DOF positioning and orientation of the magnetic sensor used to track the virtual tumor. (b) The modified RF scalpel containing a magnetic sensor. (c) Visual interface showing the disk inclined at 30° with the virtual scalpel, showing the axial view of the task (top viewport) and coronal view (bottom viewport). (d) The navigation compass. [Color version available online.]

Figure 2. Typical experiment with volunteer accomplishing a “cutting” task on the virtual tumor generated with the setup shown in Figure 1a. Reproduced with kind permission of Springer Science and Business Media. [Color version available online.]

To determine the participants’ accuracy as they were directed by the various coordination interfaces, we mounted a 2-mm-catheter-based, 6-degree-of-freedom (6-DOF) magnetic tracking probe (Ascension Technology, Milton, VT) in a modified scalpel (Figure 1b). The magnetic tracking system provided a measure of the position and orientation of the scalpel in space that could be compared to the actual position of the object under study. The desired trajectory was defined by our virtual tumor, and its location in space was fixed with another channel of the magnetic tracking system. The specific task was to cut along the margins of an object while following its boundary as closely as possible. The object was shaped as a flat disk with radius varying between 15 and 20 cm. Scalpel position data from the magnetic tracking system were sampled at 75 Hz. The task was completed when the scalpel tip had swept over the complete periphery of the disk.
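Given the stated geometry (a flat, tilted disk), the closest boundary point to the scalpel tip can be found by projecting the tip into the disk plane and scaling the projection to the disk radius. The following is a minimal illustrative sketch of that computation; the function names and the degenerate-case handling are our own, not taken from the original implementation.

```python
import numpy as np

def closest_point_on_circle(tip, center, normal, radius):
    """Closest point on a 3D circle (the disk boundary) to the point `tip`.

    The circle lies in the plane through `center` with normal `normal`.
    """
    n = normal / np.linalg.norm(normal)
    v = tip - center
    # Project the tip-to-center vector into the plane of the disk.
    v_plane = v - np.dot(v, n) * n
    norm = np.linalg.norm(v_plane)
    if norm < 1e-12:
        # Degenerate case: tip lies on the disk axis, so every boundary
        # point is equidistant; pick an arbitrary in-plane direction.
        v_plane = np.array([1.0, 0.0, 0.0])
        if abs(n[0]) > 0.9:
            v_plane = np.array([0.0, 1.0, 0.0])
        v_plane = v_plane - np.dot(v_plane, n) * n
        norm = np.linalg.norm(v_plane)
    return center + radius * v_plane / norm

def boundary_distance(tip, center, normal, radius):
    """Unsigned distance from the scalpel tip to the disk boundary."""
    return np.linalg.norm(tip - closest_point_on_circle(tip, center, normal, radius))
```

A signed variant (negative inside the boundary, as used in the numerical display described below) could be obtained by comparing the in-plane radial distance to the disk radius.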

This task was presented to a group of participants consisting of 27 lay volunteers from various academic backgrounds and one surgeon. The group was equally balanced for gender, and distributed among various age groups and education levels (Table I). The procedures were conducted in accordance with the ethical standards of the Committee on Human Experimentation and with the Helsinki Declaration of 1975.

Table I.  Participant demographics.

Surgeon-computer interfaces

Four types of interface were implemented for this study: three visual and one auditory. They were selected for their simplicity and relevance to the task of tumor resection. Each coordination interface was developed using an open-source software package, the Visualization Toolkit (VTK; http://www.vtk.org/), which provides a range of utilities to generate images of the virtual tumor in real time. VTK is frequently used in the medical image computing community for medical image analysis and processing. It was also used to generate renderings of virtual 3D structures using OpenGL libraries. Our code ran under the Windows 2000 operating system on a Pentium 4 PC with 512 MB of RAM and a 1.5 GHz processor. In the sections that follow, we review each of the interfaces that were tested.

Visual interface

An image of the virtual scalpel and the tumor was presented from two possible viewpoints on a CRT monitor. One visual interface consisted of a frontal (axial) viewpoint of the inclined disk, in which the image was projected onto a plane perpendicular to the positioning table. The second visual interface provided the participant with a top (coronal) viewpoint, in which the image was projected onto a plane parallel to the positioning table. These two viewpoints could be viewed either individually or in combination, as shown in Figure 1c.

In all of these visual interfaces, a small white sphere located on the disk boundary showed the evolution of the virtual surgical task (i.e., how much had been “cut” around the tumor boundary). The location of the scalpel tip was highlighted as a red sphere, as seen in Figure 1c. A green sphere represented the closest point to the scalpel tip on the tumor boundary. As the scalpel was moved in space, the positions of the white and green spheres were adjusted at a rate of 50 Hz. Additional information for all visual displays included a numerical display of the distance from the tumor margin, with negative numbers indicating that the tumor boundary had been crossed.

Stereo visual interface

A stereo presentation of the 2D visual interfaces was displayed on a CRT monitor with the participant wearing optical shutter glasses (Stereographics Corporation, Beverly Hills, CA). By this means, the observer perceived a clear three-dimensional display of the tracking task and the scalpel.

Symbolic visual interface

Symbolic interfaces allow the full procedure to be done under guidance from the computer, but do not require a visual representation of the tumor. For this condition, a small “navigation compass viewport” (Figure 1d) could be presented on the CRT monitor. When presented in combination with other visual interfaces, it was juxtaposed to the top visual viewport. The compass operated with a “needle” which dynamically pointed in the direction that the scalpel should move to approach the desired position on the plane of trajectory. Distance from the tumor plane was indicated by needle length: if the tip of the scalpel was in the plane of the tumor, the needle fitted exactly within the surrounding circle shown in Figure 1d; if it was “above” the desired plane, the needle increased in length; and if it was below the tumor plane, it became shorter. A negative sign in front of the “Distance” number (Figure 1c) also indicated to the participant that the scalpel was incorrectly touching the inside of the virtual tumor.
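The needle behavior described above can be sketched as a small state function: the in-plane component of the tip-to-target vector sets the pointing direction, and the signed height of the tip above the tumor plane stretches or shrinks the needle. This is an illustrative reconstruction, not the original code; the base length and height gain are assumed constants.

```python
import numpy as np

def compass_state(tip, target, plane_normal, base_len=1.0, gain=0.05):
    """Needle direction and length for a simple navigation compass.

    `tip` is the scalpel tip and `target` the desired point on the
    tumor boundary. A tip in the tumor plane yields length base_len;
    above the plane the needle lengthens, below it shortens.
    base_len and gain are illustrative, not the original parameters.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    to_target = target - tip
    # In-plane component of the tip-to-target vector: pointing direction.
    in_plane = to_target - np.dot(to_target, n) * n
    norm = np.linalg.norm(in_plane)
    direction = in_plane / norm if norm > 1e-12 else np.zeros(3)
    # Signed height of the tip above the tumor plane scales the length.
    height = np.dot(tip - target, n)
    length = max(base_len + gain * height, 0.1)
    return direction, length
```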

Auditory interface

In this condition, an auditory cue was presented through headphones. The amplitude of a tone with constant frequency increased as the scalpel tip approached the tumor boundary; that is, the amplitude of the sound was coded to vary inversely with the distance in mm between the tumor boundary and the scalpel tip (i.e., between the green and red spheres seen in Figure 1c). This distance was calculated as the difference between the scalpel tip position reported by the tracking system and the closest point on the tumor boundary. The closest point's position on the tumor boundary (green sphere) was also reported relative to a second sensor located in the center of the virtual tumor. If the scalpel moved to a position inside the tumor, there was a sudden switch to a distinctive, higher frequency, indicating a breach of surgical integrity.
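One way to implement such a distance-to-sound mapping is sketched below. The frequencies, the maximum audible distance, and the linear amplitude ramp are illustrative assumptions rather than the parameters of the original apparatus.

```python
def tone_parameters(distance_mm, inside, max_distance_mm=50.0,
                    base_freq=440.0, alarm_freq=880.0):
    """Map scalpel-boundary distance to a (frequency, amplitude) pair.

    Amplitude grows as the tip nears the boundary; crossing into the
    tumor switches to a distinctive higher-frequency alarm tone.
    All constants here are illustrative, not from the original setup.
    """
    if inside:
        # Breach of surgical integrity: loud, higher-pitched tone.
        return alarm_freq, 1.0
    # Clamp, then invert: 0 mm -> full volume, max_distance -> silent.
    d = min(max(distance_mm, 0.0), max_distance_mm)
    amplitude = 1.0 - d / max_distance_mm
    return base_freq, amplitude
```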

Measures of surgical efficiency

The distance between the surgical path and the virtual tumor boundary provided a measure of positional error (Figure 3). This was defined as the shortest distance between the tip of the virtual scalpel and the virtual tumor boundary. All distance records were calculated 75 times per second (i.e., at a refresh rate of 75 Hz) using the scalpel tip position as reported by one sensor and the position of the closest point on the tumor boundary reported by the second sensor in the center of the virtual tumor. Every point on the virtual tumor boundary was equidistant from the second sensor, whose position was reported by the tracking system; hence, the position of each point on the boundary was determined relative to this second sensor's position. All distance data were recorded in real time and compiled immediately. From these compiled data, a root-mean-square (RMS) distance error was calculated as a global measure of accuracy. Encroachment into the tumor is undesirable, as this would degrade the surgical integrity of the specimen for histology. To measure surgical integrity, the fraction of measurements recorded inside the tumor boundary was calculated. Finally, to assess the speed of each guidance method, the time taken to complete the task was also recorded.
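All three measures can be computed directly from the 75 Hz record of signed distances (negative values indicating the scalpel was inside the tumor). The sketch below assumes such a record is available as a simple array; it is an illustration of the definitions above, not the original analysis code.

```python
import numpy as np

def surgical_metrics(signed_distances_mm, sample_rate_hz=75.0):
    """Summary metrics for one trial from a signed-distance record.

    Negative distances indicate the scalpel crossed inside the tumor.
    Returns (RMS error, fraction of samples inside, execution time in s).
    """
    d = np.asarray(signed_distances_mm, dtype=float)
    rms_error = float(np.sqrt(np.mean(d ** 2)))   # accuracy
    inside_fraction = float(np.mean(d < 0))       # surgical integrity
    execution_time = len(d) / sample_rate_hz      # speed
    return rms_error, inside_fraction, execution_time
```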

Figure 3. (a) Positional error as a function of time for completion of experiment 18, including the total time, RMS error, and fraction of distance measurements taken inside the tumor. (b) The corresponding surgical path traced out by one volunteer in this experiment. [Color version available online.]

Experimental design

The four interfaces were incorporated into a four-way repeated-measures design, which gave rise to 24 conditions as outlined in Table II (3 viewpoints: axial/coronal/both × 2 stereovision: present/absent × 2 symbolic vision: compass present/absent × 2 auditory: present/absent). For example, a study involving the use of a coronal visual representation of the target, seen as a stereo image, with both sound and the compass is designated experiment number 9. As we were interested in the impact of the presence or absence of auditory feedback on symbolic cue guidance, two additional compass-only conditions (with and without sound) were also tested. Because of the large number of conditions, the testing was conducted across two sessions of 17 and 9 conditions, respectively. Conditions in each test session were assigned randomly, and the interval between the two sessions varied from two to nine weeks.
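The fully crossed portion of the design can be enumerated mechanically, which is a convenient check that the factor structure yields exactly 24 cells. The dictionary keys below are illustrative labels, not identifiers from the original study software.

```python
from itertools import product

viewpoints = ["axial", "coronal", "both"]
binary = [True, False]

# 3 x 2 x 2 x 2 = 24 fully crossed conditions; the two extra
# compass-only conditions (no viewport) were added separately.
conditions = [
    {"viewpoint": v, "stereo": s, "compass": c, "sound": a}
    for v, s, c, a in product(viewpoints, binary, binary, binary)
]
assert len(conditions) == 24
```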

Table II.  Distribution of experiments using various interface combinations for the analyses. The corresponding “experiment numbers” are shown as entries in the table.

For all conditions, participants were asked to move the scalpel around the virtual tumor as close to the boundary as possible. They were informed that no particular weight was given to accuracy versus execution time, so the balance between speed and accuracy was left to their judgment. They were also instructed not to move the scalpel to a point “inside” the tumor, as this would reflect a surgical error corresponding to degrading the tumor margin for histology. To minimize any implicit learning of the tumor margins, the virtual tumor was tilted 30° with respect to the table (with the high end away from the observer), the radius varied randomly between 150 and 200 mm, and its center position was randomly located within a 64 cm³ cube centered on the second magnetic sensor.

Statistical analysis

Two separate repeated-measures analyses of variance (ANOVA) were performed (SPSS v.13). The first analysis tested the hypothesis that the four interfaces (visual, stereovision, navigation compass, and sound guidance) would be equally effective tools for image-guided surgery. The two additional symbolic conditions were incorporated into a three-way repeated-measures ANOVA by adding a fourth level to the viewpoint factor (4 viewpoints: axial/coronal/both/none × 2 symbolic vision: present/absent × 2 auditory: present/absent). Separate analyses were conducted for the three dependent variables: mean RMS error, mean fraction of measurements recorded inside the tumor, and mean execution time. Statistical significance was set at p < 0.05, and only significant results (with Greenhouse-Geisser adjusted degrees of freedom) are reported below. Pairwise differences presented in the results were derived using Newman-Keuls post-hoc tests Citation[9]. In addition, we tested for interactions among the various guidance methods; such interactions occur when the characteristics of a guidance method measured in isolation change when it is combined with other guidance methods.

Results

An overview of the results can be seen in Figure 4, where the labels on the x-axis reflect the experiment or condition number shown in Table II. We present the results of these analyses of separate surgical criteria, with favorable results reflected by low RMS error (a measure of good accuracy), low fraction of measurements recorded inside the tumor (a measure of good surgical integrity), and rapid execution time. In the following sections, we review each of the criteria separately for the various guidance methods.

Figure 4. (a) Positional RMS error. (b) Frequency with which the scalpel cuts into the tumor. (c) Mean time to complete the task versus the guidance experiment method (see Table II). The variation is the standard error of each experiment across all participants.

RMS error distance

Our study supported the observation that the type of visual interface provided to the participant affected RMS error distance (F(1.31, 35.35) = 8.74, p < 0.003). As seen in Figure 5a, RMS error during coronal presentation was significantly larger than with axial or joint axial-coronal presentation; the axial and combined axial-coronal conditions did not differ. A clear beneficial effect was found when sound guidance was present (F(1, 27) = 22.74, p < 0.001; Figure 5c), and a modest beneficial effect when the compass was provided (F(1, 27) = 4.19, p < 0.051; Figure 5b).

Figure 5. RMS errors for the different interfaces. (a) Visual interface. (b) Navigation compass. (c) Sound guidance. The variation is the standard error of each interface collapsed across all participants and across all related experimental cells shown in Table II.

A significant interaction existed between the visual and compass factors (F(1.43, 38.47) = 12.35, p < 0.001), as shown in Figure 6a. Two features can be noticed here. The first is the significant reduction in mean RMS error in the coronal condition when the navigation compass was available for guidance. In this case, the RMS error was reduced such that no differences among the three visual representations were seen. Also, note that the compass had no statistically meaningful impact in the axial or joint axial-coronal conditions.

Figure 6. RMS errors for the various interacting factors. (a) Visual-compass interaction. (b) Sound-stereo interaction.

The sound × stereo interaction (F(1, 27) = 4.73, p < 0.039) is shown in Figure 6b. In the absence of auditory feedback, performance was poorer in the stereo condition than in the non-stereo condition. However, the presence of auditory feedback improved performance in both stereo and non-stereo conditions, resulting in equivalent levels of performance for these two visual representations.

Fraction of measurements recorded inside the tumor

The choice of visual interface had a significant effect on participants’ ability to avoid cutting inside the tumor boundary (F(1.95, 52.60) = 3.65, p < 0.034; Figure 7a). In this case, there is an advantage in using the axial view over the two other visual interfaces. Figure 7b shows a significant improvement with the addition of sound guidance (F(1, 27) = 92.31, p < 0.001). Providing the compass for guidance (Figure 7c) significantly reduced tumor integrity (F(1, 27) = 10.86, p < 0.003) in comparison to the outcome when no compass was used.

Figure 7. Fraction of measurements recorded inside the tumor for the different interfaces and interaction. (a) Visual interface. (b) Sound guidance. (c) Compass. (d) Visual-sound interaction.

When sound was added, a significant interaction (F(1.92, 51.83) = 4.80, p < 0.013) was noted. In the absence of auditory feedback, performance was best for the axial viewport, as seen in Figure 7d. However, with the addition of sound, not only was performance generally improved, but tumor integrity was equivalent across all viewport conditions, indicating that auditory feedback appears to have minimized the negative effects of the coronal and joint viewports on performance seen in Figure 7d.

Execution time

The type of interface also had a significant effect on execution time (F(1.60, 43.13) = 3.88, p < 0.037), as seen in Figure 8a. We observed no significant differences between the axial and coronal conditions, and found that the subjects were slowest in the joint axial-coronal condition. Performance was faster with sound guidance (F(1, 27) = 10.54, p < 0.003; Figure 8b) and marginally better with stereovision (F(1, 27) = 5.69, p < 0.024; Figure 8c).

Figure 8. Execution times for the different interfaces. (a) Visual interface. (b) Sound guidance. (c) Stereo vision.

Three interactions were significant. The first was that between the visual and stereovision factors (F(1.95, 53.75) = 9.40, p < 0.001), as shown in Figure 9a. In this case, the benefit normally gained from stereo was cancelled when the joint axial-coronal level was used. The second significant interaction was that between the sound and stereovision factors (F(1, 27) = 6.20, p < 0.019), as shown in Figure 9b. In this case, sound guidance improved execution time only in the non-stereo conditions. Finally, a three-way sound × stereo × compass interaction (F(1, 27) = 9.07, p < 0.006) was dominated by a large improvement in the presence of auditory feedback in the compass conditions without stereovision.

Figure 9. Execution time for the various interacting factors. (a) Visual-stereo interaction. (b) Sound-stereo interaction.

Compass-guided surgery

It is worth considering the case of using only the compass for guidance. We found that it was indeed possible to complete the task under these conditions, but that the execution time was more than twice as long when no visual support was provided (F(1.32, 35.62) = 41.47, p < 0.001), as shown in Figure 10. A significant visual × sound interaction (F(1.48, 39.97) = 3.88, p < 0.040) reflected the absence of a beneficial effect of auditory feedback in the compass-only condition, and clear benefits of sound when visual support was present.

Figure 10. Execution time under “pure” computer-guided conditions for the visual-sound interaction for each of the four visual presentations, including no visual presentation (shown as “None”).

Discussion

The fact that our study showed no significant difference in spatial error between the axial and joint axial-coronal levels may indicate that most participants used only one viewport instead of integrating information from both. However, the coronal view produced significantly poorer results than those obtained under the two other conditions. Based on participants’ comments, it appears they had difficulty evaluating the scalpel height with this particular viewpoint due to the orientation of the object. However, these errors were reduced when the compass was used in conjunction with the coronal viewpoint. We speculate that the improvement was due to the varying length of the compass needle, indicating height, being monitored in the observer's peripheral vision. To gain further insight into the mechanisms involved, detailed eye-tracking experiments Citation[10] would be of value and will be the subject of future investigations.

We also found that sound guidance and the presence of the compass improved accuracy. An unexpected result was that stereovision had no significant effect on measured RMS error. This could be due to insufficient depth-perception information being provided by the interface, a participant's inability to resolve stereovision effects, or the slightly reduced contrast and luminosity compared to the 2D presentation that was noticed by most participants. Also, as sound guidance operates in a fashion that does not require visual interpretation, it appears to operate in parallel with visual cues without interference. As such, sound appears to be available as an otherwise unused and non-competitive channel of information which may complement the observer's visual understanding of the surgical scene. Further investigations will be necessary to understand the nature of this interesting interaction between sound and stereovision.

For the fraction of measurements recorded inside the tumor, providing sound resulted in better tumor integrity in all viewport conditions. Its primary effect, however, was to reduce the worsening of performance seen in the coronal and joint conditions in the absence of sound. There were many comments from participants about how sound guidance helped them “get out of the tumor” very rapidly. As soon as the virtual scalpel touched the tumor, the tone frequency would change immediately and direct the participants to move out quickly. In contrast, visual representations required that the participant make a decision as to whether the scalpel had violated the tumor boundary on the basis of visual cues, which took time and was more subject to error. Clearly, this experiment illustrates that sound can provide an unambiguous aid in helping the surgeon in this respect.

It is interesting to note that the joint axial-coronal visual level appeared to slow participants down, but only in a significant manner when compared to the axial level. Since no significant difference was noticed between results with the axial and coronal views, this supports the apparent superiority of the axial view when one adds its effectiveness in improving RMS error accuracy. The compass had no effect on execution time when used in conjunction with visual cues. Hence, while providing the compass improved RMS error, it did not affect the time required to complete the task. Sound guidance also remained a good aid to performing quick virtual surgeries. Stereovision could reduce execution time in some cases, but tended to interact with sound guidance and the compass in a way that did not provide consistent results across all possible combinations of interfaces. Thus, while stereovision had the effect of improving speed, it showed no significant effect on the measured RMS error. In contrast, while the navigation compass had no significant impact on RMS error or tumor integrity, in the absence of visual support it had a major impact on speed. This is probably due to the fact that participants had to concentrate more on the precise orientation and length of the compass in the absence of a pictorial view of the virtual tumor.

This study demonstrates that devising a means to aid in surgical guidance, in a way that truly augments a surgeon's ability without the burden of cumbersome equipment or visual constraints, is not straightforward. While the particular test we set forth was highly idealized and not a true simulation of a realistic surgical setting, our study demonstrated interesting and less than obvious conclusions. To carry this kind of experiment further, the surgical tasks would need to be more surgically realistic and would require refinement of the guidance mechanisms beyond those implemented in this paper. However, we believe that the experimental and statistical approach used in this study establishes a simple and appropriate methodology for comparing guidance methods in a quantitative and rigorous manner. Perhaps more importantly, this experimental method can demonstrate optimal combinations among guidance methods in ways that would not be obvious from testing individual methods in isolation. Our results suggest that care is needed when using these relatively expensive and in some cases cumbersome technologies, as illustrated by the somewhat surprising finding that stereo guidance did not consistently confer an advantage across all guidance settings. It further suggests that simpler, non-stereo presentation may be sufficient for surgical guidance.

In this study, a number of improvements and further research questions could be considered. First, we note that our study used only one participant who was experienced in breast surgery, while the majority of the participants had no experience in this field. It would be natural to ask whether surgical experience would alter the general conclusions found in this study. While not tested, we speculate that the results may not be substantially altered, as the tracking task is unlike that encountered in traditional surgical contexts and generally tested large-scale maneuvers rather than the fine motor skills characteristic of surgery. Furthermore, the tasks were presented with no force feedback, which further reduces the realism of the simulation. Thus, as the manual task was a comparatively simple tracking task without the kinds of fine motor maneuvers encountered in a detailed surgical setting, we expect that surgeon and non-surgeon alike would be equally skilled at the task they were asked to perform. At this preliminary stage of development we opted for a simple and easily characterized manual task, which uncovered interesting and novel observations regarding the way tracking mechanisms could interact. Future work will be devoted to verifying and confirming this assertion by collecting more data from experienced surgeons who will be asked to perform more realistic surgical maneuvers.

Further improvements to this methodology could be achieved by incorporating eye tracking [10] to identify the specific features that confer benefit and to explain interactions that are currently difficult to interpret. Implementing a three-dimensional tracking task, rather than restricting tracking to a plane, would also increase the surgical relevance of the test. These additions, combined with this experimental surgical platform, should further refine the set of relevant features that we have already observed in this simplified setting. This type of research may ultimately give new insight into the complex interplay of psycho-perceptual mechanisms underlying computer-assisted guidance for the surgical applications of the future.

Acknowledgments

We acknowledge useful discussions with Dr. Claire Holloway from the Sunnybrook Health Sciences Centre Division of Surgical Oncology. This project was supported by grants from the Ontario Research and Development Fund (OCITS-ORDCF) and the Terry Fox Foundation of the National Cancer Institute of Canada.

References

  • Riva G. Applications of virtual environments in medicine. Method Inform Med 2003; 42(5): 524–534
  • Shuhaiber JH. Augmented reality in surgery. Arch Surg 2004; 139: 170–174
  • Tamaki Y, Sato Y, Nakamoto M, Sasama T, Sakita I, Sekimoto M, Ohue M, Tomita N, Tamura S, Monden M. Intraoperative navigation for breast cancer surgery using 3D ultrasound images. Comput Aided Surg 1999; 4: 37–44
  • Samset E, Talsma A, Kintel M, Elle OJ, Aurdal L, Hirschberg H, Fosse E. A virtual environment for surgical imaging guidance in intraoperative MRI. Comput Aided Surg 2002; 7: 187–196
  • Tendick F, Bhoyrul S, Way LW. Comparison of laparoscopic imaging systems and conditions using a knot-tying task. Comput Aided Surg 1997; 2(1): 24–33
  • Crosthwaite G, Chung T, Dunkley P, Shimi S, Cuschieri A. Comparison of direct vision and electronic two- and three-dimensional display systems on surgical task efficiency in endoscopic surgery. Br J Surg 1995; 82(6): 849–851
  • Gallagher AG, Ritter EM, Champion H, Higgins G, Fried MP, Moses G, Smith CD, Satava RM. Virtual reality simulation for the operating room: Proficiency-based training as a paradigm shift in surgical skills training. Ann Surg 2005; 241(2): 364–372
  • Cardin MA, Wang JX, Plewes DB. A method to evaluate human spatial coordination interfaces for computer-assisted surgery. In: Duncan JS, Gerig G, editors. Proceedings of the Eighth International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2005), Palm Springs, CA, October 2005. Part II. Lecture Notes in Computer Science 3750. Berlin: Springer; 2005. p. 9–16
  • Winer BJ. Newman–Keuls tests: correlated and uncorrelated measures. In: Statistical principles in experimental design. New York: McGraw-Hill; 1971. p. 528–532
  • Krapichler C, Haubner M, Engelbrecht R, Englmeier K. VR interaction techniques for medical imaging applications. Comput Meth Prog Bio 1998; 56: 65–74