
Photorealistic modeling of tissue reflectance properties

Pages 289-299 | Received 01 Mar 2006, Accepted 10 Jul 2006, Published online: 06 Jan 2010

Abstract

Objective: For Minimally Invasive Surgery (MIS) procedures, specular highlights constitute important visual cues for gauging tissue deformation as well as perceiving depth and orientation. This paper describes a novel reflectance modeling technique that is particularly suitable for simulating light interaction behavior with mucus-covered tissue surfaces. Methods: The complex and largely random tissue-light interaction behavior is modeled with a noise-based approach. In the proposed technique, Perlin noise is used to modulate the shape of specular highlights and imitate the effects of the complex tissue structure on reflected lighting. For efficient execution, the noise texture is generated in pre-processing and stored in an image-based representation, i.e., a reflectance map. At run-time, the graphics hardware is used to attain per-pixel control and achieve realistic tissue appearance. Results: The reflectance modeling technique has been used to replicate light-tissue reflection in surgical simulation. By comparing the results acquired against those obtained from conventional per-vertex Phong lighting and OpenGL multi-texturing, it is observed that the noise-based approach achieves improved tissue appearance similar to that observed in real procedures. Detailed user evaluation demonstrates the quality and practical value of the technique for increased perception of photorealism. Conclusion: The proposed technique presents a practical strategy for surface reflectance modeling that is suitable for real-time interactive surgical simulation. The use of graphics hardware further enhances the practical value of the technique.

Introduction

One of the most promising applications of computer graphics in medicine is Virtual Reality (VR). Computer-generated Virtual Environments (VEs) can be used in different stages of healthcare delivery, from training of medical staff, through development of new procedures, to providing effective therapy and rehabilitation. In surgery, VEs can be used either pre-operatively or intra-operatively. Pre-operatively, they can provide patient-specific visualization, procedure rehearsal, and surgical planning. Intra-operatively, computer-generated data can augment the surgeon's operating field of view, allowing imaging data to be seamlessly incorporated into the anatomical volume of interest for high-precision treatment delivery and the avoidance of critical regions. The emergence of VE-based surgical simulation, particularly as an education and assessment tool, has been motivated by a number of factors. Surgical training is a continuous process that does not end after graduation from medical school. Conventional surgical training techniques are limited by the flow and accessibility of suitable patients and the availability of mentors, as well as by financial and ethical constraints.

The evolution of surgical technology has resulted in increasing procedural complexity. One typical example of this is the introduction of new surgical techniques such as Minimally Invasive Surgery (MIS), also known as Minimal Access Surgery (MAS), where restricted vision, difficult instrument maneuvering, and loss of three-dimensional (3D) perception and tactile feedback have introduced significant challenges to manual dexterity and hand-eye coordination Citation[1]. To overcome these difficulties, surgical simulation provides an attractive training alternative Citation[2]. Simulated environments offer a milieu for “road-testing” innovations and techniques without jeopardizing patient safety. They allow for repeated practice of surgical procedures of varying complexity at the surgeon's own pace and time. The method further enriches the surgeon's learning experience by allowing errors without the ethical concerns of harming patients or sacrificing live animals. Surgical simulators can also be used to rehearse the necessary steps of complex surgical procedures to help trainees acquire essential skills Citation[3].

In order to accomplish these goals, however, surgical simulation has to provide a realistic training experience that closely imitates reality. One of the major challenges in surgical simulation is interactive reflectance modeling, which is essential for achieving the visual immersion required for development of higher perceptual and spatial reasoning skills. In general, the local reflection models used in computer graphics specify the reflectance properties of the material by using a Bidirectional Reflectance Distribution Function (BRDF), which describes the light reflected at a surface point in direction φout, γout due to light incident from direction φin, γin, where the reflection and incidence angles are given in polar coordinates. Hence, the BRDF captures the light-incidence and view-dependent nature of the reflected light. To this end, a number of physically based approaches for computing diffuse and specular BRDF components have been proposed. Diffuse reflection modeling considers reflection as the result of light entering the surface, scattering, interacting with its material, and then exiting in random directions Citation[4]. Therefore, diffuse light reflects uniformly in all directions. Specular reflection, on the other hand, is due to light reflecting off shiny surfaces following the law of reflection, which results in dynamic specular highlights appearing on top of the surface.
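
As a point of reference, the BRDF and the resulting reflected radiance can be written in the following standard form (a general formulation, not one introduced by the authors); here γ is taken to be the polar angle measured from the surface normal and φ the azimuth:

$$f_r(\phi_{in}, \gamma_{in}; \phi_{out}, \gamma_{out}) = \frac{dL_{out}(\phi_{out}, \gamma_{out})}{L_{in}(\phi_{in}, \gamma_{in})\, \cos\gamma_{in}\, d\omega_{in}}, \qquad L_{out}(\phi_{out}, \gamma_{out}) = \int_{\Omega} f_r\, L_{in}(\phi_{in}, \gamma_{in})\, \cos\gamma_{in}\, d\omega_{in}$$

The diffuse component described above corresponds to an $f_r$ that is constant with respect to the outgoing direction, while the specular component concentrates $f_r$ around the mirror direction given by the law of reflection.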

Specular highlights constitute a major visual cue for gauging tissue deformation, depth and orientation during MIS procedures Citation[5]. Thus far, a number of physically based micro-facet models have been used for analytically simulating specular surfaces Citation[6], Citation[7]. These models generally consider a rough surface topology with perfect micro-reflectors that account for the spectral composition of specular highlights. They are further extended to handle a wide range of surface types Citation[8] and anisotropic distributions with multiple scattering. However, many real-world BRDFs cannot be simulated with analytical models. An alternative approach is to directly acquire measurements of the BRDF by using goniospectro-reflectometers Citation[9–11]. However, measuring BRDFs is an expensive and time-consuming process that requires having a representative sample of the material available Citation[12]. Moreover, a full representation of the BRDF in a tabular form imposes prohibitive storage requirements, thus making it difficult to use in a scene with many different types of materials. For this reason, physically based reflectance modeling is not particularly suitable for interactive surgical simulation, despite the visual realism achieved.

Standard graphics APIs such as OpenGL and DirectX, which are popular in surgical simulation, simulate specular effects using an empirical model, i.e., the Phong reflection model Citation[13]. However, only a limited number of material-light interaction properties can be specified, and only simple plastic-like or polished surfaces can be convincingly simulated with the Phong model. Other techniques use environment mapping to map an image of the specular source onto the surface. Due to the intricacy of internal organs and their complex surface composition, conventional lighting methods have found limited use in advanced surgical training, particularly for applications involving patient-specific models. The results obtained with these methods generally lack visual realism because tissue is not perfectly smooth but is composed of multiple layers covered with translucent mucus.
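
For illustration, a minimal sketch of the empirical Phong specular term used by fixed-function OpenGL-style lighting is given below. It is not the authors' code; the Vec3 struct and helper functions are hypothetical names introduced here, and a single uniform shininess exponent is what produces the plastic-like highlights discussed above.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Phong specular intensity: I_spec = k_s * max(R . V, 0)^shininess.
// N: unit surface normal, L: unit direction to the light, V: unit direction to the viewer.
double phongSpecular(Vec3 N, Vec3 L, Vec3 V, double ks, double shininess) {
    // Mirror reflection direction about the normal: R = 2(N.L)N - L.
    Vec3 R = { 2.0 * dot(N, L) * N.x - L.x,
               2.0 * dot(N, L) * N.y - L.y,
               2.0 * dot(N, L) * N.z - L.z };
    double rdotv = std::max(dot(normalize(R), V), 0.0);
    return ks * std::pow(rdotv, shininess);
}
```

Because the same exponent and reflection law are applied uniformly over the surface, the resulting highlight is smooth and symmetric, which is precisely the appearance the noise-based technique proposed in this paper seeks to break up.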

The purpose of this paper is to present a novel noise-based Citation[14–17] reflectance modeling technique for simulating the appearance of specular highlights reflected from the internal lumen during MIS procedures. With the proposed technique, specular reflections are modeled using a set of image-based structures that encode noise information, i.e., reflectance maps, to define the surface-light interaction properties. During run-time, the programmable graphics pipeline is used to enable high-fidelity rendering at interactive rates by offloading complex vertex and pixel operations from the Central Processing Unit (CPU) to the Graphics Processing Unit (GPU) Citation[18], allowing real-time surgical scene visualization. In the subsequent sections, we describe how texture noise is generated at a pre-processing stage and demonstrate how this information is used during run-time by the graphics processor to efficiently achieve pixel-level control and photorealistic tissue appearance. To evaluate the effectiveness of the proposed technique, the visual realism achieved was assessed by both naïve and expert observers with detailed visual scoring and cross-comparison.

Methods

In this paper, the complex and largely random tissue-light reflectance behavior is modeled with a noise-based approach in which Perlin noise Citation[19] is used to modulate the shape of specular highlights and imitate the effects of composite multi-layered tissue structure on reflected light. The proposed technique is divided into two phases that consist of pre-processing and interactive steps. In the first phase, a reflectance map that encodes the noise data is generated. During run-time, per-triangle reflectance information is extracted by texture mapping to calculate the specular highlights. A user study has also been carried out to investigate the use of noise-based reflectance modeling for enhancing the quality of patient-specific simulations. In the following sections, the details of the proposed technique and the visual assessment experiments are presented.

Surface reflectance modeling

To derive a reflectance map that encodes the surface normal distribution, a Perlin noise function is used to obtain a 2D noise image representing the plot of noise samples over all image points. Assuming the noise samples represent a surface defined by a bi-variate scalar function H(u,v), the slopes of the surface tangents along the u and v directions, Hu and Hv, can be computed from the partial derivatives of H with respect to u and v, which can be approximated by using finite differences. The normal at a point on the surface can then be defined by using the tangent vectors as

$$\mathbf{N}(u,v) = \frac{\mathbf{T}_u \times \mathbf{T}_v}{\left\| \mathbf{T}_u \times \mathbf{T}_v \right\|}, \qquad \mathbf{T}_u = (1,\, 0,\, H_u), \quad \mathbf{T}_v = (0,\, 1,\, H_v)$$

where × denotes the cross-product. A reflectance map is acquired by storing the normal at each surface point, with the components of each normal vector range-compressed and encoded as a red-green-blue color triplet. The use of color channels facilitates the storage of the reflectance information in a standard texture image that can be efficiently manipulated with the graphics hardware. Examples of a noise image and the generated reflectance map are shown in Figure 1, where (a) is the noise image obtained by adding successive noise functions and (b) shows the reflectance map that stores the normals at each point in the noise image.
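
A minimal sketch of this pre-processing step is given below, under stated assumptions: perlin2D() is only a trivial stand-in for a real Perlin noise implementation, and buildReflectanceMap() and the amplitude parameter are hypothetical names introduced here, not the authors' code.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Placeholder for a real 2D Perlin noise implementation (kept trivial here).
double perlin2D(double u, double v) {
    return std::sin(12.9898 * u) * std::cos(78.233 * v);  // stand-in, NOT Perlin noise
}

struct RGB { std::uint8_t r, g, b; };

std::vector<RGB> buildReflectanceMap(int width, int height, double amplitude) {
    std::vector<RGB> map(width * height);
    const double du = 1.0 / width, dv = 1.0 / height;

    for (int j = 0; j < height; ++j) {
        for (int i = 0; i < width; ++i) {
            double u = i * du, v = j * dv;

            // Central-difference approximation of the height-field slopes Hu, Hv.
            double Hu = amplitude * (perlin2D(u + du, v) - perlin2D(u - du, v)) / (2.0 * du);
            double Hv = amplitude * (perlin2D(u, v + dv) - perlin2D(u, v - dv)) / (2.0 * dv);

            // Normal = (1,0,Hu) x (0,1,Hv) = (-Hu, -Hv, 1), then normalize.
            double nx = -Hu, ny = -Hv, nz = 1.0;
            double len = std::sqrt(nx * nx + ny * ny + nz * nz);
            nx /= len; ny /= len; nz /= len;

            // Range-compress [-1,1] -> [0,255]; nz >= 0, so the blue channel stays above mid-range,
            // which explains the dominant blue tone of the reflectance map in Figure 1.
            map[j * width + i] = {
                static_cast<std::uint8_t>((nx * 0.5 + 0.5) * 255.0),
                static_cast<std::uint8_t>((ny * 0.5 + 0.5) * 255.0),
                static_cast<std::uint8_t>((nz * 0.5 + 0.5) * 255.0) };
        }
    }
    return map;
}
```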

Figure 1. An example noise image (a) and the generated reflectance map (b). Since a z-component of less than 0.5 corresponds to a back-facing normal vector that does not occur in reality, the blue channel will always have a value greater than 0.5, hence the dominant blue tone of the reflectance map. [Color version available online.]

Lighting in tangent space

During run-time, texture mapping is used to extract a per-pixel reflectance map normal for each triangle in the geometric model and calculate specular reflections. However, the normals in the reflectance map are defined in the texture coordinate system. In order to consistently orient the normals on the geometric model, a coordinate system local to the triangle being processed has to be defined. Such a coordinate system, known as the tangent space, can be constructed by using three vectors that constitute its basis: the surface tangent T, bi-tangent B, and normal N. For a parametric surface S(X(u,v), Y(u,v), Z(u,v)), the definition of such a system is relatively straightforward from the parametric representation:

$$\mathbf{T} = \frac{\partial \mathbf{S}}{\partial u}, \qquad \mathbf{B} = \frac{\partial \mathbf{S}}{\partial v}, \qquad \mathbf{N} = \mathbf{T} \times \mathbf{B}$$

For piecewise linear surface approximations, such as polygonal models created with arbitrary modeling packages, only the normal is usually provided. In this case, the texture coordinates of the polygonal model provide a consistent parameterization of the surface and can be used to describe the tangent space. Since each triangle in the model is defined in both texture and object spaces, the object-space coordinates can be expressed in terms of the texture coordinates as follows Citation[20]:

$$\big( x(u,v),\; y(u,v),\; z(u,v) \big) = \big( (u, v, 1) \cdot \mathbf{a}_x,\; (u, v, 1) \cdot \mathbf{a}_y,\; (u, v, 1) \cdot \mathbf{a}_z \big)$$

with

$$\mathbf{a}_x = M^{-1} \begin{pmatrix} x_0 \\ x_1 \\ x_2 \end{pmatrix}, \qquad \mathbf{a}_y = M^{-1} \begin{pmatrix} y_0 \\ y_1 \\ y_2 \end{pmatrix}, \qquad \mathbf{a}_z = M^{-1} \begin{pmatrix} z_0 \\ z_1 \\ z_2 \end{pmatrix}$$

and

$$M = \begin{pmatrix} u_0 & v_0 & 1 \\ u_1 & v_1 & 1 \\ u_2 & v_2 & 1 \end{pmatrix}$$

where (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) and (u0, v0), (u1, v1), (u2, v2) represent the triangle's object- and texture-space coordinates, respectively, and · denotes the dot product. The T and B vectors used for defining the tangent space can be calculated from this linear expression by computing the partial derivatives with respect to the texture coordinates u and v. Subsequently, N can be computed from the cross-product of T and B. By defining lighting in the tangent space, the generation of the reflectance map can be decoupled from the object geometry, and thus a reflectance map can be designed and examined independently of surface representations Citation[20]. After computing the basis vectors of the tangent space, the GPU can be used to efficiently transform the object-space lighting and viewing vectors required for the specular calculation into tangent space by using a rotation matrix R described as follows Citation[18]:

$$R = \begin{pmatrix} T_x & T_y & T_z \\ B_x & B_y & B_z \\ N_x & N_y & N_z \end{pmatrix}$$
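
A minimal sketch of this construction is given below. It uses the common per-triangle tangent-basis formulation, which is equivalent to differentiating the linear expression above; Vec3, tangentBasis(), and toTangentSpace() are hypothetical helpers introduced here (not the authors' Cg code), a non-degenerate UV mapping is assumed, and no orthogonalization of T and B is performed.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
struct Vec2 { double u, v; };

static Vec3 sub(Vec3 a, Vec3 b)     { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b)   { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static Vec3 normalize(Vec3 a)       { double l = std::sqrt(dot(a, a)); return scale(a, 1.0 / l); }

struct TangentBasis { Vec3 T, B, N; };

// Build T = dP/du, B = dP/dv and N for one triangle from its object-space
// positions (p0, p1, p2) and texture coordinates (t0, t1, t2).
TangentBasis tangentBasis(Vec3 p0, Vec3 p1, Vec3 p2, Vec2 t0, Vec2 t1, Vec2 t2) {
    Vec3 e1 = sub(p1, p0), e2 = sub(p2, p0);
    double du1 = t1.u - t0.u, dv1 = t1.v - t0.v;
    double du2 = t2.u - t0.u, dv2 = t2.v - t0.v;
    double invDet = 1.0 / (du1 * dv2 - du2 * dv1);   // assumes a non-degenerate UV mapping

    Vec3 T = normalize(scale(sub(scale(e1, dv2), scale(e2, dv1)), invDet));
    Vec3 B = normalize(scale(sub(scale(e2, du1), scale(e1, du2)), invDet));
    Vec3 N = normalize(cross(T, B));
    return {T, B, N};
}

// Rotation into tangent space: the rows of R are T, B, N, so R*v reduces to
// three dot products. Applied to the object-space light and view vectors.
Vec3 toTangentSpace(const TangentBasis& tb, Vec3 objectSpaceVec) {
    return { dot(tb.T, objectSpaceVec), dot(tb.B, objectSpaceVec), dot(tb.N, objectSpaceVec) };
}
```

In the actual pipeline this transformation would be evaluated per vertex (or per pixel) in a vertex/fragment program, with the reflectance map normal looked up from the texture and the lighting computed entirely in tangent space.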

To evaluate the effects of adding specular highlights on visual realism, subject-specific textures of 3D models were derived. For this purpose, video bronchoscope images (Olympus BF Type, with a field of view of 120°) were registered with CT scans (Siemens Somatom Volume Zoom 4-channel multi-detector) by using a pq-space-based 2D/3D registration technique Citation[21]. This technique exploits the unique geometrical constraints between the camera and the light source in endoscopic procedures, where the point light source is near the camera. In this case, the intensity gradient can be used to reduce the conventional shape-from-shading equations to a linear form, allowing the exact camera pose of the bronchoscope examinations to be identified.

Subsequently, the surface details, including texture and shading parameters, are extracted. The texture map is derived directly from the video bronchoscope images. The shading parameters are recovered by modeling the bidirectional reflectance distribution function ρ of the visible surfaces using a cubic curve parameterized on γ (the cosine of the angle between the viewing vector V and surface normal N) as follows Citation[22]:

$$\rho_p(\gamma) = c_0 + c_1\gamma + c_2\gamma^2 + c_3\gamma^3$$

where

$$\gamma = \frac{\mathbf{V} \cdot \mathbf{N}}{\left\| \mathbf{V} \right\| \left\| \mathbf{N} \right\|}$$

To account for shading variations due to the distance between the light source and the surface, another cubic curve was used to model the depth-dependent intensity variation as

$$w_p(r) = 1 + d_1 r + d_2 r^2 + d_3 r^3$$

where r = (z − zmin)/(zmax − zmin) and z is the distance from the surface point to the viewpoint. The shading of a surface point observed from viewpoint p can thus be expressed as

$$\vartheta_p(\gamma, r) = \rho_p(\gamma)\, w_p(r)$$

From the above equations, it can be seen that for each p there is a unique set of parameters, i.e., (c0, c1, c2, c3, d1, d2, d3), involved in the two cubic expressions, that determines the shading of every visible point of the 3D model. These parameters can be estimated by back-projecting each registered video image onto the 3D geometry and then fitting ϑp to the pixel intensities.
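
As a minimal sketch (a reading of the cubic model reconstructed above, not the authors' implementation), the fitted parameters can be evaluated for a surface point as follows; ShadingParams and shade() are hypothetical names introduced here.

```cpp
#include <algorithm>

struct ShadingParams { double c0, c1, c2, c3, d1, d2, d3; };

// gamma: cosine of the angle between the viewing vector and the surface normal.
// z, zmin, zmax: distance of the point from the viewpoint and the scene depth range.
double shade(const ShadingParams& p, double gamma, double z, double zmin, double zmax) {
    double r = (z - zmin) / (zmax - zmin);                          // normalized depth
    double rho = p.c0 + p.c1 * gamma + p.c2 * gamma * gamma
               + p.c3 * gamma * gamma * gamma;                      // cubic in gamma
    double w = 1.0 + p.d1 * r + p.d2 * r * r + p.d3 * r * r * r;    // depth-dependent term
    return std::max(rho * w, 0.0);                                  // clamp to non-negative shading
}
```

Fitting the seven parameters then amounts to minimizing the difference between shade() and the back-projected pixel intensities over all visible sample points for a given viewpoint p.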

Assessment of visual realism

A user study was conducted to evaluate the effects of the proposed reflectance modeling technique for added visual realism. Participants from two subject groups were considered, where the first group consisted of 16 graduate students and the second group consisted of 7 experienced bronchoscopists who had each performed between 150 and 2000 endoscopy procedures. All subjects had normal or corrected vision and were volunteers, i.e., they were not paid to participate in the study.

During the experiment, static images were presented to each subject on a computer monitor for evaluation, one image at a time. The images showed views of the bronchial tree from 8 poses across 5 different patients (estimated from the camera parameters and used for generating the synthetic views). Five images were created for each pose, comprising five different categories ranging from least realistic to real. Category 5 represents a real captured video bronchoscope image, and Category 3 images are those rendered with subject-specific textures extracted with the BRDF method described above after 2D/3D registration. Category 4 is the same as Category 3, but with specular highlights added using the noise-based technique to improve visual realism. For Category 2 and Category 1 images, low-resolution surface textures were used, with the latter representing the lowest quality on the evaluation scale. The participants were first shown two examples of Category 1 (most unrealistic) and Category 5 (real) images displayed side by side for visual calibration. The subjects were then presented with a series of 15 images and asked to rank each image in terms of visual realism on a 5-point Likert scale (1 to 5). A two-alternative forced choice (2AFC) test was also conducted. In this test, the subjects viewed side-by-side image pairs from Categories 3, 4 and 5, always from different poses, and were asked to choose which of the two (left or right) was the more realistic. It is worth noting that no time limits were imposed during the experiments and the images were displayed in random order. The subjects were not told which images were real, nor were they informed how the images had been obtained.

Results

The proposed technique has been used to simulate light-tissue reflection, with the results obtained with a Cg Citation[23] implementation on NVIDIA FX graphics hardware illustrated in Figure 2. The images acquired using conventional per-vertex Phong lighting and OpenGL multi-texturing are also presented. It can be seen that the noise-based method replicates well the specular reflection behavior of light attributed to the mucous layer, thus achieving tissue reflection comparable to that observed in real procedures. The method also avoids the problem of plastic-like surface appearance by unevenly distributing the highlights and providing realistic highlight shapes, in contrast to the hexagonal shapes found in poorly tessellated surfaces rendered with conventional techniques. Different noise functions can be used to simulate light reflection from different tissue types, and Figure 3 demonstrates the effects of using different noise function frequencies on the visual appearance of the rendered surface. In addition, the pixel-level control offered by the programmable graphics pipeline provides further flexibility for managing visual tissue characteristics such as the color and transparency of the mucous layer.

Figure 2. Different views of a surface patch rendered using the proposed method (left) and the OpenGL multi-texturing approach (right). Notice the plastic-like surface and the hexagonal shape of the specular highlights with the multi-texturing method. [Color version available online.]

Figure 3. The effect of noise frequency on the rendering result, illustrating how different specular appearance of the tissue can be simulated. A higher noise frequency is used in the right image. [Color version available online.]

The noise-based reflectance modeling approach can be used to enhance the quality of patient-specific simulations. Figure 4 shows four examples of patient-specific bronchoscope images used as stimuli in the visual realism evaluation experiments described above, whereas Figure 5 shows examples of the five image categories used for one of the viewing directions of the bronchoscope camera. The images show the same surface rendered with and without specular highlights. Figure 6 summarizes the mean score for all the images of each category, averaged over the participants of each group for the first experiment. It is evident that the overall score for all the subjects shows a steady increase in perceived realism when the proposed method is used. It can also be observed that the expert group is not significantly different from the naïve group in judging realism, which suggests that results obtained using naïve subjects to test the realism of tissue samples may transfer to training simulators for physicians. Further statistical analysis carried out using the Friedman test, considering the combined realism scores from all the naïve participants for images of Categories 3, 4 and 5, demonstrated significant differences between image categories (χ2 = 41.962, df = 4, p < 0.001). Additional Wilcoxon signed ranks tests for inter-category comparison showed significant differences between all categories when compared to Category 5 (the real image), except for Category 4 and, marginally, Category 3, suggesting that Category 4 images were perceived to be close to photorealistic.

Figure 4. Examples of the patient-specific model rendered with matched BRDF surface texture with and without specular highlights. Models incorporating noise-based specular highlights (left column) generally achieved better scoring in visual realism evaluation experiments than those lacking the same effect (right column). [Color version available online.]

Figure 5. Example images showing the five different categories used for user evaluation. (1) Unreal; (2) BRDF-low resolution; (3) BRDF; (4) BRDF-Specular; and (5) Real bronchoscope image. [Color version available online.]

Figure 6. The average visual assessment score for all the images of each category across all participants in the naïve and expert groups (error bars show one standard deviation). [Color version available online.]

For the 2AFC test, it was found that when comparing Category 3 and Category 4 images side-by-side, the latter was selected as the most realistic 68% of the time. Therefore, missing specular highlights reduced the users' perception of reality. This result was confirmed when comparing Category 3 images with Category 5 images, where the real images were selected as being more realistic 73% of the time. However, when comparing Category 4 images with real images, the real images were selected as being more realistic only 48% of the time. This shows how the addition of noise-based specular highlights positively affected users' judgment of realism.

Recognizing that simple score comparison is a crude method of assessing the effectiveness of image synthesis, a Receiver Operating Characteristic (ROC) analysis was applied to the results of the second experiment. For this analysis, it was assumed that the scores given by each observer could be converted into a binary decision process (synthesized or not synthesized) based on decision-level thresholding. For any given threshold value, some synthetic images will be correctly classified (a true positive), while some real images will be classified as synthetic (a false positive). As this threshold changes, the percentage of true positives versus false positives can be plotted on a graph, yielding the ROC curve. A perfect classifier has a curve consisting of a single point at 100% true positives and 0% false positives. A classifier relying on a purely random scoring system will, in the long run, have equal percentages of true and false positives and yield an ROC curve along the diagonal of the graph. Observers who perform poorly in separating real bronchoscopy video images from synthesized ones act more like the random classifier, whereas observers who correctly determine the real and synthetic images perform like the perfect classifier. By plotting the ROC curves for our groups of observers for different sets of images, it is possible to determine which methods produce images perceived to be more realistic than others. Since our scoring system ranges from 1 to 5, this effectively allows four decision threshold levels. The video image scores determine the false-positive percentage, while the scores for each set of synthesized images determine the true-positive percentage. These results are shown in Figure 7. According to the ROC analysis, observers performed better in differentiating between the images without specular highlights and the real images. This can be seen from the “No Specular” curve being closer to the upper left-hand corner of the graph, which implies a higher true positive (or lower false positive) rate and hence better differentiation. On the other hand, observers had a more difficult time classifying the images rendered with specular highlights, as can be seen from the “Specular” curve being close to the “Random Classifier” curve, which reveals near-random observer behavior. Clearly, these results show the quality of the BRDF rendering technique used in the patient-specific experiment and the importance of specular highlights.
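
A minimal sketch of this thresholding procedure is given below, under the assumed convention that a lower realism score means the observer flags the image as synthetic; rocCurve() and the example scores are hypothetical and for illustration only.

```cpp
#include <iostream>
#include <vector>

struct RocPoint { double falsePositiveRate, truePositiveRate; };

static double fractionAtOrBelow(const std::vector<int>& scores, int threshold) {
    int count = 0;
    for (int s : scores) if (s <= threshold) ++count;
    return scores.empty() ? 0.0 : static_cast<double>(count) / scores.size();
}

// syntheticScores: realism scores given to synthesized images (true class: synthetic).
// realScores:      realism scores given to real video frames (true class: real).
std::vector<RocPoint> rocCurve(const std::vector<int>& syntheticScores,
                               const std::vector<int>& realScores) {
    std::vector<RocPoint> curve;
    for (int threshold = 1; threshold <= 4; ++threshold) {              // four usable thresholds on a 1-5 scale
        curve.push_back({ fractionAtOrBelow(realScores, threshold),        // false-positive rate
                          fractionAtOrBelow(syntheticScores, threshold) }); // true-positive rate
    }
    return curve;
}

int main() {
    // Illustrative (made-up) scores only.
    std::vector<int> synthetic = {2, 3, 3, 4, 4, 5};
    std::vector<int> real      = {4, 5, 5, 4, 5, 3};
    for (const RocPoint& p : rocCurve(synthetic, real))
        std::cout << p.falsePositiveRate << " " << p.truePositiveRate << "\n";
    return 0;
}
```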

Figure 7. The ROC curves for different classes of images. Images without specular highlights are more easily spotted as synthetic. It can also be seen that observers had significantly more difficulty with the specular images and performed only marginally better than a random classifier, thus illustrating the visual realism achieved by the method proposed in this paper.

Discussion and conclusions

The complexity of cellular structures and wet mucous membranes has major effects on light interactions with internal surfaces and organs. Conventional computer graphics reflectance modeling techniques use primitive physical laws and assume simplified surface characteristics. Physically based models, on the other hand, are constrained by their high computational costs and the lack of measured parameters for real-world surfaces. As a result, simulating light reflection in surgical simulation with conventional approaches is a challenging task. In this paper, a novel noise-based reflectance modeling technique suitable for surgical simulation has been described. The use of noise texture offers a practical approach for simulating the intricate tissue-light interaction behavior. In graphics, computer-generated noise has already been used in many applications to represent the roughness and complexity of surfaces found in nature. It has been shown that Perlin noise offers real-time processing, controllable results, and realistic behavior, and it is used in this study to modulate the shape and distribution of specular highlights for achieving photorealistic rendering. In order to meet the requirement for interactivity in surgical simulation, the proposed technique is divided into pre-processing and interactive stages. In the first stage, the 2D Perlin noise function is used to generate a noise image where noise samples are defined for all image points. The noise image is subsequently converted to a reflectance map by encoding the normal at each point as an RGB color triplet. At the beginning of the interactive stage, the mapping between the reflectance map and tangent coordinate spaces is defined and then used during interactive simulation for lighting computations. The results illustrate the high visual quality that can be achieved compared to conventional lighting approaches. It has also been shown that modifying the noise function can simulate different tissue appearance. The use of the GPU during the interactive stage further accelerates the execution of lighting calculations. Considering the fact that the throughput of graphics hardware is roughly doubling every six months Citation[24], the potential clinical value of the technique can be steadily improved by making use of faster, up-to-date graphics hardware.

One significant result derived from this paper is the effect of specular highlights on the degree of perceived realism. The visual assessment scores from the user study experiments have shown that, when combined with subject-specific texture extraction through a BRDF model, the difference between images derived from the proposed method and those from real video bronchoscopy is minimal. The results also showed little difference in the derived visual score between the naïve and expert groups, thus highlighting the potential value of the technique for both basic and advanced surgical skills training and assessment. It is worth noting that, theoretically, specularity can also be modeled explicitly in the BRDF formulation for acquiring subject-specific textures. This, however, is impractical as specular highlights are not always visible in every video frame. Even when they are present, they only occupy a few dozen pixel samples due to the highly reflective mucous membrane. This makes the number of sampling points that can be used for BRDF fitting rather small, thus prohibiting reliable estimation. Furthermore, the narrowness of the specular lobe of the BRDF would require highly accurate surface normal information for reliable parameter estimation. In reality, however, the surface normals are estimated from the geometry recovered from the 3D CT data, which contains a certain level of error due to the intrinsic voxel resolution used. As a result, direct incorporation of the specular term into the existing BRDF model would not have yielded usable estimates.

In this study, we used ROC analysis to compare the effects of specular highlights on perceived visual realism. The ROC curves obtained provide a graphical representation of the trade-off between false positive and true positive rates for the different thresholds determined by the scores of each image category. It was observed that the addition of specular highlights complicated the discrimination between real and synthetic images, as indicated by the corresponding ROC curve lying close to the diagonal associated with a random classifier.

In conclusion, we have described a reflectance modeling algorithm that uses noise functions to generate lighting behavior similar to that found in real surgical procedures. The proposed technique can be used in conjunction with patient-specific data for the construction of high-fidelity bronchoscope simulation environments that can be freely navigated with enhanced photorealism. The results of the visual assessment experiments carried out with both naïve and expert subjects demonstrate the value of the system for training and surgical planning. Additional modifications and improvements can be made to the current technique to further enhance the performance of reflectance modeling. For example, noise generation can be moved from the pre-processing to the interactive stage by using a GPU-based noise implementation Citation[25], thereby alleviating some of the limitations of image-based representations, such as aliasing and storage requirements. Future work may also involve a more detailed investigation of the relationship between the noise functions used and the resultant tissue appearance.

Acknowledgments

The authors would like to thank Professor Stella Atkins of Simon Fraser University, Canada, for her assistance in the user evaluation, Dr. Athol Wells, Dr. Pallav Shah and Professor David Hansell for collecting the patient studies, and Fani Deligianni of the Royal Society/Wolfson Foundation Medical Image Computing Laboratory for providing the 2D/3D registration algorithms used for this study. Financial support from the UK Engineering and Physical Sciences Research Council (EPSRC) is acknowledged.

References

  • Bro-Nielsen M. Simulation techniques for minimally invasive surgery. Min Invas Ther Allied Technol 1997; 6(2)106–110
  • Shah J, Darzi A. Simulation and skills assessment. Proceedings of the First International Workshop on Medical Imaging and Augmented Reality (MIAR 2001), Hong Kong, China, June 2001. IEEE Computer Society, 2001; 5–9
  • Liu A, Tendick F, Cleary K, Kaufmann C. A survey of surgical simulation: applications, technology, and education. Presence: Teleoperators and Virtual Environments 2003; 12: 599–614
  • Hanrahan P, Krueger W. Reflection from layered surfaces due to subsurface scattering. Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 1993), Anaheim, CA, July 1993, JT Kajiya. ACM Press, New York 1993; 165–174
  • Neyret F, Heiss R, Senegas F. Realistic rendering of an organ surface in real-time for laparoscopic surgery simulation. The Visual Computer 2002; 18: 135–149
  • Blinn JF. Light reflection functions for simulation of clouds and dusty surfaces. Proceedings of the 9th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 1982), Boston, MA, July 1982, RD Bergeron. ACM Press, New York 1982; 21–29
  • Cook RL, Torrance KE. A reflection model for computer graphics. ACM Trans Graphics 1982; 1: 7–24
  • He XD, Torrance KE, Sillion FX, Greenberg DP. A comprehensive physical model for light reflection. Proceedings of the 18th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 1991), Las Vegas, NV, July 1991, JJ Thomas. ACM Press, New York 1991; 175–186
  • Dana KJ, Ginneken BV, Nayar SK, Koenderink JJ. Reflectance and texture of real-world surfaces. ACM Trans Graphics 1999; 18: 1–34
  • Matusik W, Pfister H, Brand M, McMillan L. A data-driven reflectance model. ACM Trans Graphics 2003; 22: 759–769
  • Ward GJ. Measuring and modeling anisotropic reflection. Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 1992), Chicago, IL, July 1992, JJ Thomas. ACM Press, New York 1992; 265–272
  • Shirley P, Smits B, Hu H, Lafortune E. A practitioners' assessment of light reflection models. Proceedings of the 5th Pacific Conference on Computer Graphics and Applications (1997). IEEE Computer Society, Washington, DC, 1997; 40–49
  • Phong BT. Illumination for computer generated pictures. Commun ACM 1975; 18: 311–317
  • Perlin K, Hoffert E. Hypertexture. Proceedings of the 16th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 1989), Boston, MA, July 1989, JJ Thomas. ACM Press, New York 1989; 253–262
  • Ebert DS. Texturing and Modeling: A Procedural Approach. Morgan Kaufmann, 2002
  • Prusinkiewicz P, Lindenmayer A. The Algorithmic Beauty of Plants. Springer-Verlag, Berlin 1990
  • Weber J, Penn J. Creation and rendering of realistic trees. Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 1995), Los Angeles, CA, August 1995, SG Mair, R Cook. ACM Press, New York 1995; 119–128
  • Peercy MS, Olano M, Airey J, Ungar J. Interactive multi-pass programmable shading. Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 2000), New Orleans, LA, July 2000, K Akeley. ACM Press, New York 2000; 425–432
  • Perlin K. An image synthesizer. Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 1985), San Francisco, CA, July 1985, P Cole, R Heilman, BA Barsky. ACM Press, New York 1985; 287–296
  • Fernando R, Kilgard MJ. The Cg Tutorial. Addison Wesley. 2003
  • Deligianni F, Chung A, Yang GZ. Patient-specific bronchoscope simulation with pq-space-based 2D/3D registration. Comput Aided Surg 2004; 9: 215–226
  • Chung AJ, Deligianni F, Shah P, Wells A, Yang GZ. Enhancement of visual realism with BRDF for patient specific bronchoscopy simulation. Proceedings of the 7th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2004), Saint-Malo, France, September 2004, C Barillot, DR Haynor, P Hellier. Springer-Verlag, Berlin 2004; 486–493, Part II. Lecture Notes in Computer Science 3217
  • Mark WR, Glanville RS, Akeley K, Kilgard MJ. Cg: a system for programming graphics hardware in a C-like language. ACM Trans Graph 2003; 22: 896–907
  • Owens JD, Luebke D, Govindaraju N, Harris M, Krüger J, Lefohn A, Purcell TJ. A survey of general-purpose computation on graphics hardware. Proceedings of the Annual Conference of the European Association for Computer Graphics (Eurographics 2005), Dublin, Ireland, August 2005; 21–51
  • Fernando R. GPU Gems. Addison Wesley. 2004
