Research Article

A full geometric and photometric calibration method for oblique-viewing endoscopes

Pages 19-31 | Received 28 Jan 2008, Accepted 20 Sep 2009, Published online: 30 Apr 2010

Figures & data

Figure 1. Stryker 344-71 arthroscope Vista (70 degree, 4 mm). An oblique endoscope consists of a scope cylinder with a lens and point light sources at the tip (which is tilted at an angle from the scope cylinder axis), a camera head that captures video images, and a light source device that supports the illumination. The scope cylinder is connected to the camera head via a coupler. This connection is flexible to enable rotation of the scope cylinder and camera head separately or together.

Figure 2. The geometric model of an endoscope based on a tracking system. A new coupler (see Figure 1b) has been designed to enable mounting of an optical marker on the scope cylinder, ensuring that the transformation from the scope (marker) coordinates O2 to the lens system (camera) coordinates O3 is fixed. The optical tracker defines the world coordinates O1. Two optical markers are attached to the coupler and camera head separately to compute the rotation θ between them.
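The geometric model in Figure 2 reduces to a chain of rigid transforms: the tracker reports the pose of the scope marker in world coordinates O1, and a single fixed transform maps the scope-marker coordinates O2 into the lens-system coordinates O3. The minimal sketch below illustrates that chain with numpy; the function name and the fixed marker-to-camera transform are illustrative placeholders rather than the paper's notation, and the fixed transform would come from a one-time calibration.

```python
import numpy as np

def to_camera_frame(p_world, T_world_marker, T_marker_camera):
    """Map a 3D point from tracker (world) coordinates O1 into lens-system
    (camera) coordinates O3 via the scope-marker coordinates O2.

    T_world_marker  : 4x4 marker pose reported by the tracker (maps O2 -> O1).
    T_marker_camera : fixed 4x4 transform from marker to camera (maps O2 -> O3),
                      estimated once during calibration (hypothetical name).
    """
    p_h = np.append(p_world, 1.0)                   # homogeneous point in O1
    p_marker = np.linalg.inv(T_world_marker) @ p_h  # O1 -> O2
    p_camera = T_marker_camera @ p_marker           # O2 -> O3
    return p_camera[:3]
```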

Figure 3. A comparison between the system of Yamaguchi et al. and our own system. In the former, the camera head is tracked such that the transformation from the marker to the lens system is not fixed but depends on the rotation angle θ. Using the marker coordinates as a reference, the lens system is rotated around the scope cylinder through θ, but the image plane (that is in the camera head) remains the same. Yamaguchi et al. use two additional transformations to describe the effect of rotation, so their model becomes complicated. Moreover, they must calibrate the axis of both the scope cylinder and the lens system by using another optical marker attached to the scope cylinder. Based on our observation, it is possible to simplify the model if we fix the transformation between the marker and the lens system. We designed a coupler that enables the optical marker to be mounted on the scope cylinder. We then set the marker coordinates as a reference; the lens system remains fixed. The rotation only affects the image plane since the camera head is rotated around the cylinder (reference), and the image plane only rotates around the principal point. Since the principal point is an intrinsic parameter, we need only estimate the rotation angle. As a result, we have a very simple model (see details in the text).
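Because the rotation only affects the image plane, and the image plane rotates about the principal point, compensating for a measured rotation θ amounts to rotating pixel coordinates around the principal point. A minimal sketch of that step, assuming θ is given in radians and the principal point (cx, cy) comes from the intrinsic calibration; the names are illustrative, not the paper's notation:

```python
import numpy as np

def compensate_rotation(u, v, theta, cx, cy):
    """Rotate an image point (u, v) around the principal point (cx, cy)
    by the camera-head rotation angle theta (radians). This sketches the
    compensation step described in the text, not the authors' exact formulas."""
    c, s = np.cos(theta), np.sin(theta)
    du, dv = u - cx, v - cy
    return cx + c * du - s * dv, cy + s * du + c * dv
```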

Figure 4. Relationship between the rotation angle θ and two marker coordinates. O1 is attached to the scope cylinder and O2 is attached to the camera head. A indicates the position of O2 when θ = 0 and B indicates the position of O2 given a rotation θ. Given any point Pr in O2, its trace following the rotation of the camera head is a circle in Marker 1's coordinates O1: the point moves from its position at A to its position at B. This circle also lies on the plane perpendicular to the axis of the scope cylinder. O is the center of the circle.

Figure 5. Estimated rotation angles for the two endoscopes. In each trial we rotated the camera head with respect to the scope cylinder and captured an image. We captured a few images for the initial position, then acquired two images for each rotation angle. The red curves are the estimated rotation angles from different RANSAC iterations; the black curve is the average rotation angle.

Table I.  Pseudo code of RANSAC for estimating the center of the circle
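Table I is not reproduced here, but the idea follows from Figure 4: a fixed point on the camera-head marker traces a circle around the scope-cylinder axis, and RANSAC rejects poorly tracked samples when estimating that circle's center. A minimal sketch under those assumptions (sample three tracked positions, take their circumcenter, and score the candidate by how well all samples fit the implied circle); the thresholds and names are illustrative, not the table's actual pseudocode:

```python
import numpy as np

def circumcenter_3d(p1, p2, p3):
    """Center of the circle through three 3D points (in the plane they span)."""
    a, b = p2 - p1, p3 - p1
    n = np.cross(a, b)
    nn = np.dot(n, n)
    if nn < 1e-12:
        return None                                  # (near-)collinear sample
    offset = (np.dot(a, a) * np.cross(b, n) + np.dot(b, b) * np.cross(n, a)) / (2.0 * nn)
    return p1 + offset

def ransac_circle_center(points, n_iter=500, tol=0.5):
    """RANSAC estimate of the circle center from an (N, 3) array of tracked
    positions of one marker point (one row per captured rotation)."""
    best_center, best_inliers = None, -1
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        i, j, k = rng.choice(len(points), size=3, replace=False)
        center = circumcenter_3d(points[i], points[j], points[k])
        if center is None:
            continue
        radius = np.linalg.norm(points[i] - center)
        residuals = np.abs(np.linalg.norm(points - center, axis=1) - radius)
        inliers = np.count_nonzero(residuals < tol)
        if inliers > best_inliers:
            best_center, best_inliers = center, inliers
    return best_center
```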

Table II.  Pseudo code of RANSAC for estimating the rotation angle
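Table II is likewise not reproduced; the sketch below only illustrates the underlying geometry. Once the circle center and the cylinder-axis direction are known, the rotation angle is the signed angle between a point's reference position (θ = 0) and its current position, projected onto the circle plane, with a simple consensus step over several tracked points. All names and thresholds are assumptions, not the authors' exact procedure:

```python
import numpy as np

def rotation_angle(p_ref, p_cur, center, axis):
    """Signed angle (radians) through which p_ref moved to p_cur around the
    scope-cylinder axis, given the circle center estimated above."""
    n = axis / np.linalg.norm(axis)
    a, b = p_ref - center, p_cur - center
    a -= np.dot(a, n) * n              # project both vectors onto the circle plane
    b -= np.dot(b, n) * n
    return np.arctan2(np.dot(np.cross(a, b), n), np.dot(a, b))

def ransac_rotation_angle(p_refs, p_curs, center, axis,
                          n_iter=200, tol=np.deg2rad(1.0)):
    """Consensus angle over several marker points; angle wrap-around near
    +/- pi is ignored in this simple sketch."""
    angles = np.array([rotation_angle(a, b, center, axis)
                       for a, b in zip(p_refs, p_curs)])
    best, best_inliers = None, -1
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        cand = angles[rng.integers(len(angles))]
        inliers = np.abs(angles - cand) < tol
        if inliers.sum() > best_inliers:
            best, best_inliers = angles[inliers].mean(), inliers.sum()
    return best
```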

Figure 6. The endoscopes used in the experiments. (a) Smith & Nephew video arthroscope–autoclavable SN-OH 272589 (30 degree, 4 mm). (b) Stryker 344-71 arthroscope Vista (70 degree, 4 mm).

Figure 7. (a) The back projection with and without rotation compensation. The green points are the ground truth: the 2D corner pixels on the image of the calibration pattern. The red points are the back projection of the 3D world positions of the corners using the first equation of Equation 2, which has no rotation compensation. The blue points are the back projection using both equations of Equation 2. Since the rotation is included in the camera model, the back-projected pixels are much closer to the ground truth than the red points. (b) An image used in the papers by Yamaguchi et al. [11], [12]. This image has higher resolution, better lighting and less distortion than our own.
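The error curves in Figures 7 and 8 can be thought of as comparing back-projected calibration-pattern corners with and without the compensation step. The paper's Equation 2 is not reproduced here; the sketch below assumes a plain pinhole projection with intrinsics K plus the optional rotation about the principal point, and reports an RMS pixel error (all names are illustrative):

```python
import numpy as np

def project(p_cam, K):
    """Pinhole projection of a point in camera coordinates with 3x3 intrinsics K."""
    u, v, w = K @ p_cam
    return np.array([u / w, v / w])

def backprojection_error(corners_3d_cam, corners_2d, K, theta=None):
    """RMS distance (pixels) between detected corners and back-projected ones.
    If theta is given, each projected pixel is additionally rotated around the
    principal point (cx, cy) to compensate the camera-head rotation."""
    cx, cy = K[0, 2], K[1, 2]
    errs = []
    for P, q in zip(corners_3d_cam, corners_2d):
        p = project(P, K)
        if theta is not None:
            c, s = np.cos(theta), np.sin(theta)
            d = p - np.array([cx, cy])
            p = np.array([cx + c * d[0] - s * d[1], cy + s * d[0] + c * d[1]])
        errs.append(np.sum((p - q) ** 2))
    return np.sqrt(np.mean(errs))
```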

Figure 8. Back-projection errors with respect to the rotation angles for the two systems. (a) Stryker 344-71 arthroscope Vista and Polaris optical tracker in our laboratory. (b) Smith & Nephew video arthroscope and OPTOTRAK optical tracker in the operating room. The three images above the graphs correspond to different rotation angles (specified above each image). The red curves represent the errors without rotation compensation; the blue curves are errors with rotation compensation.

Figure 9. The perspective projection model for an endoscope imaging system with two near point light sources: O is the camera projection center; s1 and s2 indicate two light sources. We assume the plane consisting of O, s1 and s2 is parallel to the image plane. The coordinate system is centered at O and Z is parallel to the optical axis and pointing toward the image plane. X and Y are parallel to the image plane, F is the focal length, and a and b are two parameters related to the position of the light sources. Given a scene point P, the corresponding image pixel is p. Assuming a Lambertian surface, the surface illumination thus depends on the surface albedo, light source intensity and fall-off, and the angle between the normal and light rays.
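A minimal sketch of the image-formation model this caption describes: Lambertian shading from two near point light sources with inverse-square fall-off, scaled by the surface albedo. The parameterization of the source positions (the paper's a and b) and the camera response are omitted here, and the function and variable names are illustrative assumptions:

```python
import numpy as np

def predicted_irradiance(P, normal, albedo, sources, intensities):
    """Lambertian irradiance at scene point P lit by near point light sources.

    P, normal   : 3D point and unit surface normal in camera coordinates.
    sources     : list of 3D light-source positions (e.g. s1 and s2 near the tip).
    intensities : per-source radiant intensity (illustrative units).
    Sums intensity * cos(angle) / distance**2 over the sources, scaled by albedo.
    """
    E = 0.0
    for s, I0 in zip(sources, intensities):
        d = s - P                                   # direction toward the light
        r = np.linalg.norm(d)
        cos_term = max(np.dot(normal, d / r), 0.0)  # Lambertian cosine, clamped
        E += I0 * cos_term / r**2                   # inverse-square fall-off
    return albedo * E
```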

Figure 10. Results of photometric calibration. (a) Camera response function in the red channel. The red dots represent the data points and the magenta line represents the nonlinear fit. (b) Camera response function in the green channel. The green dots represent the data points and the magenta line represents the nonlinear fit. (c) Camera response function in the blue channel. The blue dots represent the data points and the magenta line represents the nonlinear fit. (d) Calibrated light intensity at different levels (blue) and the ground truth (green). We use level 6 as a reference and plot levels 1-5, where a smaller level corresponds to a higher light intensity. The slight variation in range at the high intensities may be caused by saturation. (e) Original image of the color chart. (f) . (g) The cosine term . (h) The spatial distribution function m(x, y).
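The response-function fits in panels (a)-(c) can be reproduced in spirit by fitting a nonlinear curve to measured pixel values versus known relative radiance (for example, the gray patches of the color chart). The gamma-style model below is only an assumed form, since the paper's parameterization is not given here, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def response(irradiance, alpha, gamma):
    """Assumed gamma-style camera response: pixel value = alpha * E**gamma."""
    return alpha * np.power(irradiance, gamma)

# Synthetic stand-in for one color channel; in the real calibration these would
# be measured pixel values against known relative radiances of chart patches.
E = np.linspace(0.05, 1.0, 12)
measured = response(E, 0.95, 0.45) + np.random.default_rng(0).normal(0, 0.01, E.size)

params, _ = curve_fit(response, E, measured, p0=[1.0, 0.5])
print("fitted alpha, gamma:", params)
```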
