Figures & data
Figure 1. A schematic illustration of the basic principle of eye tracking. (a) When infrared light is shone onto the eye, several reflections occur on the boundaries of the lens and cornea, and the first Purkinje image is of particular interest to video-based oculography. (b) As the centers of curvature of the cornea and the eye are different, during a saccade the first Purkinje image moves approximately half as far as the pupil. [Color version available online.]
![Figure 1](/cms/asset/19196602-6f67-402f-b5c0-bf58076fd118/icsu_a_197035_f0001_b.jpg)
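The geometry in Figure 1(b) is what makes the pupil-to-first-Purkinje-image (corneal glint) vector useful for video-based oculography: because the glint moves roughly half as far as the pupil during a rotation, their difference vector varies with gaze while being fairly insensitive to small head translations. The sketch below is not the paper's implementation; it assumes a simple linear (affine) calibration model and hypothetical pixel coordinates.

```python
import numpy as np

def gaze_vector(pupil_xy, purkinje_xy):
    """Difference vector (pupil centre minus corneal glint), in image pixels."""
    return (pupil_xy[0] - purkinje_xy[0], pupil_xy[1] - purkinje_xy[1])

def calibrate(vectors, screen_points):
    """Fit a least-squares affine map from difference vectors to screen
    coordinates, using a calibration grid of known target positions.
    (An affine model is an assumption; real trackers often use
    higher-order polynomial mappings.)"""
    V = np.column_stack([np.asarray(vectors, float), np.ones(len(vectors))])
    S = np.asarray(screen_points, float)
    coeffs, *_ = np.linalg.lstsq(V, S, rcond=None)
    return coeffs  # 3x2 matrix of affine parameters

def estimate(coeffs, vector):
    """Map a new pupil-glint vector to an estimated screen point."""
    v = np.array([vector[0], vector[1], 1.0])
    return v @ coeffs
```

In use, a subject fixates each point of a calibration grid, the pupil-glint vectors are recorded, and `calibrate` recovers the mapping applied to all subsequent frames.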
Figure 2. Left: The relationship between the horizontal disparity of the two retinal images and depth perception, which varies with viewing distance. Quantifying ocular vergence allows the 3D fixation point to be determined. Right: A simplified schematic of the stereoscopic viewer with binocular eye tracking. While a subject fuses the parallax images displayed on the monitors, both eyes are tracked. [Color version available online.]
![Figure 2](/cms/asset/11a03eb3-02f1-4962-9691-919705d04a05/icsu_a_197035_f0002_b.jpg)
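The vergence geometry in Figure 2 reduces, in the horizontal plane, to intersecting the two gaze rays: with the eyes separated by a baseline b and gaze angles measured from straight ahead, the fixation depth is z = b / (tan θL − tan θR). A minimal sketch of this triangulation, assuming a 2D simplification and a nominal 65 mm interocular distance (both assumptions, not values from the paper):

```python
import math

def fixation_depth(theta_left, theta_right, baseline=0.065):
    """Depth z (metres) of the binocular fixation point in the horizontal plane.

    theta_left, theta_right: gaze angles of each eye in radians, measured
    from straight ahead, positive towards the subject's right.
    baseline: interocular distance in metres (0.065 m is an assumed average).
    """
    # Eyes at (-b/2, 0) and (+b/2, 0); each gaze ray satisfies
    # tan(theta) = (x -+ b/2) / z, so subtracting eliminates x.
    denom = math.tan(theta_left) - math.tan(theta_right)
    if denom <= 0:
        raise ValueError("gaze rays do not converge in front of the eyes")
    return baseline / denom
```

Note that the denominator shrinks as the target recedes, which is why depth resolution from vergence degrades rapidly with viewing distance, consistent with the caption's point that disparity-based depth perception varies with viewing distance.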
Figure 3. Top: The phantom heart at different deformation levels controlled by the oil-filled pistons (only three are shown here), allowing for reproducible deformation control. Bottom: The reconstructed phantom heart from a series of CT slices. [Color version available online.]
![Figure 3](/cms/asset/29b3f3e3-d004-42be-aef6-a766dfea1436/icsu_a_197035_f0003_b.jpg)
Figure 4. Left: The robot with the mounted optical-tracker retro-reflectors and the stereo camera rig. Right: The configuration of the Polaris optical tracker located in relation to the robot. [Color version available online.]
![Figure 4](/cms/asset/1affe03d-fb27-4e88-ae54-a03f71708cb0/icsu_a_197035_f0004_b.jpg)
Figure 6. (a) Comparative results of the reconstructed depths from the fixation paths of the five subjects studied along the tissue surface illustrated in (b). The subjects followed a predefined path starting from the bottom of the surface and moving towards the top. [Color version available online.]
![Figure 6](/cms/asset/f8e61198-21d4-40c7-9216-3fb5fe5acfc8/icsu_a_197035_f0006_b.jpg)
Figure 7. (a) A comparison of the recovered depths by the five subjects studied against the actual depth of the virtual surface depicted in (b). [Color version available online.]
![Figure 7](/cms/asset/66c316a2-1e82-4149-bd2c-3aca0cabfc59/icsu_a_197035_f0007_b.jpg)
Figure 8. (a) Gaze-contingent motion compensation where the relative shift along the depth axis corresponds to the required reference distance of the gaze-controlled camera from the target. (b) The corresponding linear regression demonstrates the intrinsic accuracy of the method. [Color version available online.]
![Figure 8](/cms/asset/b6e69542-f522-4dd2-808a-7dc3dbdec9f9/icsu_a_197035_f0008_b.jpg)
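A linear regression like that in Figure 8(b) summarises intrinsic accuracy by regressing the gaze-recovered depth shift against the actual target shift: a slope near 1, an intercept near 0, and a high R² indicate faithful compensation. A hypothetical sketch of that analysis (the variable names and data are illustrative, not from the study):

```python
import numpy as np

def regression_accuracy(actual, recovered):
    """Slope, intercept, and R^2 of recovered-vs-actual depth shifts.

    A slope near 1 with high R^2 indicates that the gaze-controlled
    camera tracks the target's depth motion faithfully.
    """
    actual = np.asarray(actual, float)
    recovered = np.asarray(recovered, float)
    # Least-squares fit: recovered ~= slope * actual + intercept
    A = np.column_stack([actual, np.ones_like(actual)])
    (slope, intercept), *_ = np.linalg.lstsq(A, recovered, rcond=None)
    pred = slope * actual + intercept
    ss_res = np.sum((recovered - pred) ** 2)
    ss_tot = np.sum((recovered - recovered.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot
```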
Table I. Error analysis comparing the gaze-contingent motion compensation performance of five subjects.
Table II. Error analysis comparing the oculomotor response of six subjects over a range of frequencies.