Abstract
This work explores how people use visual feedback when performing simple reach-to-grasp movements in a tabletop virtual environment. In particular, we investigated whether visual feedback is required for the entire reach or whether minimal feedback can be used effectively. Twelve participants performed reach-to-grasp movements toward targets at two locations. Visual feedback about the index finger and thumb was provided in four conditions: vision available throughout the movement, vision available up to peak wrist velocity, vision available until movement initiation, or vision absent throughout the movement. We hypothesized that vision available until movement onset would confer an advantage over no vision, yet would not match the performance observed when vision was available up to peak velocity. Results indicated that movement time was longest in the no-vision condition but similar across the three conditions in which vision was available. However, deceleration time and peak aperture measures suggest that grasping is more difficult when vision is unavailable for at least the first third of the movement. These results suggest that designers of virtual environments can manipulate the availability of visual feedback of one's hand without compromising interactivity. This may be applied, for example, when detailed rendering of other aspects of the environmental layout is more important, when motion lag is a problem, or when hand/object concealment is an issue.
This work is supported by the National Science Foundation under Grant No. IIS-0346871. We also thank Jared Markiewitz, Scott Mason, and Nicholas Penwarden for software, data collection, and data analysis support.