Abstract
Human saccades during scene viewing show systematic patterns in amplitude and direction. By using a gaze-contingent display that masks the peripheral visual field, it is possible to manipulate the features available for planning these saccades. Here, we propose several variations of a computational model for predicting the saccade statistics observed empirically during viewing with different gaze-contingent displays. In each case, saccade targets are generated by randomly sampling from a distribution computed from the features available at fixation, the intact information in the periphery, or a combination of the two. The results suggest that saccade generation in complex images arises from a balance of these computations, and the model provides a simple but rigorous framework for testing hypotheses and making novel predictions about the spatial characteristics of eye movements.
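As one illustration of the sampling scheme the abstract describes, the Python sketch below draws a saccade target from a weighted mixture of a "central" and a "peripheral" feature map. The map names, the additive combination rule, and the mixing weight `alpha` are assumptions introduced for illustration; they are not the paper's actual parameterization or implementation.

```python
# A minimal sketch, assuming the model reduces to drawing saccade targets
# from a pixel-wise probability map. `central_map`, `peripheral_map`, and
# `alpha` are hypothetical placeholders, not the paper's parameterization.
import numpy as np

def sample_saccade_target(central_map, peripheral_map, alpha=0.5, rng=None):
    """Draw one saccade target (row, col) from a mixture of two feature maps.

    central_map    : 2D array of feature strengths computed at fixation
    peripheral_map : 2D array of feature strengths from the intact periphery
    alpha          : hypothetical weight balancing the two computations
                     (alpha=1 -> central only, alpha=0 -> peripheral only)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Combine the two maps additively; other combination rules (e.g.,
    # multiplicative) would also be consistent with the abstract's wording.
    mixture = alpha * central_map + (1.0 - alpha) * peripheral_map
    p = mixture.ravel()
    p = p / p.sum()                      # normalize to a probability distribution
    idx = rng.choice(p.size, p=p)        # sample a pixel index
    return np.unravel_index(idx, mixture.shape)  # (row, col) target

# Usage: random maps stand in for real feature maps here.
rng = np.random.default_rng(0)
central = rng.random((48, 64))
peripheral = rng.random((48, 64))
target = sample_saccade_target(central, peripheral, alpha=0.5, rng=rng)
print("sampled saccade target (row, col):", target)
```

Varying `alpha` between 0 and 1 would correspond to the different model variants, from purely peripheral to purely fixation-based target selection.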