Forthcoming Special Issue on: Visual Search and Selective Attention

Learned feature variance is encoded in the target template and drives visual search

Pages 487-501 | Received 05 Apr 2019, Accepted 12 Jul 2019, Published online: 01 Aug 2019
 

ABSTRACT

Real-world visual search targets are frequently imperfect perceptual matches to our internal templates. For example, a friend on different occasions will have different clothes, hairstyles, and accessories, but some of these may vary more than others. The ability to deal with template-to-target variability is important to visual search in natural environments, but we know relatively little about how it is handled by the attentional system. Here, we test the hypothesis that top-down attentional biases are sensitive to the variance of target features and prioritize less-variable dimensions. Subjects were shown target cues composed of coloured dots moving in a specific direction, followed by either a working memory probe (30% of trials) or a visual search display (70% of trials). Critically, the target features in the visual search display differed from the cue: one feature was drawn from a narrow distribution (low-variance dimension), and the other was sampled from a broader distribution (high-variance dimension). The results demonstrate that subjects used knowledge of the likely cue-to-target variance to set template precision and bias attentional selection. Our results suggest that observers are sensitive to the variance of feature dimensions within a target and use this information to weight mechanisms of attentional selection.

Acknowledgements

Support for this work was provided by T-32 EY015387 to P.W. and R01MH113855-01 to J.J.G. We would like to thank Connor Allen, Henry Moore, April Lou, and Megnha Advani for assistance in data collection.

Data availability statement

These data are available on Open Science Framework (OSF) at this location: https://osf.io/ep9sa/.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This work was supported by National Institute of Mental Health [grant number 1R01MH113855-01].
