Name–picture verification as a control measure for object naming: A task analysis and norms for a large set of pictures

Pages 1581-1597 | Received 09 Oct 2006, Accepted 16 Sep 2008, Published online: 25 Jun 2009

Abstract

The name–picture verification task is widely used in spoken production studies to control for nonlexical differences between picture sets. In this task a word is presented first and is followed, after a pause, by a picture. Participants must then make a speeded decision on whether the word and the picture refer to the same object. Using regression analyses, we systematically explored the characteristics of this task by assessing the independent contribution of a series of factors that previous studies have found relevant for picture naming. We found that, for “match” responses, both visual and conceptual factors played a role, but lexical variables were not significant contributors. No clear pattern emerged from the analysis of “no-match” responses. We interpret these results as validating the use of “match” latencies as control variables in studies of spoken production using picture naming. Norms for match and no-match responses for 396 line drawings taken from Cycowicz, Friedman, Rothstein, and Snodgrass (1997) can be downloaded at: http://language.psy.bris.ac.uk/name-picture_verification.html

Acknowledgments

This research was supported by Grant BB/C508477/1 from the Biotechnology and Biological Sciences Research Council (BBSRC) to the second author.

Notes

1 Name–picture verification in this context is similar but not identical to the technique used in neuropsychological studies to assess the integrity of the language-processing system (e.g., Psycholinguistic Assessment of Language Processing in Aphasia, PALPA, Subtest 47; Kay, Lesser, & Coltheart, 1992). In those studies there is no time constraint for the patient, and the dependent variable is usually accuracy, not reaction time. In the present study, on the other hand, we are interested in measuring latencies in a speeded task and in using these latencies as a control measure for picture recognition processes.

2 Three items from the original Cycowicz et al. (1997) set (scoop, squash, and pretzel) were excluded because of the lack of an appropriate translation into Spanish. An additional item (rosebud) was also excluded because its translation is also used as a very rude word in Spanish.

3 We used computer keyboards as input devices for our experiment; keyboards are associated with a degree of measurement error because the device is polled infrequently. Although this adds a small amount of error variance to latencies, we believe it is unlikely to have diminished the power of our experiment to reject the null hypothesis.

4 We only included items with monomorphemic picture labels and excluded items in which the modal label did not match the expected picture name.

5 The same selection criteria as for the first set were applied to the items used in this analysis. Additionally, “trompeta” (trumpet) was excluded because of missing image agreement data.

6 Note that Cuetos et al. (1999) used for their analyses different sources for concept familiarity and word frequency from the ones we used. They collected their own familiarity ratings, and their frequency values were taken from Alameda and Cuetos (1995), which are based on a smaller corpus than LEXESP. However, the general outcome of the analyses is the same irrespective of which of these sets of norms is used.
