Connections and selections: Comparing multivariate predictions and parameter associations from latent variable models of picture naming

Pages 50-71 | Received 04 Jun 2019, Accepted 09 Oct 2020, Published online: 05 Nov 2020

ABSTRACT

Connectionist simulation models and processing tree mathematical models of picture naming have complementary advantages and disadvantages. These model types were compared in terms of their predictions of independent language measures and the associations between model components and measures that should be related according to their theoretical interpretations. The models were tasked with predicting independent picture naming data, neuropsychological test scores of semantic association and speech production, grammatical categories of formal errors, and lexical properties of target items. In all cases, the processing tree model parameters provided better predictions and stronger associations with independent language measures than the connectionist simulation model. Given the enhanced generalizability of latent variable measurements afforded by the processing tree model, evidence regarding mechanistic and representational features of the speech production system is re-evaluated. Several areas are identified as potentially viable targets for elaborating the mechanistic description of picture naming errors.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 Walker et al. (2018) used the same scoring protocol as Foygel and Dell (2000), but additionally included frequencies of non-naming trials and split non-words into two categories based on the response's phonological relation to the target. Unlike Foygel and Dell (2000), Walker et al. (2018) analyzed responses to each item rather than frequencies over the entire test.

2 With one exception: the Sem ability is measured directly on the probability scale, without an item difficulty counterpart. This ability governs the probability of an intrusion by a lexical item that is neither semantically nor phonologically related to the target. While items may differ in the amount of semantic or phonological competition induced by accessing their representations through a spreading activation network, it is assumed that lexical items do not vary in how much they elicit unrelated responses. These responses typically derive from perseverations or visual recognition errors, which depend on participants' abilities to successfully access the lexicon rather than to navigate it.

3 One participant from the University of South Carolina archive who was included in Walker et al. (2018) was excluded here because they produced no naming attempts, a response pattern that the SP model cannot fit.

4 For the individual test, the number of parameters (1,057) appears to exceed the number of trial observations (175), and thus the degrees of freedom in the data. This violation of model fitting determinacy is illusory, however, because the MPT-Naming model is never fit to a single test when estimating item parameters (and, simultaneously, person parameters). The number of MPT-Naming parameters fit to the full data set is (364 × 6) + (175 × 6) + 1 = 3,235, well below the degrees of freedom in the data, which are 364 × 175 = 63,700. Essentially, by contextualizing an individual's responses within our knowledge of how other people with aphasia respond to each item, we can gain a better understanding of their overt response frequencies. Generating cross-validation predictions does not require any additional model fitting (i.e., the parameters are already fixed from fitting the training data).
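As a quick check of the parameter accounting above, a minimal sketch in Python (the variable names are ours; all counts are taken from the note):

    # Recompute the MPT-Naming parameter counts reported in the note above.
    n_participants = 364  # people with aphasia in the full data set
    n_items = 175         # picture naming trials per test
    n_per_unit = 6        # parameters per person and per item

    person_params = n_participants * n_per_unit     # 364 * 6 = 2,184
    item_params = n_items * n_per_unit              # 175 * 6 = 1,050
    total_params = person_params + item_params + 1  # plus one shared parameter

    dof = n_participants * n_items                  # total trial observations

    assert total_params == 3235 and dof == 63700
    print(f"{total_params:,} parameters < {dof:,} degrees of freedom")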

5 Making item-level predictions is essentially a betting problem, or more academically, an optimal decision problem. When betting on the outcome of a series of weighted coin tosses with a fixed binomial probability of .6 toward Heads and an even payout (i.e., a correct prediction of either outcome yields one win), the expected value is greater for a strategy of always betting on Heads (the mode of the outcome probabilities) than for a strategy of betting Heads 60% of the time and Tails 40% of the time. The same is true if bets may be split, assuming a fixed payout regardless of outcome: the expected value of going all-in on Heads exceeds that of splitting the bet with 60% on Heads and 40% on Tails. An example solution to this betting problem is presented in Walker et al. (2018). The result from this binomial situation (flipping a weighted coin) extends to the multinomial situation (rolling a weighted die). The goal of cross-validation at the item level is to make the best possible bets about the response type on each new naming trial, given the information encoded in the parameters of the model after fitting previous trials. Predicting the posterior probability mode is the optimal decision strategy because it yields the highest expected value in terms of the number of accurate predictions.
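To illustrate the binomial case numerically, a minimal sketch (the .6 weighting is from the note; this is an illustration, not the solution presented in Walker et al., 2018):

    # Expected accuracy of two strategies for predicting a weighted coin, P(Heads) = .6.
    p_heads = 0.6
    p_tails = 1 - p_heads

    # Strategy 1: always predict the mode (Heads).
    ev_mode = p_heads                    # correct on 60% of tosses

    # Strategy 2: probability matching -- predict Heads 60% and Tails 40% of the time.
    ev_match = p_heads**2 + p_tails**2   # .36 + .16 = .52

    print(f"mode betting: {ev_mode:.2f}, probability matching: {ev_match:.2f}")
    # Mode betting wins (.60 > .52); the same argument extends to the multinomial
    # case, where predicting the posterior-mode response type is optimal.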

6 As pointed out by a reviewer, the number of participants better fit by the MPT model than by the SP model is relatively small, which raises the question of whether one model is truly better than the other when both fit the vast majority of cases about equally well. We would offer a sporting match analogy: each model is a team, and 353 out of 365 attempts to score were blocked by the other team, but one team scored 11 times while the other scored once. In this analogy, it is not difficult to select a winning team, and it is doubtful that spectators would judge the teams' performances to be about equal.

7 Analyses were also performed using a liberal criterion to identify non-nouns, that is, counting any word for which the noun form was not the primary usage according to the part-of-speech dictionary. This approach yielded 2,263 formal errors from 141 participants who produced at least five formal errors and at least one non-noun for analysis; the median NFER was .16, ranging from .03 to .50. The results were qualitatively the same as those obtained with the conservative criterion, except that the item-level risk associations with S-P and LexPhon exhibited only trends in the predicted directions, rather than significant associations (p = .066 and .056, respectively).

Additional information

Funding

This research was supported by the National Institute on Deafness and Other Communication Disorders [grant number P50 DC014664].
