A linear mixed model analysis of masked repetition priming

Pages 655-681 | Received 01 May 2008, Accepted 01 Mar 2009, Published online: 03 Aug 2009

Abstract

We examined individual differences in masked repetition priming by re-analysing item-level response-time (RT) data from three experiments. Using a linear mixed model (LMM) with subjects and items specified as crossed random factors, the originally reported priming and word-frequency effects were recovered. In the same LMM, we estimated parameters describing the distributions of these effects across subjects. Subjects' frequency and priming effects correlated positively with each other and negatively with mean RT. These correlation estimates, however, emerged only with a reciprocal transformation of RT (i.e., − 1/RT), justified on the basis of distributional analyses. Different correlations, some with opposite sign, were obtained (1) for untransformed or logarithmic RTs or (2) when correlations were computed using within-subject analyses. We discuss the relevance of the new results for accounts of masked priming, implications of applying RT transformations, and the use of LMMs as a tool for the joint analysis of experimental effects and associated individual differences.
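As a small illustration of the transformation discussed above (Python, with made-up RT values, not data from the reported experiments): the reciprocal transformation −1/RT is strictly increasing, so it preserves the ordering of responses while compressing the long right tail typical of RT distributions.

```python
import numpy as np

# Hypothetical RTs in milliseconds (illustrative values only).
rt = np.array([420.0, 480.0, 515.0, 610.0, 900.0])

# The reciprocal transformation: -1/RT.
# The minus sign keeps the direction of effects: slower responses
# still map to larger (less negative) values.
neg_recip = -1.0 / rt

# The transformation is strictly increasing, so rank order is preserved...
assert (np.argsort(rt) == np.argsort(neg_recip)).all()

# ...while the right tail is compressed: on the raw scale the gap
# between 610 and 900 is almost five times the gap between 420 and 480,
# but on the -1/RT scale the ratio of those gaps is much smaller.
print(np.diff(neg_recip))
```

This compression of the tail is what the distributional analyses mentioned in the abstract exploit: on the transformed scale the residuals are closer to symmetric, which can change the sign and size of estimated random-effect correlations.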

Acknowledgements

The research was initiated during Michael Masson's residence as a guest professor at the Interdisciplinary Center for Cognitive Studies at the University of Potsdam. He was supported in part by a discovery grant from the Natural Sciences and Engineering Research Council of Canada. We are indebted to Douglas Bates for providing the lme4 package in the R-project and for stimulating conversations about the interpretation of conditional means, formerly known as BLUPs, as well as their correlations. We are also very grateful to Sachiko Kinoshita for making available the data from the second experiment reported in Kinoshita (2006). Harald Baayen, Sachiko Kinoshita, Nicholas Lewin-Koh, John Maindonald, Wayne Murray, Klaus Oberauer, and reviewers commented on an earlier version of the manuscript. This research was supported by Deutsche Forschungsgemeinschaft (KL 955/6 and KL 955/8). Data and R-scripts are provided on request.

Notes

1. It would certainly be in the spirit of LMM to use continuous frequency values rather than two extreme frequency categories. However, we prefer to respect the design choices of the original publications for ease of comparison. For continuous, usually log-transformed, frequencies, the fixed effect represents the linear regression slope for RT on word frequency. The random effect of frequency represents the between-subject variance in linear regression slopes. Linear, quadratic, and even cubic fixed effects of log frequency have been reported for single-fixation durations in reading (e.g., Kliegl, 2007).
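A minimal simulation can illustrate the distinction drawn in this note (Python with numpy, not the authors' R/lme4 code; all numbers are hypothetical): the fixed effect of log frequency is the average regression slope of RT on log frequency, while the random effect is the between-subject spread of those slopes.

```python
import numpy as np

rng = np.random.default_rng(42)

beta_fixed = -25.0   # hypothetical population slope: RT drops with log frequency
tau_slope = 8.0      # hypothetical between-subject SD of the slope (random effect)
n_subjects, n_items = 40, 100

# One true slope per subject, drawn around the fixed effect.
slopes = rng.normal(beta_fixed, tau_slope, n_subjects)
log_freq = rng.uniform(1.0, 10.0, n_items)

# Per-subject OLS slope estimates from simulated RTs.
est = []
for s in slopes:
    rt = 600.0 + s * log_freq + rng.normal(0.0, 50.0, n_items)
    est.append(np.polyfit(log_freq, rt, 1)[0])
est = np.array(est)

# The mean of the subject slopes approximates the fixed effect; their
# spread reflects the random-effect variance (inflated here by OLS
# estimation noise, which an LMM would partial out via shrinkage).
print(est.mean(), est.std())
```

An LMM estimates both quantities jointly in one model rather than in two stages, which is what makes the subject-level effect correlations reported in the article possible.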

2. There is also the option to use Markov Chain Monte Carlo (MCMC) methods to generate a sample from the posterior distribution of the parameters of a fitted model and determine the approximate highest 95% posterior density (HPD) interval for the coefficients in this sample. In our experience, typically involving large data sets like the present one, inferences based on HPD intervals have been overwhelmingly consistent with the t > 2 criterion.
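A sketch of the empirical HPD computation this note refers to (Python with numpy; the authors worked with MCMC samples in R, and the function name and numbers below are our own): for a unimodal posterior, the 95% HPD interval is the shortest interval containing 95% of the posterior sample.

```python
import numpy as np

def hpd_interval(samples, prob=0.95):
    """Shortest interval containing `prob` of the sample -- the
    empirical HPD interval for a unimodal posterior."""
    s = np.sort(np.asarray(samples))
    n = len(s)
    k = int(np.ceil(prob * n))          # number of points inside the interval
    widths = s[k - 1:] - s[:n - k + 1]  # width of every candidate interval
    i = int(np.argmin(widths))          # the shortest candidate wins
    return s[i], s[i + k - 1]

# Hypothetical posterior sample for a fixed-effect coefficient.
rng = np.random.default_rng(0)
sample = rng.normal(loc=30.0, scale=5.0, size=10_000)
lo, hi = hpd_interval(sample, 0.95)
# An HPD interval excluding zero leads to the same inference as |t| > 2.
print(lo, hi)
```

For a roughly symmetric posterior like this one, the HPD interval nearly coincides with the central 95% interval, which is why the two criteria agree so often in practice.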

3. Gelman and Hill (2007) illustrate shrinkage for the case of a model without predictors. Applied to our data, if M is the overall mean RT, m_j and n_j are the mean and number of RTs of subject j, and σ²_e and σ²_s are the residual and between-subject variances, then the predicted mean RT α_j for subject j can be approximated as a precision-weighted average of the subject's mean RT and the overall mean RT: α_j ≈ [(n_j/σ²_e)·m_j + (1/σ²_s)·M] / [(n_j/σ²_e) + (1/σ²_s)]. Then, for the limiting case of n_j = 0, α_j = M, and for n_j → ∞, α_j = m_j. Thus, on the one hand, the fewer RTs contributed by a subject, the stronger is the overall mean's contribution to the predicted mean for this subject; indeed, in the case of missing data (n_j = 0), we simply predict the overall mean M. On the other hand, the larger the number of RTs, the more the prediction is based on the observed subject's mean. Weights also depend on the ratio of residual and between-subject variances. For example, for σ²_s = σ²_e/n_j, subject and overall mean are equally weighted in the prediction; that is, the formula reduces to α_j = (m_j + M)/2. Assuming constant residual variances for subjects (which is not necessary in general), if n_j·σ²_s ≫ σ²_e (i.e., for large differences between subjects relative to the number of observations for subject j and the residual variance), α_j will move towards m_j; conversely, if n_j·σ²_s ≪ σ²_e (if there is large residual variance or if there are few observations), α_j will move towards M.
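The shrinkage formula in this note can be checked numerically (Python; all variance and mean values are hypothetical, chosen only to exercise the limiting cases):

```python
def shrunken_mean(m_j, n_j, M, sigma2_e, sigma2_s):
    """Precision-weighted average of subject mean m_j and overall mean M."""
    w_subj = n_j / sigma2_e   # precision of the subject's observed mean
    w_pop = 1.0 / sigma2_s    # precision contributed by the population
    return (w_subj * m_j + w_pop * M) / (w_subj + w_pop)

M = 600.0            # hypothetical overall mean RT (ms)
m_j = 550.0          # hypothetical mean RT of subject j
sigma2_e = 10_000.0  # hypothetical residual variance
sigma2_s = 2_500.0   # hypothetical between-subject variance

# Limiting case n_j = 0: no data from the subject, predict the overall mean.
assert shrunken_mean(m_j, 0, M, sigma2_e, sigma2_s) == M

# Equal weighting when sigma2_s = sigma2_e / n_j: prediction is the midpoint.
n_j = 4
mid = shrunken_mean(m_j, n_j, M, sigma2_e, sigma2_e / n_j)
assert abs(mid - (m_j + M) / 2.0) < 1e-9

# Many observations: the prediction approaches the subject's own mean.
assert abs(shrunken_mean(m_j, 10**6, M, sigma2_e, sigma2_s) - m_j) < 0.1
```

Each assertion mirrors one of the limiting cases discussed in the note; the conditional means ("BLUPs") reported by lme4 implement the same precision weighting within the full model.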
