Child Neuropsychology
A Journal on Normal and Abnormal Development in Childhood and Adolescence
Volume 23, 2017 - Issue 7
Original Articles

The test–retest reliability of the latent construct of executive function depends on whether tasks are represented as formative or reflective indicators

Pages 822-837 | Received 24 Nov 2015, Accepted 18 Jun 2016, Published online: 29 Jul 2016
 

ABSTRACT

This study investigates the test–retest reliability of a battery of executive function (EF) tasks, with particular interest in whether the method used to create a battery-wide score affects the apparent test–retest reliability of children’s performance. A total of 188 4-year-olds completed a battery of computerized EF tasks twice, approximately two weeks apart. Two approaches were used to create a score indexing children’s overall performance on the battery: (1) the mean score of all completed tasks and (2) a factor score estimate derived from confirmatory factor analysis (CFA). Pearson and intra-class correlations were used to investigate the test–retest reliability of the individual EF tasks as well as of the overall battery score. Consistent with previous studies, the test–retest reliability of individual tasks was modest (rs ≈ .60). The test–retest reliability of the overall battery score differed depending on the scoring approach (r_mean = .72; r_factor_score = .99). It is concluded that children’s performance on individual EF tasks exhibits modest test–retest reliability, which underscores the importance of administering multiple tasks and aggregating performance across them to improve precision of measurement. However, the specific aggregation strategy has a large impact on the apparent test–retest reliability of the overall score. These results replicate our earlier findings and provide additional cautionary evidence against the routine use of factor analytic approaches for representing individual performance across a battery of EF tasks.
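The two battery-wide scoring approaches described in the abstract can be illustrated with a short, self-contained sketch. This is not the authors' code: the data are simulated, the number of tasks is an assumption, and a single-factor exploratory FactorAnalysis model from scikit-learn stands in for the paper's confirmatory factor analysis; intra-class correlations are omitted for brevity.

```python
# Minimal sketch (simulated data, assumed battery of 5 tasks) of the two
# scoring approaches compared in the study: (1) the mean of all completed
# EF tasks and (2) a latent factor score estimate. An exploratory
# single-factor model stands in for the paper's CFA.
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_children, n_tasks = 188, 5  # 188 four-year-olds; 5 tasks is an assumption

# Simulate correlated task scores at Time 1 and a retest ~2 weeks later.
latent = rng.normal(size=(n_children, 1))
t1 = latent + rng.normal(scale=0.8, size=(n_children, n_tasks))
t2 = 0.8 * latent + rng.normal(scale=0.8, size=(n_children, n_tasks))

# Approach 1: mean score across completed tasks.
mean_t1, mean_t2 = t1.mean(axis=1), t2.mean(axis=1)

# Approach 2: single-factor score estimates (refit at each time point;
# a simplification relative to the paper's CFA-based factor scores).
fa = FactorAnalysis(n_components=1, random_state=0)
factor_t1 = fa.fit_transform(t1).ravel()
factor_t2 = fa.fit_transform(t2).ravel()

# Test-retest reliability of each battery-wide score (Pearson r).
print("mean-score r:   %.2f" % pearsonr(mean_t1, mean_t2)[0])
print("factor-score r: %.2f" % pearsonr(factor_t1, factor_t2)[0])
```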

Acknowledgements

The views expressed in this manuscript are those of the authors and do not necessarily represent the opinions and positions of the Institute of Educational Sciences or the Kenneth and Anne Griffin Foundation.

Disclosure Statement

No potential conflict of interest was reported by the authors.

Notes

1 Results for one of the treatment arms conducted in 2010–2012, Parent Academy, are reported in Fryer, Levitt, and List (2015). The preschool programs were conducted in 2012–2014, the period during which the children in this study were tested.

Additional information

Funding

This work was supported by the Institute of Educational Sciences [grant number R324A120033]; and the Kenneth and Anne Griffin Foundation [grant number xxx].

