Child Neuropsychology
A Journal on Normal and Abnormal Development in Childhood and Adolescence
Volume 24, 2018 - Issue 6
Original Articles

The association between the behavior rating inventory of executive functioning and cognitive testing in children diagnosed with a brain tumor

Pages 844-858 | Received 14 Oct 2016, Accepted 28 Jun 2017, Published online: 10 Jul 2017

ABSTRACT

Pediatric brain tumor survivors (PBTS) suffer from cognitive late effects, such as deteriorating executive functioning (EF). We explored the suitability of the Behavior Rating Inventory of Executive Function (BRIEF) to screen for these late effects. We assessed the relationship between the BRIEF and EF tasks, and between the BRIEF-Parent and BRIEF-Teacher, and we explored the clinical utility. Eighty-two PBTS (8–18 years) were assessed with EF tasks measuring attention, cognitive flexibility, inhibition, visual memory, and working memory (WM), and with the BRIEF-Parent and BRIEF-Teacher. Pearson’s correlations between the BRIEF and EF tasks, and between the BRIEF-Parent and BRIEF-Teacher, were calculated. The BRIEF-Parent related poorly to EF tasks (rs < .26, ps > .01), but the BRIEF-Teacher WM scale, Monitor scale, Behavioral Regulation Index, Meta-cognition Index, and Total score related significantly to some EF tasks (rs > .31, ps < .01). When controlling for age, only the WM scale and Total score related significantly to the attention task (ps < .01). The Inhibit scales of the BRIEF-Parent and BRIEF-Teacher correlated significantly (r = .33, p < .01). Children with clinically elevated scores on BRIEF scales that correlated with EF tasks performed worse on all EF tasks (ds 0.56–1.23, ps < .05). The BRIEF-Teacher Total and Index scores might screen general EF in PBTS better than the BRIEF-Parent. However, the BRIEF-Teacher is also not specific enough to capture separate EFs. Solely relying on the BRIEF as a screening measure of EFs in PBTS is insufficient. Questionnaires and tasks give distinctive, valuable information.

Survival rates of children who have been treated for a brain tumor have increased in the last decades due to improved diagnostic and neurosurgical techniques, radiation therapy and chemotherapy. The 5-year survival rate has risen from 57% (1975–1977) up to 74% (2004–2010) (Howlader et al., Citation2014). This increasing survival rate comes with a cost; childhood brain tumors and their treatment have long-lasting developmental consequences, particularly in the cognitive domain (De Ruiter, Van Mourik, Schouten-Van Meeteren, Grootenhuis, & Oosterlaan, Citation2013). Children who suffered from a brain tumor tend to “grow into deficit” (Anderson, Northam, & Wrennall, Citation2014); immediate cognitive impairments are followed by later impairments in higher-order cognitive functions as demands increase during childhood and into adulthood.

It is important that pediatric brain tumor survivors (PBTS) undergo cognitive assessment regularly to obtain information on the nature, severity, and course of cognitive consequences (Walsh et al., Citation2016). “Screening” protocols based on international requirements, as formulated by the Société Internationale d’Oncologie Pédiatrique and the Children’s Oncology Group, include comprehensive cognitive assessment. However, this is not always feasible due to shortages of time and resources. It might be preferable to conduct brief, repeated assessments (Walsh et al., Citation2016), or to screen all PBTS regularly with a questionnaire and extensively examine only those children who report cognitive problems. Such a screening questionnaire should measure the sequelae that PBTS suffer from and should correlate with cognitive tasks that measure these sequelae.

Tasks and questionnaires that measure cognitive functioning cannot simply replace each other. Cognitive tasks are designed to measure cognitive (dys)function, whereas questionnaires assess the consequences of the (dys)function in daily life. However, to reliably screen for cognitive (dys)function with a questionnaire, the outcomes of questionnaires should correlate to cognitive tasks for which psychometric properties have been determined.

Cognitive late effects of pediatric brain tumors vary widely depending on the location and grade of the brain tumor and on the received treatment (Janzen, Mabbott, & Guger, Citation2015). Profound difficulties are found in attention (De Ruiter et al., Citation2013; Janzen et al., Citation2015), processing speed, working memory (WM), and psychomotor skills (Janzen et al., Citation2015). More specifically, PBTS show deficits in executive functioning (EF) (Netson et al., Citation2016). Executive functions are those cognitive functions needed to adapt behavior to the varying demands of the environment; they comprise a multitude of cognitive functions, including WM, inhibition, and cognitive flexibility. The Dutch Behavior Rating Inventory of Executive Function (BRIEF) (Gioia, Isquith, Guy, & Kenworthy, Citation2000; Smidts & Huizinga, Citation2009) is a standardized questionnaire to measure daily life EF. Findings on the correlation between the BRIEF and EF tasks vary widely (0%–50% significant correlations, r = 0–.48) in both clinical (neurological or psychiatric) and healthy control groups (Toplak, West, & Stanovich, Citation2013). However, the BRIEF-WM parent-report scale correlates significantly with WM tasks in PBTS (Howarth et al., Citation2013). It is particularly interesting to study the relation between the BRIEF-Parent and BRIEF-Teacher because EF demands differ per context, such as home and school. Teachers might be more adept at EF evaluation than parents, as EF correlates with academic skills (Best, Miller, & Naglieri, Citation2011), and teaching includes evaluating academic skills. Correspondingly, BRIEF-Teacher scores appear strongly correlated with school functioning, while only small to medium-sized associations have been reported between BRIEF-Parent scores and school functioning (Smidts & Huizinga, Citation2009).

We aimed to study whether the BRIEF-Parent and/or BRIEF-Teacher are adequate screening instruments for EF problems in PBTS. To do so, we first studied the correlations between the BRIEF (Parent and Teacher) and a battery of EF tasks measuring attention, cognitive flexibility, inhibition, visual memory, and verbal working memory. A previous study found that PBTS show difficulties on these EF tasks (De Ruiter et al., Citation2015). Secondly, we examined the correlation and tested group differences between the scores on the BRIEF-Parent and the BRIEF-Teacher. Thirdly, we explored the clinical utility of the findings by testing whether children who obtained clinically elevated scores (T score > 65) on either the BRIEF-Parent or BRIEF-Teacher also scored worse on the EF tasks.

Methods

Participants

Eighty-two children (8–18 years) diagnosed with a brain tumor participated in the cohort of the PRISMA study. Inclusion criteria for the PRISMA study consisted of having finished treatment at least two years prior, being able (mentally and physically) to undergo cognitive assessment, and parent-reported cognitive problems (a score >14 on the attention scale of the Disruptive Behavior Disorders Rating Scale (Oosterlaan et al., Citation2008), or parents indicating at least two problems with respect to attention, memory, speed, and information processing). Exclusion criteria were a premorbid attention disorder diagnosis and not being fluent in Dutch. See Table 1 for demographic and medical details.

Table 1. Participants’ demographic and clinical characteristics.

Procedure

The data collection of the current cross-sectional study was part of the baseline assessment of a randomized controlled trial on neurofeedback for PBTS (De Ruiter et al., Citation2016). Data collection took place between 2010 and 2012. The study was approved by the Medical Ethical Committee of the Academic Medical Center (METC 09/137). Children were recruited through five Dutch academic hospitals (Emma Children’s Hospital/Academic Medical Center Amsterdam, the Vrije Universiteit Medical Center Amsterdam, the University Medical Center Utrecht, the Radboud University Medical Center Nijmegen, and the Maastricht University Medical Center). After receiving an information letter, parents were asked to fill out online screening questionnaires (see the “Participants” section) concerning inclusion and exclusion criteria. Informed consent was signed by the parents and children. Next, parents filled out the BRIEF online and indicated which teacher was most suitable to evaluate the child’s functioning; usually a mentor, or otherwise the teacher spending the most time with the child. Teachers were contacted by email and filled out the BRIEF online. Children’s EFs were tested in a quiet room by trained examiners, and tests were administered in a fixed order. The total testing time was approximately 150 min, and regular breaks were taken to avoid fatigue.

Measures

EF questionnaires

To measure EF problems in daily life, the Dutch parent and teacher versions of the BRIEF (Gioia et al., Citation2000) were administered. The BRIEF (75 items, 3-point Likert scale) includes a Total score and two indices, subdivided into eight subscales. The Behavioral Regulation Index consists of the subscales Inhibit, Shift, and Emotional Control. The Meta-Cognition Index consists of Initiate, WM, Plan/Organize, Organization of Materials, and Monitor. The BRIEF parent and teacher versions have good test–retest stability and high internal consistency in the general population (Cronbach’s αs BRIEF-Parent .78–.96; BRIEF-Teacher .88–.98 (Smidts & Huizinga, Citation2009)), as well as in PBTS (Cronbach’s αs BRIEF-Parent .66–.94; BRIEF-Teacher .82–.97 (de Ruiter et al., Citation2016)). The BRIEF is validated against the Child Behavior Checklist (Verhulst, Van Der Ende, & Koot, Citation1996), the Disruptive Behavior Disorders Rating Scale (Oosterlaan et al., Citation2008), and the Diagnostic Interview Schedule for Children (Ferdinand & Van Der Ende, Citation1998; Shaffer et al., Citation1993). Moreover, both convergent and divergent validity were adequate and in the expected direction (medium to large correlations with the Child Behavior Checklist (Verhulst et al., Citation1996) and the Disruptive Behavior Disorders Rating Scale (Pelham, Gnagy, Greenslade, & Milich, Citation1992)). The age- and gender-standardized T scores on the subscales, Index scores, and Total score were used as outcome measures. Higher scores indicate more problems. A T score >65 is considered a clinically elevated score (Smidts & Huizinga, Citation2009).

EF tasks

Attention and cognitive flexibility

To measure several components of attention, we administered an adapted version of the Attention Network Test (ANT) (Fan et al., Citation2002). The ANT measures three subdomains of attention: alerting (maintaining an alert state), orienting (selecting information from sensory input), and executive control (resolving conflict among responses) (Fan et al., Citation2002). In the current version (De Kieviet, Van Elburg, Lafeber, & Oosterlaan, Citation2012), on each trial, a picture of a ship appears on the right or the left side of the computer screen and the child has to respond by pressing the corresponding key. In the different trials, the ship is preceded by different cues. The different trial types measure the aforementioned subdomains of attention: (1) a palm tree appears before the ship appears (neutral cue); (2) no cue, only the ship appears (alerting; the participant has to stay alert without any triggers); (3) a parrot pointing in the direction where the ship will appear precedes the appearance of the ship (orienting cue); and (4) a parrot pointing in the opposite direction of where the ship will appear precedes the appearance of the ship (executive cue). The difference in reaction times between the neutral cue (1) and the other cues is used as a measure of the subdomains of attention. The ANT consists of 312 trials. The standard deviation (SD) divided by the mean reaction time of the alerting trials (2) is used as a measure of inattention (the intraindividual coefficient of variation [IICV]), and the reaction time on the executive trials is used as a measure of cognitive flexibility. The ANT can be considered a valid measure of the three different subdomains of attention, as the three corresponding distinct subcortical anatomical networks are activated during task performance (Fan, McCandliss, Fossella, Flombaum, & Posner, Citation2005). Higher scores indicate worse functioning.
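The IICV described above is simply the ratio of the SD to the mean of the alerting-trial reaction times. A minimal sketch (hypothetical data and function name, not from the study):

```python
from statistics import mean, stdev

def iicv(reaction_times_ms):
    """Intraindividual coefficient of variation: the SD of the
    alerting-trial reaction times divided by their mean. Higher
    values indicate more variable responding, i.e., worse attention."""
    return stdev(reaction_times_ms) / mean(reaction_times_ms)

# Hypothetical alerting-trial reaction times (ms) for one child
rts = [420, 380, 510, 445, 390, 600, 410, 430]
print(round(iicv(rts), 3))  # prints 0.163
```

Dividing by the mean makes the score comparable across children with different overall response speeds, which matters because processing speed itself is often affected in PBTS.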

Inhibition

To measure inhibition, an adaptation of the Stop Signal Task (Logan & Cowan, Citation1984) was administered. The Stop Signal Task is a cognitive inhibition task (as compared to a motivational inhibition task (Kindlon, Mezzacappa, & Earls, Citation1995)) that has medium to high temporal stability and discriminates well between typically developing children and children with an impulse control disorder. A picture of an airplane appears on the computer screen, facing left or right. The child has to press a corresponding key. In 25% of the trials, a cross on top of the airplane indicates that the response has to be inhibited (De Zeeuw et al., Citation2008). The time between stimulus and stop signal (stop signal delay) is adapted to the child’s performance throughout the task, to accomplish about 50% correct stop trials. The outcome measure is the stop-signal reaction time (SSRT); the time the child needs to inhibit his/her response. De Zeeuw et al. (Citation2008) explain that “SSRT was estimated using the race model in which the go process and the inhibitory process are conceived of as competing processes (see (Oosterlaan, Logan, & Sergeant, Citation1998) for more details). Whether a response will be executed or inhibited in a stop trial depends on which of these processes ‘wins’ the race. In the case in which 50% of stop trials result in successful stopping, the mean stop signal delay is where both the stop and the go processes have equal probability of ‘winning’ the race (i.e., the mean go process duration [mean reaction time] and the sum of the mean stop signal delay plus the duration of the stop process [SSRT] are approximately equal). It follows that SSRT can be calculated using the equation SSRT = mean reaction time – mean stop signal delay”. A higher score indicates worse inhibition.
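At the ~50% stopping point, the race model quoted above reduces to a single subtraction: SSRT = mean go reaction time − mean stop signal delay. A minimal sketch with hypothetical data (the function name is our own):

```python
from statistics import mean

def ssrt(go_rts_ms, stop_delays_ms):
    """Stop-signal reaction time under the race model, valid when
    roughly 50% of stop trials are successfully inhibited:
    SSRT = mean go reaction time - mean stop signal delay."""
    return mean(go_rts_ms) - mean(stop_delays_ms)

# Hypothetical go-trial reaction times and staircased stop delays (ms)
go_rts = [520, 480, 610, 550, 590]
delays = [250, 300, 275, 225, 250]
print(ssrt(go_rts, delays))  # prints 290
```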

Visual short-term memory

To measure visual short-term memory, the valid and reliable Visual Sequencing Task (Nutley, Söderqvist, Bryde, Humphreys, & Klingberg, Citation2010) was administered to the children. In a four-by-four grid, a sequence of dots lights up and has to be repeated on a touch screen. The length of the sequence ranges from two to nine, with four trials per sequence length, divided into two difficulty levels: two relatively simple trials and two relatively difficult trials. After an error on both trials of the same difficulty level, the task is terminated. The total number of correctly repeated sequences was the outcome measure; hence, a higher score indicates better visual memory.

Verbal (working) memory

Digit span from the WISC-III or WAIS-III (Kort et al., Citation2002; Wechsler, Citation1992, Citation2005) was administered to measure verbal WM. Digit span correlates significantly with other verbal WM tasks (Gathercole & Pickering, Citation2000). The participant is asked to verbally repeat sequences of digits (2–9 digits, increasing in length), first in the same order, and next in the reverse order. The task is terminated after two errors on a difficulty level. The total number of correctly repeated sequences (raw score) was used as the outcome measure. Higher scores indicate a better WM.

Statistical analyses

To control for possible non-response bias, we compared BRIEF-Parent, BRIEF-Teacher, and EF task scores of children whose data were complete with those of children whose data were incomplete using t-tests. For the main analyses, we first tested whether the BRIEF-Parent and BRIEF-Teacher correlated with the ANT-IICV (attention), Stop Signal Task SSRT (inhibition), Visual Sequencing (visual short-term memory), Digit Span (WM), and ANT-Executive (cognitive flexibility). We were particularly interested in the Pearson’s correlations between BRIEF scales and EF tasks that aim to measure the same construct: (1) the Inhibit scale and the Stop Signal Task, (2) the Shift scale and the ANT-Executive, (3) the WM scale and the Visual Sequencing and Digit Span tasks, and (4) the Monitor scale and the ANT-IICV. Moreover, we were interested in the Pearson’s correlations of the BRIEF-Parent and BRIEF-Teacher Index and Total scores with the EF tasks. We additionally explored whether controlling for age would influence the findings.
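The statistic relied on throughout is Pearson’s product-moment correlation: the covariance of two score lists scaled by the product of their SDs. A self-contained sketch (illustrative only, not the authors’ analysis code):

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson's product-moment correlation between two equal-length
    score lists, e.g., a BRIEF scale and an EF task outcome."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x)
               * sum((b - my) ** 2 for b in y))
    return num / den

# Perfectly linear hypothetical scores give r = 1.0
print(pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))  # prints 1.0
```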

Secondly, we compared the BRIEF-Parent subscale, Index, and Total scores with the BRIEF-Teacher subscale, Index, and Total scores using Pearson’s correlations. In addition, a repeated measures analysis of variance was performed to study the difference between parent and teacher reports while correcting the analyses for dependency of the parent and teacher reports.

Considering the large number of comparisons, we set the alpha at p ≤ .01 to control for Type I errors.

Finally, we explored the clinical utility of the findings with independent sample t-tests. We tested whether children who showed any clinically elevated score (T score > 65) on the Index or Total scales of either the BRIEF-Parent or the BRIEF-Teacher also performed worse on the EF tasks than children without any clinically elevated score. Moreover, we studied EF task performance of children with versus children without T scores > 65 on those BRIEF scales that correlated significantly with the EF tasks. For these explorative analyses, we did not correct for multiple testing.
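The ds reported for these group comparisons are Cohen’s d effect sizes. One common formulation uses the pooled SD; the paper does not state which variant was computed, so the sketch below is illustrative only:

```python
from math import sqrt
from statistics import mean, variance

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled
    (sample-size weighted) standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * variance(group_a)
                  + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / sqrt(pooled_var)

# Hypothetical EF task scores for clinical vs. non-clinical groups
print(cohens_d([1, 2, 3], [4, 5, 6]))  # prints -3.0
```

Conventionally, |d| ≈ 0.5 is a medium and |d| ≈ 0.8 a large effect (Cohen, 1988), which is why the reported range of 0.56–1.23 is described as medium to large.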

Results

Response

BRIEF teacher ratings were missing for nine children, and five children did not complete the Stop Signal Task (De Ruiter et al., Citation2015). Children with missing teacher data for the BRIEF scored significantly worse on the BRIEF-Parent Monitor scale (t = 2.8, p = .01), and children who did not complete the Stop Signal Task scored worse on the ANT-IICV (t = 2.5, p = .02) and Digit Span (t = −2.3, p = .03).

Correlation of BRIEF-Parent and BRIEF-Teacher with EF tasks

Neither the BRIEF-Parent Total score, Index scores, nor subscales were significantly related to any of the EF tasks (rs −.24 to .26, ps > .01).

With respect to the expected relations between BRIEF scales and EF tasks that aim to measure the same construct, the BRIEF-Teacher Inhibit scale was not significantly related to the Stop Signal Task and the Shift scale was not significantly related to the ANT-Executive (ps > .01), but the WM scale was significantly related to the Visual Sequencing (r = −.31) and Digit Span (r = −.36), and the Monitor scale was significantly related to the ANT-IICV (r = .35). The Behavioral Regulation Index and the Total scale were significantly related to the ANT-IICV and Digit Span (rs > .31). The Meta-cognition Index was significantly related to the ANT-IICV (r = .41) and the Stop Signal Task (r = .31). Notably, the ANT-IICV was significantly related to all BRIEF-Teacher scales except Shift, Emotional Control, and Organization of Materials; and the BRIEF-Teacher WM scale was significantly related to all tasks except the ANT-Executive. See Table 2 for all correlations. (Note that all EF tasks were also significantly interrelated, rs > .35, ps < .01.) When controlling for age, only the correlations between the ANT-IICV and the BRIEF-Teacher WM and Total scales remain significant (ps < .01; see Supplementary Table S1).

Table 2. Correlation between questionnaires and tasks measuring neurocognitive functioning.

Relation between scores on the BRIEF-Parent and BRIEF-Teacher

Only the Inhibit scales of the BRIEF-Parent and BRIEF-Teacher were significantly related (r = .33, p < .01). The BRIEF-Parent and BRIEF-Teacher scores did not significantly differ on any of the subscales, Indices, or the Total scale. See Table 3 for all results.

Table 3. Comparison and correlation of BRIEF-Parent and BRIEF-Teacher T scores.

EF task performance of children with clinical versus non-clinical scores on the BRIEF

We compared children who had a clinically elevated score (T > 65) on one or more of the BRIEF-Parent or BRIEF-Teacher Index scores or Total score with children who did not have a clinically elevated score on any of these scales. There were no significant differences in EF task performance between children with and without a clinical score (see Table 4).

Table 4. Comparison of EF task performance of children with and without clinical scores on the BRIEF-Parent or BRIEF-Teacher Total or Index scores.

Based on the correlational findings (Table 2), we explored whether children who had clinically elevated scores (T > 65) on the BRIEF-Teacher WM scale, Index scores, or Total score (the scales that correlated best with the EF tasks) differed in EF task performance from children who did not have clinically elevated scores on these scales. Children with a clinically elevated score performed significantly worse on all EF tasks than children without such scores (ds 0.56–1.23, ps < .05; see Table 4).

Conclusions

We studied the correlation between parent- and teacher-rated EF problems and performance on experimental yet appropriate neuropsychological tasks designed to assess core EFs, on which PBTS have previously shown difficulties compared to a control group (De Ruiter et al., Citation2015).

The aim of our study was to explore whether EF in PBTS can be reliably screened by means of parent or teacher questionnaires, which are less expensive and therefore more feasible than extensive neuropsychological assessment. Parents’ ratings of EF problems were poorly correlated with the EF test performances, but the teachers’ ratings showed several significant correlations of medium size, although many correlations were no longer significant after controlling for age. Correlations between test performances and questionnaire ratings were highest for the compound Total score of the BRIEF-Teacher. Parents’ and teachers’ ratings were poorly interrelated, though the means of their ratings did not differ significantly. Interestingly, children who showed clinically elevated scores on the BRIEF-Teacher WM scale, Index scores, and/or Total score performed significantly worse on all EF tasks than children who did not show clinically elevated scores on these scales.

The lack of significant relations between parent-rated EFs and EF tasks is notable, particularly as parents appear to report more EF problems compared to the normative population (de Ruiter et al., Citation2016). One might expect that parents of PBTS could evaluate their child’s EF problems relatively well, as there is often a considerable difference in EF from before diagnosis to after treatment. In contrast, parents of children with a heritable condition, such as attention deficit hyperactivity disorder (ADHD) (Faraone & Doyle, Citation2000), might consider EF problems less remarkable, as these problems might have existed since a young age and are often also seen in other family members. There are a number of explanations for the current poor relation between parent-rated EF problems and children’s performance on EF tasks. Firstly, as all currently studied children had parent-reported cognitive problems (such as affected attention, memory, speed, and/or information processing, or a clinical score on the attention scale of the DBD-RS), the variance in the parent ratings might have been too low to find correlations with other measures. However, this explanation seems unlikely, as the SDs of the BRIEF in the current study (7.99–12.01) are in line with the BRIEF T-score SD (10). Secondly, parents of PBTS might condone deviant behavior more easily. This alteration in internal standards, known as response shift (Schwartz, Andresen, Nosek, & Krahn, Citation2007; Schwartz et al., Citation2006), might make parents less apt to rate their child’s EF reliably. On the other hand, teachers might also be prone to response shift, as they often know what illness and treatment the child has been through. Finally, besides the advantages of computerized measures (such as ruling out the influence of the assessor (Schatz & Browndyke, Citation2002) and precise, standardized, and reliable measurement of EF), EF tasks may lack ecological validity compared to questionnaires. However, this also applies to teacher reports. The current findings confirm that experienced EF problems in daily life, particularly in the home situation, do not necessarily derive directly from an assumed underlying cognitive (dys)function as measured with an EF task.

The relation between teacher EF ratings and EF tasks was somewhat stronger. Teachers might be more competent to rate a child’s behavior, as they have other children in the classroom to compare the child with. This might particularly be true in the current sample, where 24% of the children received special education, with relatively small classes and specialized teachers (although these teachers might be more familiar with problem behavior, as expressed by other children within special education, and their evaluation might be biased). Moreover, teachers observe the child in an academic environment, and specific cognitive functions, such as focused attention, might largely influence schoolwork, but might not notably influence behavior in the home situation. Indeed, the BRIEF-Teacher more strongly relates to school functioning than the BRIEF-Parent (Smidts & Huizinga, Citation2009), and the discrepancy between the parent and teacher reports might actually be meaningful (De Los Reyes, Thomas, Goodman, & Kundey, Citation2013).

There are some notable patterns in the BRIEF-Teacher. Firstly, many subscales of the BRIEF-Teacher were related to the attention task, even though the BRIEF does not include a specific attention scale (the use of the Monitor scale as a measure of attention is hence debatable). The current findings suggest that EF problems as reported by teachers might largely be influenced by a child’s attention abilities. Secondly, the BRIEF-Teacher WM scale was related to several EF tasks; this scale might therefore be considered a broader EF measure, or daily experienced WM might be influenced by several underlying EFs. Thirdly, children with clinically elevated scores on the WM scale, Index scores, or Total score performed worse on all EF tasks with medium to large effect sizes (note that this was not found with the BRIEF-Parent). This suggests that the BRIEF-Teacher might be useful to screen for EF problems, although the explorative character of these findings requires replication.

Different informants give distinct information, and the choice of informant might depend on the goal of an assessment. The current findings suggest that EFs might be better assessed by teachers than by parents, but that teachers’ reports alone are also not sufficiently informative to screen for EF dysfunctions. An important next step is therefore to evaluate whether parents’ or teachers’ reports of clinically elevated EF difficulties can predict clinically elevated scores on normed EF tests. Importantly, parent ratings should not be ruled out. As the BRIEF-Parent might not be the best measure of EFs in PBTS, alternative questionnaires could be explored to tap EF at home. Questionnaires that could be explored as possible screeners are the Pediatric Perceived Cognitive Functioning (Peds-PCF) parent-report or self-report (Lai et al., Citation2011), the Childhood Executive Functioning Inventory (CHEXI) parent report (Thorell & Nyberg, Citation2008), and the Subjective Awareness of Neuropsychological Deficits for Children (SAND-C) self-report (Hufford & Fastenau, Citation2005). The Peds-PCF was specifically developed to measure cognitive difficulties in pediatric populations, such as PBTS (Lai et al., Citation2011), and might therefore better suit the PBTS population. The CHEXI aims to measure EFs in isolation and avoids measuring ADHD symptoms alongside EF, in contrast to the BRIEF, in which several items reflect ADHD symptoms (Thorell & Nyberg, Citation2008). Self-reports might also give important, distinctive information. As Achenbach, Edelbrock, and Howell (Citation1987) and Toplak et al. (Citation2013) suggested, it is important to consult multiple informants to get a good view of the behavior and cognitive functioning of children or adolescents.

There are some caveats in the current study. Firstly, we only included participants with parent-reported cognitive problems; hence, the current findings are not generalizable to all PBTS. Secondly, we used experimental EF tasks to get clear information on isolated EFs. Using clinically normed EF tests might give important additional information, i.e., whether a clinical score on the BRIEF also indicates a clinical test score as compared to children of the same sex and age. Thirdly, we did not include children’s self-report of EFs (the BRIEF Self-report was not yet available during data collection). Finally, we did not have background information about the teachers (intensity and frequency of interaction with the child, teaching experience, knowledge about EF), and 24% of the children attended special education. These factors might influence teacher reports.

The current findings suggest that the BRIEF-Teacher and BRIEF-Parent are not sufficiently specific to screen for EF deficits in PBTS. Although the BRIEF can give valuable information regarding daily life, the BRIEF-Parent is an unreliable predictor of EF task performance in PBTS; a subclinical BRIEF-Parent score does not indicate that there are no EF problems. The Total and Index scores of the BRIEF-Teacher might be slightly better at capturing EF task performance, but the BRIEF-Teacher is not specific enough to capture separate EFs based on its subscales. In sum, solely relying on the BRIEF-Teacher and BRIEF-Parent as screening measures of EFs in PBTS is insufficient. Alternative questionnaires should be explored, and individual responder characteristics (e.g., the background of the teacher) need to be studied. Importantly, the current findings suggest that questionnaires and tasks both give distinctive, valuable information, and that structured screening of PBTS should include both questionnaires and tasks to get a complete picture.


Acknowledgment

Research was conducted in the Psychosocial Department of Emma Children’s Hospital AMC, Amsterdam, The Netherlands. The authors would like to thank Juliette Greidanus, Zeliha Pekcan, Leontine Stolk, Dora Csermak, Arnout Smit, and Sander Schippers for their contribution to the data collection for the study. Finally, we wish to thank all participating parents and patients for their cooperation.

Disclosure statement

No potential conflict of interest was reported by the authors.

Supplemental data

Supplemental data for this article can be accessed here.

Additional information

Funding

This work was supported by the Dutch Cancer Society KWF Kankerbestrijding under grant number UVA 2008-4013; and the Tom Voûte Fund, Part of the Dutch Children Cancer Free Foundation, (KiKa) under grant number SKK-PRISMA.
